Nov 25 14:52:46 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 25 14:52:46 crc restorecon[4688]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 14:52:46 crc restorecon[4688]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 14:52:46 crc restorecon[4688]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 14:52:46 crc restorecon[4688]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 14:52:46 crc restorecon[4688]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: 
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 14:52:46 crc restorecon[4688]:
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 14:52:46 crc restorecon[4688]:
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: 
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 
14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:52:46 crc 
restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:52:46 crc restorecon[4688]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Nov 25 14:52:46 crc restorecon[4688]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 25 14:52:46 crc restorecon[4688]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 14:52:46 crc restorecon[4688]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 
14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:46 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:52:47 crc restorecon[4688]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 14:52:47 crc restorecon[4688]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 25 14:52:47 crc restorecon[4688]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 25 14:52:47 crc kubenswrapper[4806]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 25 14:52:47 crc kubenswrapper[4806]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 25 14:52:47 crc kubenswrapper[4806]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 25 14:52:47 crc kubenswrapper[4806]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
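[Editor's note] The long run of "not reset as customized by admin" messages above is expected rather than an error: container_file_t is a customizable SELinux type in the targeted policy, so restorecon deliberately leaves any file already carrying it (including the per-pod MCS category pairs such as s0:c7,c13) alone unless a forced relabel is requested. A minimal sketch for inspecting these labels and, if really needed, forcing a path back to the policy default; the paths shown are illustrative, not taken from this host:

    # Show the context the policy would assign to a path
    matchpathcon /var/lib/kubelet/plugins/csi-hostpath/csi.sock

    # List only the local admin customizations that restorecon honors
    semanage fcontext -l -C

    # Force a relabel even for customizable types (-F); use with care on a
    # running node, since the kubelet re-applies MCS categories per pod
    restorecon -R -F -v /var/lib/kubelet/plugins

Without -F, restorecon skips customizable types entirely, which is exactly what the lines above record.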
Nov 25 14:52:47 crc kubenswrapper[4806]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 25 14:52:47 crc kubenswrapper[4806]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.728183 4806 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733642 4806 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733660 4806 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733666 4806 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733670 4806 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733674 4806 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733679 4806 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733683 4806 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733687 4806 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733691 4806 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733694 4806 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733698 4806 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733702 4806 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733706 4806 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733711 4806 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733716 4806 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733721 4806 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733725 4806 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733730 4806 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733735 4806 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733739 4806 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733750 4806 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733754 4806 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733758 4806 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733762 4806 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733766 4806 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733770 4806 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733773 4806 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733777 4806 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733780 4806 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733784 4806 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733787 4806 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733790 4806 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733794 4806 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733798 4806 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733801 4806 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733805 4806 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733809 4806 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733813 4806 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733816 4806 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733820 4806 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733823 4806 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733827 4806 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733830 4806 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733834 4806 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733837 4806 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733841 4806 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733844 4806 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733847 4806 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733852 4806 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733856 4806 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733860 4806 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733865 4806 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733869 4806 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733873 4806 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733878 4806 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733882 4806 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733886 4806 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733890 4806 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733894 4806 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733898 4806 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733902 4806 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733907 4806 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733912 4806 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733916 4806 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733920 4806 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733923 4806 feature_gate.go:330] unrecognized feature gate: Example
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733927 4806 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733930 4806 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733934 4806 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733938 4806 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.733941 4806 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.735786 4806 flags.go:64] FLAG: --address="0.0.0.0"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.735821 4806 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.735834 4806 flags.go:64] FLAG: --anonymous-auth="true"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.735840 4806 flags.go:64] FLAG: --application-metrics-count-limit="100"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.735846 4806 flags.go:64] FLAG: --authentication-token-webhook="false"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.735851 4806 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.735858 4806 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.735863 4806 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.735867 4806 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.735872 4806 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.735877 4806 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.735881 4806 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.735885 4806 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.735890 4806 flags.go:64] FLAG: --cgroup-root=""
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.735894 4806 flags.go:64] FLAG: --cgroups-per-qos="true"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.735898 4806 flags.go:64] FLAG: --client-ca-file=""
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.735903 4806 flags.go:64] FLAG: --cloud-config=""
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.735907 4806 flags.go:64] FLAG: --cloud-provider=""
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.735911 4806 flags.go:64] FLAG: --cluster-dns="[]"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.735920 4806 flags.go:64] FLAG: --cluster-domain=""
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.735924 4806 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.735929 4806 flags.go:64] FLAG: --config-dir=""
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.735933 4806 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.735937 4806 flags.go:64] FLAG: --container-log-max-files="5"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.735947 4806 flags.go:64] FLAG: --container-log-max-size="10Mi"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.735951 4806 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.735956 4806 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.735960 4806 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.735964 4806 flags.go:64] FLAG: --contention-profiling="false"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.735968 4806 flags.go:64] FLAG: --cpu-cfs-quota="true"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.735972 4806 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.735977 4806 flags.go:64] FLAG: --cpu-manager-policy="none"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.735981 4806 flags.go:64] FLAG: --cpu-manager-policy-options=""
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.735986 4806 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.735991 4806 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.735998 4806 flags.go:64] FLAG: --enable-debugging-handlers="true"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736004 4806 flags.go:64] FLAG: --enable-load-reader="false"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736010 4806 flags.go:64] FLAG: --enable-server="true"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736017 4806 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736029 4806 flags.go:64] FLAG: --event-burst="100"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736042 4806 flags.go:64] FLAG: --event-qps="50"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736046 4806 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736051 4806 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736057 4806 flags.go:64] FLAG: --eviction-hard=""
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736064 4806 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736070 4806 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736075 4806 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736081 4806 flags.go:64] FLAG: --eviction-soft=""
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736086 4806 flags.go:64] FLAG: --eviction-soft-grace-period=""
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736091 4806 flags.go:64] FLAG: --exit-on-lock-contention="false"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736096 4806 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736102 4806 flags.go:64] FLAG: --experimental-mounter-path=""
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736107 4806 flags.go:64] FLAG: --fail-cgroupv1="false"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736112 4806 flags.go:64] FLAG: --fail-swap-on="true"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736117 4806 flags.go:64] FLAG: --feature-gates=""
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736123 4806 flags.go:64] FLAG: --file-check-frequency="20s"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736128 4806 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736133 4806 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736139 4806 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736144 4806 flags.go:64] FLAG: --healthz-port="10248"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736149 4806 flags.go:64] FLAG: --help="false"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736155 4806 flags.go:64] FLAG: --hostname-override=""
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736162 4806 flags.go:64] FLAG: --housekeeping-interval="10s"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736167 4806 flags.go:64] FLAG: --http-check-frequency="20s"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736172 4806 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736177 4806 flags.go:64] FLAG: --image-credential-provider-config=""
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736182 4806 flags.go:64] FLAG: --image-gc-high-threshold="85"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736187 4806 flags.go:64] FLAG: --image-gc-low-threshold="80"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736191 4806 flags.go:64] FLAG: --image-service-endpoint=""
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736196 4806 flags.go:64] FLAG: --kernel-memcg-notification="false"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736201 4806 flags.go:64] FLAG: --kube-api-burst="100"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736206 4806 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736210 4806 flags.go:64] FLAG: --kube-api-qps="50"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736215 4806 flags.go:64] FLAG: --kube-reserved=""
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736220 4806 flags.go:64] FLAG: --kube-reserved-cgroup=""
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736224 4806 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736237 4806 flags.go:64] FLAG: --kubelet-cgroups=""
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736242 4806 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736247 4806 flags.go:64] FLAG: --lock-file=""
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736251 4806 flags.go:64] FLAG: --log-cadvisor-usage="false"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736256 4806 flags.go:64] FLAG: --log-flush-frequency="5s"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736261 4806 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736269 4806 flags.go:64] FLAG: --log-json-split-stream="false"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736273 4806 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736278 4806 flags.go:64] FLAG: --log-text-split-stream="false"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736282 4806 flags.go:64] FLAG: --logging-format="text"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736287 4806 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736292 4806 flags.go:64] FLAG: --make-iptables-util-chains="true"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736297 4806 flags.go:64] FLAG: --manifest-url=""
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736302 4806 flags.go:64] FLAG: --manifest-url-header=""
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736325 4806 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736331 4806 flags.go:64] FLAG: --max-open-files="1000000"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736337 4806 flags.go:64] FLAG: --max-pods="110"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736342 4806 flags.go:64] FLAG: --maximum-dead-containers="-1"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736348 4806 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736353 4806 flags.go:64] FLAG: --memory-manager-policy="None"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736359 4806 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736364 4806 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736369 4806 flags.go:64] FLAG: --node-ip="192.168.126.11"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736374 4806 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736387 4806 flags.go:64] FLAG: --node-status-max-images="50"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736392 4806 flags.go:64] FLAG: --node-status-update-frequency="10s"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736397 4806 flags.go:64] FLAG: --oom-score-adj="-999"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736402 4806 flags.go:64] FLAG: --pod-cidr=""
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736407 4806 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736416 4806 flags.go:64] FLAG: --pod-manifest-path=""
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736421 4806 flags.go:64] FLAG: --pod-max-pids="-1"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736427 4806 flags.go:64] FLAG: --pods-per-core="0"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736433 4806 flags.go:64] FLAG: --port="10250"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736438 4806 flags.go:64] FLAG: --protect-kernel-defaults="false"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736443 4806 flags.go:64] FLAG: --provider-id=""
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736448 4806 flags.go:64] FLAG: --qos-reserved=""
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736463 4806 flags.go:64] FLAG: --read-only-port="10255"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736468 4806 flags.go:64] FLAG: --register-node="true"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736474 4806 flags.go:64] FLAG: --register-schedulable="true"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736479 4806 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736489 4806 flags.go:64] FLAG: --registry-burst="10"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736494 4806 flags.go:64] FLAG: --registry-qps="5"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736499 4806 flags.go:64] FLAG: --reserved-cpus=""
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736504 4806 flags.go:64] FLAG: --reserved-memory=""
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736512 4806 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736518 4806 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736523 4806 flags.go:64] FLAG: --rotate-certificates="false"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736529 4806 flags.go:64] FLAG: --rotate-server-certificates="false"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736533 4806 flags.go:64] FLAG: --runonce="false"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736539 4806 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736544 4806 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736551 4806 flags.go:64] FLAG: --seccomp-default="false"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736557 4806 flags.go:64] FLAG: --serialize-image-pulls="true"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736562 4806 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736567 4806 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736572 4806 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736577 4806 flags.go:64] FLAG: --storage-driver-password="root"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736582 4806 flags.go:64] FLAG: --storage-driver-secure="false"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736587 4806 flags.go:64] FLAG: --storage-driver-table="stats"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736591 4806 flags.go:64] FLAG: --storage-driver-user="root"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736596 4806 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736601 4806 flags.go:64] FLAG: --sync-frequency="1m0s"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736605 4806 flags.go:64] FLAG: --system-cgroups=""
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736610 4806 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736618 4806 flags.go:64] FLAG: --system-reserved-cgroup=""
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736623 4806 flags.go:64] FLAG: --tls-cert-file=""
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736627 4806 flags.go:64] FLAG: --tls-cipher-suites="[]"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736639 4806 flags.go:64] FLAG: --tls-min-version=""
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736644 4806 flags.go:64] FLAG: --tls-private-key-file=""
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736649 4806 flags.go:64] FLAG: --topology-manager-policy="none"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736654 4806 flags.go:64] FLAG: --topology-manager-policy-options=""
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736659 4806 flags.go:64] FLAG: --topology-manager-scope="container"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736671 4806 flags.go:64] FLAG: --v="2"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736680 4806 flags.go:64] FLAG: --version="false"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736687 4806 flags.go:64] FLAG: --vmodule=""
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736721 4806 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.736728 4806 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.736880 4806 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.736890 4806 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.736895 4806 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.736899 4806 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.736906 4806 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.736911 4806 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.736916 4806 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.736921 4806 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.736926 4806 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.736930 4806 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.736936 4806 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
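Editor's note: the flags.go:64 block above is the kubelet's dump of every flag's effective value. A small parser (a sketch; the regex and function name are mine) turns a journal excerpt into a dict, which is handy for diffing a node's effective flags against its config file.

import re

# Matches records like:
#   I1125 14:52:47.735840 4806 flags.go:64] FLAG: --anonymous-auth="true"
FLAG_RE = re.compile(r'flags\.go:\d+\] FLAG: (--[\w.-]+)="([^"]*)"')

def parse_flag_dump(journal_text: str) -> dict:
    """Collect the kubelet's effective flag values from a journal excerpt."""
    return {name: value for name, value in FLAG_RE.findall(journal_text)}

sample = ('I1125 14:52:47.736369 4806 flags.go:64] FLAG: --node-ip="192.168.126.11" '
          'I1125 14:52:47.736337 4806 flags.go:64] FLAG: --max-pods="110"')
assert parse_flag_dump(sample) == {"--node-ip": "192.168.126.11", "--max-pods": "110"}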
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.736942 4806 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.736947 4806 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.736952 4806 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.736957 4806 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.736961 4806 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.736966 4806 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.736971 4806 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.736977 4806 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.736981 4806 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.736987 4806 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.736993 4806 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.736998 4806 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.737003 4806 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.737008 4806 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.737012 4806 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.737017 4806 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.737022 4806 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.737028 4806 feature_gate.go:330] unrecognized feature gate: Example
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.737034 4806 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.737040 4806 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.737054 4806 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.737060 4806 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.737065 4806 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.737070 4806 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.737075 4806 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.737080 4806 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.737085 4806 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.737090 4806 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.737094 4806 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.737099 4806 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.737103 4806 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.737108 4806 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.737113 4806 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.737119 4806 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.737125 4806 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.737129 4806 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.737133 4806 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.737138 4806 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.737142 4806 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.737146 4806 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.737151 4806 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.737156 4806 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.737161 4806 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.737166 4806 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.737171 4806 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.737176 4806 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.737181 4806 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.737185 4806 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.737190 4806 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.737195 4806 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.737200 4806 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.737204 4806 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.737209 4806 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.737214 4806 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.737218 4806 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.737223 4806 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.737239 4806 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.737244 4806 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.737249 4806 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.737254 4806 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.737268 4806 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.748056 4806 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.748132 4806 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748229 4806 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748240 4806 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748247 4806 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748252 4806 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748258 4806 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748264 4806 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748269 4806 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748276 4806 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748281 4806 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748286 4806 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748295 4806 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748303 4806 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748346 4806 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748352 4806 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748359 4806 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
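Editor's note: the feature_gate.go:386 record above is the authoritative summary of the resolved gates; Go's fmt package prints the map as {map[Name:bool ...]}. A sketch for pulling that record into a Python dict (the function name and regex are mine):

import re

def parse_feature_gates(record: str) -> dict:
    """Parse a 'feature gates: {map[Name:bool ...]}' record into a dict."""
    body = re.search(r'feature gates: \{map\[(.*)\]\}', record)
    if body is None:
        return {}
    gates = {}
    for pair in body.group(1).split():
        name, _, value = pair.partition(":")
        gates[name] = value == "true"
    return gates

record = ("feature gates: {map[CloudDualStackNodeIPs:true "
          "DisableKubeletCloudCredentialProviders:true KMSv1:true "
          "NodeSwap:false ValidatingAdmissionPolicy:true]}")
assert parse_feature_gates(record)["NodeSwap"] is False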
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748370 4806 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748376 4806 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748382 4806 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748387 4806 feature_gate.go:330] unrecognized feature gate: Example
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748393 4806 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748398 4806 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748405 4806 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748411 4806 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748417 4806 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748422 4806 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748427 4806 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748432 4806 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748438 4806 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748443 4806 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748448 4806 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748453 4806 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748459 4806 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748465 4806 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748470 4806 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748477 4806 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748482 4806 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748488 4806 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748493 4806 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748499 4806 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748504 4806 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748509 4806 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748514 4806 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748521 4806 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748530 4806 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748538 4806 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748546 4806 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748552 4806 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748559 4806 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748567 4806 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748573 4806 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748580 4806 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748587 4806 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748594 4806 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748602 4806 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748609 4806 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748616 4806 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748623 4806 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748630 4806 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748637 4806 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748642 4806 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748648 4806 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748653 4806 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748659 4806 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748665 4806 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748672 4806 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748678 4806 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748684 4806 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748689 4806 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748695 4806 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748733 4806 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748741 4806 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.748752 4806 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748925 4806 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748937 4806 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748944 4806 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748952 4806 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748961 4806 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748967 4806 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748973 4806 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748979 4806 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748985 4806 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748992 4806 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.748997 4806 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749002 4806 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749007 4806 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749014 4806 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749019 4806 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749026 4806 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749033 4806 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749039 4806 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749045 4806 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749050 4806 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749055 4806 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749060 4806 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749066 4806 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749071 4806 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749077 4806 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749082 4806 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749087 4806 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749092 4806 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749098 4806 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749103 4806 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749108 4806 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749114 4806 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749119 4806 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749124 4806 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749130 4806 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749135 4806 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749140 4806 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749145 4806 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749151 4806 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749156 4806 feature_gate.go:330] unrecognized feature gate: Example
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749161 4806 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749166 4806 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749172 4806 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749178 4806 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749183 4806 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749188 4806 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749193 4806 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749198 4806 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749203 4806 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749208 4806 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749213 4806 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749218 4806 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749223 4806 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749229 4806 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749235 4806 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749242 4806 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
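Editor's note: the unrecognized-gate warnings repeat because the gate set is evidently parsed several times during startup (the .733*, .736*, .748* and .749* timestamp clusters); the rejected names are OpenShift-side gates that the embedded Kubernetes gate registry does not know, so each pass logs the same set. A quick tally makes that structure visible (a sketch; names are mine):

import re
from collections import Counter

UNRECOGNIZED = re.compile(r"unrecognized feature gate: (\w+)")

def tally_unrecognized(journal_text: str) -> Counter:
    """Count how often each gate name is rejected across parsing passes."""
    return Counter(UNRECOGNIZED.findall(journal_text))

# On a boot like this one, tally_unrecognized(text).most_common(3) should
# report each gate name the same number of times: one rejection per pass.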
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749250 4806 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749257 4806 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749262 4806 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749268 4806 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749273 4806 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749279 4806 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749284 4806 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749290 4806 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749295 4806 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749300 4806 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749305 4806 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749311 4806 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749534 4806 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749543 4806 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.749551 4806 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.749563 4806 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.749867 4806 server.go:940] "Client rotation is on, will bootstrap in background"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.762698 4806 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.762908 4806 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
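Editor's note: certificate_store.go loads the client pair from kubelet-client-current.pem; to check the same expiry the certificate manager logs next, you can ask openssl directly (a sketch; the helper name is mine, and openssl must be on PATH):

import subprocess

def cert_not_after(path="/var/lib/kubelet/pki/kubelet-client-current.pem"):
    """Return the certificate's notAfter date via the openssl CLI, the same
    expiry the kubelet's certificate manager reports at startup."""
    out = subprocess.run(["openssl", "x509", "-in", path, "-noout", "-enddate"],
                         check=True, capture_output=True, text=True).stdout
    # openssl prints e.g.: notAfter=Feb 24 05:52:08 2026 GMT
    return out.strip().partition("=")[2]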
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.764925 4806 server.go:997] "Starting client certificate rotation" Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.764973 4806 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.766164 4806 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-19 15:46:32.344646094 +0000 UTC Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.766294 4806 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 576h53m44.578354981s for next certificate rotation Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.809842 4806 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.811974 4806 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.840539 4806 log.go:25] "Validated CRI v1 runtime API" Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.898328 4806 log.go:25] "Validated CRI v1 image API" Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.902093 4806 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.916579 4806 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-25-14-47-52-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.916637 4806 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.953497 4806 manager.go:217] Machine: {Timestamp:2025-11-25 14:52:47.935805352 +0000 UTC m=+0.587947803 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654112256 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:c9d50cce-a734-4456-ad77-ec687d096f9d BootID:e0f8c346-5c5c-4b9e-86cb-75f930f1dadc Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827056128 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827056128 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 
Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108168 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:fa:4b:1b Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:fa:4b:1b Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:e2:d4:0b Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:21:b3:3b Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:4c:e0:55 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:71:f5:24 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:06:75:c8:55:43:19 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:8e:ca:01:e3:d0:80 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654112256 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: 
DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.953937 4806 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.954261 4806 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.955948 4806 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.956158 4806 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.956198 4806 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.956476 4806 topology_manager.go:138] "Creating topology manager with none policy" Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.956489 4806 container_manager_linux.go:303] "Creating device plugin manager" Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.957076 4806 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.957113 4806 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 25 
14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.958085 4806 state_mem.go:36] "Initialized new in-memory state store" Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.958194 4806 server.go:1245] "Using root directory" path="/var/lib/kubelet" Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.965217 4806 kubelet.go:418] "Attempting to sync node with API server" Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.965248 4806 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.965280 4806 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.965296 4806 kubelet.go:324] "Adding apiserver pod source" Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.965326 4806 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.976685 4806 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused Nov 25 14:52:47 crc kubenswrapper[4806]: W1125 14:52:47.976689 4806 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused Nov 25 14:52:47 crc kubenswrapper[4806]: E1125 14:52:47.976851 4806 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" Nov 25 14:52:47 crc kubenswrapper[4806]: E1125 14:52:47.976879 4806 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.979465 4806 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.981083 4806 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
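
The client-certificate rotation deadline logged at 14:52:47 and the serving cert pair loaded just above are managed the same way: rather than waiting for expiry, the kubelet schedules renewal at a randomized point late in the certificate's validity window, so a fleet of nodes does not all renew at once. With the logged expiry of 2026-02-24 and a deadline of 2025-12-19, the deadline sits at roughly 82% of a one-year window, consistent with the upstream scheme of picking a uniform point between about 70% and 90% of the lifetime. A sketch of that deadline computation, with the jitter band and issue time stated as assumptions:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // nextRotationDeadline mimics the kubelet certificate manager: renew at
    // a random point in the (assumed) 70%-90% band of the certificate's
    // validity window.
    func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        jittered := time.Duration((0.7 + 0.2*rand.Float64()) * float64(total))
        return notBefore.Add(jittered)
    }

    func main() {
        notBefore := time.Date(2025, 2, 24, 5, 52, 8, 0, time.UTC) // hypothetical issue time
        notAfter := time.Date(2026, 2, 24, 5, 52, 8, 0, time.UTC)  // matches the logged expiry
        d := nextRotationDeadline(notBefore, notAfter)
        fmt.Println("rotation deadline:", d)
    }
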
Nov 25 14:52:47 crc kubenswrapper[4806]: I1125 14:52:47.986368 4806 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.008299 4806 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.008561 4806 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.008616 4806 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.008686 4806 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.008746 4806 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.008797 4806 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.008881 4806 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.008979 4806 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.009036 4806 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.009087 4806 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.009163 4806 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.009221 4806 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.010372 4806 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.010976 4806 server.go:1280] "Started kubelet" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.011102 4806 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused Nov 25 14:52:48 crc systemd[1]: Started Kubernetes Kubelet. 
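
The repeated "connection refused" failures against https://api-int.crc.testing:6443 in this window are expected: the kubelet starts before the API server it is about to launch from a static pod manifest, and its informers and the CSINode wait simply retry with backoff until the endpoint answers. A minimal sketch of that wait loop, using a plain TCP dial with capped exponential backoff (client-go's reflector backoff policy differs in detail):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForAPI dials the endpoint until it accepts a connection, doubling
    // the delay after each failure up to a cap - the same shape of retry the
    // kubelet performs while the API server is still coming up.
    func waitForAPI(addr string, maxDelay time.Duration) {
        delay := 100 * time.Millisecond
        for {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                fmt.Println("API server reachable at", addr)
                return
            }
            fmt.Printf("dial %s: %v; retrying in %s\n", addr, err, delay)
            time.Sleep(delay)
            if delay *= 2; delay > maxDelay {
                delay = maxDelay
            }
        }
    }

    func main() {
        waitForAPI("api-int.crc.testing:6443", 30*time.Second)
    }
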
Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.012788 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.012847 4806 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.012350 4806 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.012367 4806 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.013298 4806 volume_manager.go:287] "The desired_state_of_world populator starts" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.013325 4806 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.013373 4806 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 25 14:52:48 crc kubenswrapper[4806]: E1125 14:52:48.013381 4806 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.013259 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 02:02:28.532441846 +0000 UTC Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.013426 4806 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 947h9m40.519019478s for next certificate rotation Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.013924 4806 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 25 14:52:48 crc kubenswrapper[4806]: W1125 14:52:48.014370 4806 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused Nov 25 14:52:48 crc kubenswrapper[4806]: E1125 14:52:48.014482 4806 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" Nov 25 14:52:48 crc kubenswrapper[4806]: E1125 14:52:48.017649 4806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="200ms" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.018291 4806 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.018355 4806 factory.go:55] Registering systemd factory Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.018365 4806 factory.go:221] Registration of the systemd container factory successfully Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.018925 4806 factory.go:153] Registering CRI-O factory Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 
14:52:48.018950 4806 factory.go:221] Registration of the crio container factory successfully Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.018983 4806 factory.go:103] Registering Raw factory Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.019011 4806 manager.go:1196] Started watching for new ooms in manager Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.019893 4806 manager.go:319] Starting recovery of all containers Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.023026 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.023080 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.023095 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.023107 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.023115 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.023123 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.023132 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.023171 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.023181 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.023190 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.023212 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.023257 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.023266 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.023277 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.023362 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.023390 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.023399 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.023408 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.023449 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.023476 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.023485 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.023493 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.023503 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.023531 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.023562 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.023570 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.023598 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.023649 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.023659 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.023667 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.023682 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.023708 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.023735 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.023744 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.023757 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.023766 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.023774 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.023783 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.023791 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.023832 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.023843 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.023851 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.023862 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.023869 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.023878 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.023887 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.023933 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.024077 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.024189 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.024201 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.024394 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.024462 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.026165 4806 server.go:460] "Adding debug handlers to kubelet server" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.039799 4806 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" 
deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.039947 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040014 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040047 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040066 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040085 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040103 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040120 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040132 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040149 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040161 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040173 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040188 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040199 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040216 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040229 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040244 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040267 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040285 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040301 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040335 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040350 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040369 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040388 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040403 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040430 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040445 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040467 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040482 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040497 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040515 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040530 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040554 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040568 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040581 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040612 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040628 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040648 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040662 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040676 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040702 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040714 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040733 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040747 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040759 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040776 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040788 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040806 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040818 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040864 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040891 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040909 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040925 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040964 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.040992 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041017 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041035 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041053 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041070 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041084 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041103 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041129 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041160 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041185 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041201 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041216 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041259 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041273 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041285 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041336 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041369 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041398 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041413 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041434 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041457 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041476 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041500 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041550 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041569 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041589 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041605 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041625 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041642 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041659 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041678 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041692 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041709 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041724 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041737 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041754 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041768 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041783 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041795 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041812 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041828 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041841 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041856 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041871 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041884 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041901 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041915 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041927 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041945 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041965 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041985 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.041996 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042008 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042022 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042034 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042055 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042068 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042079 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042096 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042108 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042124 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042144 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042156 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042175 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042186 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042198 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042212 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042229 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042258 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042274 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042306 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042350 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042405 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042423 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042438 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042451 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042466 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042477 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042497 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" 
volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042510 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042530 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042549 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042563 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042599 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042622 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042640 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042658 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042673 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042694 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042708 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042720 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042738 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042750 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042768 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042780 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042793 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042810 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042822 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042835 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042849 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042862 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042882 4806 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042894 4806 reconstruct.go:97] "Volume reconstruction finished" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.042904 4806 reconciler.go:26] "Reconciler: start to sync state" Nov 25 14:52:48 crc kubenswrapper[4806]: E1125 14:52:48.044396 4806 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.234:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187b479133d290dd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-25 14:52:48.010948829 +0000 UTC m=+0.663091240,LastTimestamp:2025-11-25 14:52:48.010948829 +0000 UTC m=+0.663091240,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.054744 4806 manager.go:324] Recovery completed Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.064615 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.066245 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.066379 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.066461 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.067644 4806 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.067669 4806 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.067691 4806 state_mem.go:36] "Initialized new in-memory state store" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.085586 4806 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.087928 4806 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.087973 4806 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.088014 4806 kubelet.go:2335] "Starting kubelet main sync loop" Nov 25 14:52:48 crc kubenswrapper[4806]: E1125 14:52:48.088058 4806 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 25 14:52:48 crc kubenswrapper[4806]: W1125 14:52:48.088618 4806 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused Nov 25 14:52:48 crc kubenswrapper[4806]: E1125 14:52:48.088695 4806 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.102881 4806 policy_none.go:49] "None policy: Start" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.104158 4806 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.104196 4806 state_mem.go:35] "Initializing new in-memory state store" Nov 25 14:52:48 crc kubenswrapper[4806]: E1125 14:52:48.114151 4806 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.182204 4806 manager.go:334] "Starting Device Plugin manager" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.182277 4806 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.182292 4806 server.go:79] "Starting device plugin registration server" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.182869 4806 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.182890 4806 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.183169 4806 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.183262 4806 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.183303 4806 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.189996 4806 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.190127 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.191553 4806 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.191722 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.191831 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.192113 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.192305 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.192365 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.193395 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.193442 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.193456 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:52:48 crc kubenswrapper[4806]: E1125 14:52:48.193768 4806 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.194607 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.194633 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.194645 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.195788 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.195994 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.196433 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.198105 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.198208 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.198293 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.198158 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.198491 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.198507 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.198771 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.198952 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.199000 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.200123 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.200136 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.200147 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.200177 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.200159 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.200282 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.200329 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.200849 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.200920 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.201095 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.201129 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.201141 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.201353 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.201382 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.202305 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.202358 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.202371 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.202483 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.202533 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.202549 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:52:48 crc kubenswrapper[4806]: E1125 14:52:48.219507 4806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="400ms" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.245661 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.245707 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.245738 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" 
(UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.245760 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.245821 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.245854 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.245877 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.245898 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.245917 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.245945 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.246007 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.246070 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.246096 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.246118 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.246142 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.283871 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.284840 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.284892 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.284904 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.284953 4806 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 14:52:48 crc kubenswrapper[4806]: E1125 14:52:48.285625 4806 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.234:6443: connect: connection refused" node="crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.347229 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.347288 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.347322 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.347342 4806 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.347363 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.347378 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.347393 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.347410 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.347424 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.347429 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.347439 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.347486 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.347501 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 14:52:48 crc 
kubenswrapper[4806]: I1125 14:52:48.347479 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.347500 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.347528 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.347532 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.347521 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.347544 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.347687 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.347769 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.347801 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.347831 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.347847 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.347863 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.347871 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.347712 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.347903 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.347911 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.348054 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.486119 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.487901 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.487953 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.487964 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.487995 4806 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 14:52:48 crc kubenswrapper[4806]: E1125 14:52:48.488566 4806 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.234:6443: 
connect: connection refused" node="crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.536281 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.544014 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.556090 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.572685 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.578937 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 14:52:48 crc kubenswrapper[4806]: W1125 14:52:48.616707 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-dd350605d50567797a3fb028a4291b12de423d9d887468442bf0eaf8339666d6 WatchSource:0}: Error finding container dd350605d50567797a3fb028a4291b12de423d9d887468442bf0eaf8339666d6: Status 404 returned error can't find the container with id dd350605d50567797a3fb028a4291b12de423d9d887468442bf0eaf8339666d6 Nov 25 14:52:48 crc kubenswrapper[4806]: W1125 14:52:48.618123 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-d070ff8a0e078f9372ecb12bac3ec19cc5d72391f9bc0097b42da7a739859c2a WatchSource:0}: Error finding container d070ff8a0e078f9372ecb12bac3ec19cc5d72391f9bc0097b42da7a739859c2a: Status 404 returned error can't find the container with id d070ff8a0e078f9372ecb12bac3ec19cc5d72391f9bc0097b42da7a739859c2a Nov 25 14:52:48 crc kubenswrapper[4806]: E1125 14:52:48.620898 4806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="800ms" Nov 25 14:52:48 crc kubenswrapper[4806]: W1125 14:52:48.624655 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-149417e15ced5dc867f47ba188ee882cb6be1352905312ca9f625b8fd64719d0 WatchSource:0}: Error finding container 149417e15ced5dc867f47ba188ee882cb6be1352905312ca9f625b8fd64719d0: Status 404 returned error can't find the container with id 149417e15ced5dc867f47ba188ee882cb6be1352905312ca9f625b8fd64719d0 Nov 25 14:52:48 crc kubenswrapper[4806]: W1125 14:52:48.627378 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-bf41e82b5cafc9494315c55a394a8150567e487daf96f4dc8e708ba1c84bc10b WatchSource:0}: Error finding container bf41e82b5cafc9494315c55a394a8150567e487daf96f4dc8e708ba1c84bc10b: Status 404 returned error can't find the container with id bf41e82b5cafc9494315c55a394a8150567e487daf96f4dc8e708ba1c84bc10b Nov 25 14:52:48 crc kubenswrapper[4806]: W1125 14:52:48.628842 4806 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-75eb2a00c306957ae0ac3c744f979111679a74f756222a6d5fc78cf2e5e46348 WatchSource:0}: Error finding container 75eb2a00c306957ae0ac3c744f979111679a74f756222a6d5fc78cf2e5e46348: Status 404 returned error can't find the container with id 75eb2a00c306957ae0ac3c744f979111679a74f756222a6d5fc78cf2e5e46348 Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.888874 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.889826 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.889850 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.889858 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:52:48 crc kubenswrapper[4806]: I1125 14:52:48.889878 4806 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 14:52:48 crc kubenswrapper[4806]: E1125 14:52:48.890236 4806 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.234:6443: connect: connection refused" node="crc" Nov 25 14:52:48 crc kubenswrapper[4806]: W1125 14:52:48.913963 4806 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused Nov 25 14:52:48 crc kubenswrapper[4806]: E1125 14:52:48.914036 4806 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" Nov 25 14:52:48 crc kubenswrapper[4806]: W1125 14:52:48.927248 4806 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused Nov 25 14:52:48 crc kubenswrapper[4806]: E1125 14:52:48.927344 4806 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" Nov 25 14:52:48 crc kubenswrapper[4806]: W1125 14:52:48.949243 4806 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused Nov 25 14:52:48 crc kubenswrapper[4806]: E1125 14:52:48.949298 4806 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" Nov 25 14:52:49 crc kubenswrapper[4806]: I1125 14:52:49.012571 4806 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused Nov 25 14:52:49 crc kubenswrapper[4806]: I1125 14:52:49.092583 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"dd350605d50567797a3fb028a4291b12de423d9d887468442bf0eaf8339666d6"} Nov 25 14:52:49 crc kubenswrapper[4806]: I1125 14:52:49.093543 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"75eb2a00c306957ae0ac3c744f979111679a74f756222a6d5fc78cf2e5e46348"} Nov 25 14:52:49 crc kubenswrapper[4806]: I1125 14:52:49.094806 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"bf41e82b5cafc9494315c55a394a8150567e487daf96f4dc8e708ba1c84bc10b"} Nov 25 14:52:49 crc kubenswrapper[4806]: I1125 14:52:49.097767 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"149417e15ced5dc867f47ba188ee882cb6be1352905312ca9f625b8fd64719d0"} Nov 25 14:52:49 crc kubenswrapper[4806]: I1125 14:52:49.099114 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d070ff8a0e078f9372ecb12bac3ec19cc5d72391f9bc0097b42da7a739859c2a"} Nov 25 14:52:49 crc kubenswrapper[4806]: E1125 14:52:49.422279 4806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="1.6s" Nov 25 14:52:49 crc kubenswrapper[4806]: W1125 14:52:49.575681 4806 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused Nov 25 14:52:49 crc kubenswrapper[4806]: E1125 14:52:49.575771 4806 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" Nov 25 14:52:49 crc kubenswrapper[4806]: I1125 14:52:49.691193 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:52:49 crc kubenswrapper[4806]: I1125 14:52:49.693329 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:52:49 crc kubenswrapper[4806]: 
I1125 14:52:49.693378 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:52:49 crc kubenswrapper[4806]: I1125 14:52:49.693391 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:52:49 crc kubenswrapper[4806]: I1125 14:52:49.693419 4806 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 14:52:49 crc kubenswrapper[4806]: E1125 14:52:49.693922 4806 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.234:6443: connect: connection refused" node="crc" Nov 25 14:52:50 crc kubenswrapper[4806]: I1125 14:52:50.012429 4806 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused Nov 25 14:52:50 crc kubenswrapper[4806]: I1125 14:52:50.104088 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ae860c1b5905b9a0634efdd777741a3837e3e3cae53fd559afcc23ceeeabbfe7"} Nov 25 14:52:50 crc kubenswrapper[4806]: I1125 14:52:50.104143 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cb9640bb56460bb7d1ed40effc6fa3cada4ee5369407b11a14c4a364b5b5c44b"} Nov 25 14:52:50 crc kubenswrapper[4806]: I1125 14:52:50.105456 4806 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202" exitCode=0 Nov 25 14:52:50 crc kubenswrapper[4806]: I1125 14:52:50.105557 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:52:50 crc kubenswrapper[4806]: I1125 14:52:50.105575 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202"} Nov 25 14:52:50 crc kubenswrapper[4806]: I1125 14:52:50.106294 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:52:50 crc kubenswrapper[4806]: I1125 14:52:50.106342 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:52:50 crc kubenswrapper[4806]: I1125 14:52:50.106361 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:52:50 crc kubenswrapper[4806]: I1125 14:52:50.107183 4806 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="42786026a931ea7a973df2aefff759ce04499a69c7d4c7436ada1bf6a5c714b9" exitCode=0 Nov 25 14:52:50 crc kubenswrapper[4806]: I1125 14:52:50.107373 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:52:50 crc kubenswrapper[4806]: I1125 14:52:50.107410 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"42786026a931ea7a973df2aefff759ce04499a69c7d4c7436ada1bf6a5c714b9"} Nov 25 14:52:50 crc kubenswrapper[4806]: I1125 14:52:50.107893 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:52:50 crc kubenswrapper[4806]: I1125 14:52:50.108473 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:52:50 crc kubenswrapper[4806]: I1125 14:52:50.108502 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:52:50 crc kubenswrapper[4806]: I1125 14:52:50.108511 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:52:50 crc kubenswrapper[4806]: I1125 14:52:50.108961 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:52:50 crc kubenswrapper[4806]: I1125 14:52:50.108984 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:52:50 crc kubenswrapper[4806]: I1125 14:52:50.108993 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:52:50 crc kubenswrapper[4806]: I1125 14:52:50.109814 4806 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="db9fb4fcadb881a8d1f35ac8df4c8b7654c07ea0c5ab061ef99c1396b9c1e76b" exitCode=0 Nov 25 14:52:50 crc kubenswrapper[4806]: I1125 14:52:50.109853 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"db9fb4fcadb881a8d1f35ac8df4c8b7654c07ea0c5ab061ef99c1396b9c1e76b"} Nov 25 14:52:50 crc kubenswrapper[4806]: I1125 14:52:50.109895 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:52:50 crc kubenswrapper[4806]: I1125 14:52:50.110851 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:52:50 crc kubenswrapper[4806]: I1125 14:52:50.110878 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:52:50 crc kubenswrapper[4806]: I1125 14:52:50.110888 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:52:50 crc kubenswrapper[4806]: I1125 14:52:50.113176 4806 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="600d8f6e0a57ecf028d67a5d43177b039da18658131bbe103857578e826661a5" exitCode=0 Nov 25 14:52:50 crc kubenswrapper[4806]: I1125 14:52:50.113217 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"600d8f6e0a57ecf028d67a5d43177b039da18658131bbe103857578e826661a5"} Nov 25 14:52:50 crc kubenswrapper[4806]: I1125 14:52:50.113299 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:52:50 crc kubenswrapper[4806]: I1125 14:52:50.114117 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:52:50 crc 
kubenswrapper[4806]: I1125 14:52:50.114134 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:52:50 crc kubenswrapper[4806]: I1125 14:52:50.114143 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:52:50 crc kubenswrapper[4806]: W1125 14:52:50.728038 4806 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused Nov 25 14:52:50 crc kubenswrapper[4806]: E1125 14:52:50.728250 4806 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" Nov 25 14:52:51 crc kubenswrapper[4806]: I1125 14:52:51.012923 4806 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused Nov 25 14:52:51 crc kubenswrapper[4806]: E1125 14:52:51.024452 4806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="3.2s" Nov 25 14:52:51 crc kubenswrapper[4806]: I1125 14:52:51.119409 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"560a6654dd1da05abbf52bb38ec0fd2ea0108fabdc14d4775dc3f8334cc9a1a9"} Nov 25 14:52:51 crc kubenswrapper[4806]: I1125 14:52:51.119468 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"47127ed86ce2e9c91b3876f54c95e0df6039f549eaff06c28a9922f0b660aa53"} Nov 25 14:52:51 crc kubenswrapper[4806]: I1125 14:52:51.119594 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:52:51 crc kubenswrapper[4806]: I1125 14:52:51.120453 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:52:51 crc kubenswrapper[4806]: I1125 14:52:51.120498 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:52:51 crc kubenswrapper[4806]: I1125 14:52:51.120508 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:52:51 crc kubenswrapper[4806]: I1125 14:52:51.122953 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736"} Nov 25 14:52:51 crc kubenswrapper[4806]: I1125 14:52:51.122996 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854"} Nov 25 14:52:51 crc kubenswrapper[4806]: I1125 14:52:51.123009 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513"} Nov 25 14:52:51 crc kubenswrapper[4806]: I1125 14:52:51.123019 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038"} Nov 25 14:52:51 crc kubenswrapper[4806]: I1125 14:52:51.125015 4806 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="cdd68927e45ba82bee23ce96be5f0c0093d313710e406148ebb4755ce9ec1bcc" exitCode=0 Nov 25 14:52:51 crc kubenswrapper[4806]: I1125 14:52:51.125113 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"cdd68927e45ba82bee23ce96be5f0c0093d313710e406148ebb4755ce9ec1bcc"} Nov 25 14:52:51 crc kubenswrapper[4806]: I1125 14:52:51.125195 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:52:51 crc kubenswrapper[4806]: I1125 14:52:51.126077 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:52:51 crc kubenswrapper[4806]: I1125 14:52:51.126102 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:52:51 crc kubenswrapper[4806]: I1125 14:52:51.126111 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:52:51 crc kubenswrapper[4806]: I1125 14:52:51.127759 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c67538dcc66b71639bef32e5a359d899aeffb45958b74fce7d7c09f0874f59cc"} Nov 25 14:52:51 crc kubenswrapper[4806]: I1125 14:52:51.127810 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2a13f5b656df38c8be5558398c2d7b88f04a8c892edbd2cb06516aa94b3d4c71"} Nov 25 14:52:51 crc kubenswrapper[4806]: I1125 14:52:51.127820 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"41d39b5cec8b13a29be4b5cc55488b94bcb5a8882baebe3dd1b4783116e0d745"} Nov 25 14:52:51 crc kubenswrapper[4806]: I1125 14:52:51.127901 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:52:51 crc kubenswrapper[4806]: I1125 14:52:51.128765 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:52:51 crc kubenswrapper[4806]: I1125 14:52:51.128789 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:52:51 crc kubenswrapper[4806]: I1125 
14:52:51.128798 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:52:51 crc kubenswrapper[4806]: I1125 14:52:51.130577 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"ad1444bd5d571c97e876be8a7806aa59a9e6777f78f11089042f1961ca237be2"} Nov 25 14:52:51 crc kubenswrapper[4806]: I1125 14:52:51.130675 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:52:51 crc kubenswrapper[4806]: I1125 14:52:51.131568 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:52:51 crc kubenswrapper[4806]: I1125 14:52:51.131606 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:52:51 crc kubenswrapper[4806]: I1125 14:52:51.131615 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:52:51 crc kubenswrapper[4806]: W1125 14:52:51.167142 4806 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused Nov 25 14:52:51 crc kubenswrapper[4806]: E1125 14:52:51.167234 4806 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError" Nov 25 14:52:51 crc kubenswrapper[4806]: I1125 14:52:51.200743 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 14:52:51 crc kubenswrapper[4806]: I1125 14:52:51.294040 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:52:51 crc kubenswrapper[4806]: I1125 14:52:51.295453 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:52:51 crc kubenswrapper[4806]: I1125 14:52:51.295495 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:52:51 crc kubenswrapper[4806]: I1125 14:52:51.295504 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:52:51 crc kubenswrapper[4806]: I1125 14:52:51.295530 4806 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 14:52:51 crc kubenswrapper[4806]: E1125 14:52:51.296087 4806 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.234:6443: connect: connection refused" node="crc" Nov 25 14:52:51 crc kubenswrapper[4806]: I1125 14:52:51.363220 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 14:52:51 crc kubenswrapper[4806]: I1125 14:52:51.375640 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
Nov 25 14:52:51 crc kubenswrapper[4806]: W1125 14:52:51.728796 4806 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.234:6443: connect: connection refused
Nov 25 14:52:51 crc kubenswrapper[4806]: E1125 14:52:51.728871 4806 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.234:6443: connect: connection refused" logger="UnhandledError"
Nov 25 14:52:52 crc kubenswrapper[4806]: I1125 14:52:52.133707 4806 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="1705fefbe8122e10507afcf1389c91032695579d9ede2aa4ae50f00807cb1eca" exitCode=0
Nov 25 14:52:52 crc kubenswrapper[4806]: I1125 14:52:52.133760 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"1705fefbe8122e10507afcf1389c91032695579d9ede2aa4ae50f00807cb1eca"}
Nov 25 14:52:52 crc kubenswrapper[4806]: I1125 14:52:52.133873 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 14:52:52 crc kubenswrapper[4806]: I1125 14:52:52.134709 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:52:52 crc kubenswrapper[4806]: I1125 14:52:52.134735 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:52:52 crc kubenswrapper[4806]: I1125 14:52:52.134745 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:52:52 crc kubenswrapper[4806]: I1125 14:52:52.138783 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 14:52:52 crc kubenswrapper[4806]: I1125 14:52:52.138823 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226"}
Nov 25 14:52:52 crc kubenswrapper[4806]: I1125 14:52:52.139250 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 14:52:52 crc kubenswrapper[4806]: I1125 14:52:52.139705 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 14:52:52 crc kubenswrapper[4806]: I1125 14:52:52.139876 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 14:52:52 crc kubenswrapper[4806]: I1125 14:52:52.139711 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 25 14:52:52 crc kubenswrapper[4806]: I1125 14:52:52.141800 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:52:52 crc kubenswrapper[4806]: I1125 14:52:52.141824 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:52:52 crc kubenswrapper[4806]: I1125 14:52:52.141853 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:52:52 crc kubenswrapper[4806]: I1125 14:52:52.141870 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:52:52 crc kubenswrapper[4806]: I1125 14:52:52.141828 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:52:52 crc kubenswrapper[4806]: I1125 14:52:52.141973 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:52:52 crc kubenswrapper[4806]: I1125 14:52:52.142001 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:52:52 crc kubenswrapper[4806]: I1125 14:52:52.142031 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:52:52 crc kubenswrapper[4806]: I1125 14:52:52.142048 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:52:52 crc kubenswrapper[4806]: I1125 14:52:52.142732 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:52:52 crc kubenswrapper[4806]: I1125 14:52:52.142767 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:52:52 crc kubenswrapper[4806]: I1125 14:52:52.142803 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:52:52 crc kubenswrapper[4806]: I1125 14:52:52.579516 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 25 14:52:53 crc kubenswrapper[4806]: I1125 14:52:53.144403 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"04135545cf0fa5c64bef656a4945deaf5bed6405b64a69bc658489da4b47cf52"}
Nov 25 14:52:53 crc kubenswrapper[4806]: I1125 14:52:53.144470 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a9e2c951b933b2a390010f2acd690c1784e090f4b81b4d8beb0b88d243123776"}
Nov 25 14:52:53 crc kubenswrapper[4806]: I1125 14:52:53.144488 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4762b87c51132b134e282d5e1d3a6667995ced68dda62718fec7c82b609d5384"}
Nov 25 14:52:53 crc kubenswrapper[4806]: I1125 14:52:53.144497 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 14:52:53 crc kubenswrapper[4806]: I1125 14:52:53.144540 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 14:52:53 crc kubenswrapper[4806]: I1125 14:52:53.144609 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 14:52:53 crc kubenswrapper[4806]: I1125 14:52:53.144645 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 14:52:53 crc kubenswrapper[4806]: I1125 14:52:53.144503 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5ac553f30f4f8dd7c60374bbaa1c15e8a86e9a71697d1f177afe447834fca62b"}
Nov 25 14:52:53 crc kubenswrapper[4806]: I1125 14:52:53.145383 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ade3ba711e7cfc39c2ee955a1f7e3c961f7e3000473ad5c77b62937085668291"}
Nov 25 14:52:53 crc kubenswrapper[4806]: I1125 14:52:53.145448 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 25 14:52:53 crc kubenswrapper[4806]: I1125 14:52:53.145929 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:52:53 crc kubenswrapper[4806]: I1125 14:52:53.145952 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:52:53 crc kubenswrapper[4806]: I1125 14:52:53.145979 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:52:53 crc kubenswrapper[4806]: I1125 14:52:53.146001 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:52:53 crc kubenswrapper[4806]: I1125 14:52:53.145980 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:52:53 crc kubenswrapper[4806]: I1125 14:52:53.146039 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:52:53 crc kubenswrapper[4806]: I1125 14:52:53.146000 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:52:53 crc kubenswrapper[4806]: I1125 14:52:53.146071 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:52:53 crc kubenswrapper[4806]: I1125 14:52:53.146079 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:52:53 crc kubenswrapper[4806]: I1125 14:52:53.146093 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:52:53 crc kubenswrapper[4806]: I1125 14:52:53.146124 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:52:53 crc kubenswrapper[4806]: I1125 14:52:53.146135 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:52:53 crc kubenswrapper[4806]: I1125 14:52:53.248736 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Nov 25 14:52:54 crc kubenswrapper[4806]: I1125 14:52:54.114506 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 25 14:52:54 crc kubenswrapper[4806]: I1125 14:52:54.145802 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 14:52:54 crc kubenswrapper[4806]: I1125 14:52:54.145894 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 14:52:54 crc kubenswrapper[4806]: I1125 14:52:54.145954 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 14:52:54 crc kubenswrapper[4806]: I1125 14:52:54.147445 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:52:54 crc kubenswrapper[4806]: I1125 14:52:54.147475 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:52:54 crc kubenswrapper[4806]: I1125 14:52:54.147487 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:52:54 crc kubenswrapper[4806]: I1125 14:52:54.147641 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:52:54 crc kubenswrapper[4806]: I1125 14:52:54.147667 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:52:54 crc kubenswrapper[4806]: I1125 14:52:54.147678 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:52:54 crc kubenswrapper[4806]: I1125 14:52:54.148370 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:52:54 crc kubenswrapper[4806]: I1125 14:52:54.148415 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:52:54 crc kubenswrapper[4806]: I1125 14:52:54.148435 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:52:54 crc kubenswrapper[4806]: I1125 14:52:54.496872 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 14:52:54 crc kubenswrapper[4806]: I1125 14:52:54.498081 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:52:54 crc kubenswrapper[4806]: I1125 14:52:54.498122 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:52:54 crc kubenswrapper[4806]: I1125 14:52:54.498135 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:52:54 crc kubenswrapper[4806]: I1125 14:52:54.498158 4806 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 25 14:52:55 crc kubenswrapper[4806]: I1125 14:52:55.125704 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Nov 25 14:52:55 crc kubenswrapper[4806]: I1125 14:52:55.147793 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 14:52:55 crc kubenswrapper[4806]: I1125 14:52:55.147828 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 14:52:55 crc kubenswrapper[4806]: I1125 14:52:55.151091 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:52:55 crc kubenswrapper[4806]: I1125 14:52:55.151128 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:52:55 crc kubenswrapper[4806]: I1125 14:52:55.151139 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:52:55 crc kubenswrapper[4806]: I1125 14:52:55.151975 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:52:55 crc kubenswrapper[4806]: I1125 14:52:55.152007 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:52:55 crc kubenswrapper[4806]: I1125 14:52:55.152025 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:52:56 crc kubenswrapper[4806]: I1125 14:52:56.150443 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 14:52:56 crc kubenswrapper[4806]: I1125 14:52:56.151206 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:52:56 crc kubenswrapper[4806]: I1125 14:52:56.151250 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:52:56 crc kubenswrapper[4806]: I1125 14:52:56.151259 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:52:56 crc kubenswrapper[4806]: I1125 14:52:56.238510 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 25 14:52:56 crc kubenswrapper[4806]: I1125 14:52:56.238687 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 14:52:56 crc kubenswrapper[4806]: I1125 14:52:56.239699 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:52:56 crc kubenswrapper[4806]: I1125 14:52:56.239762 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:52:56 crc kubenswrapper[4806]: I1125 14:52:56.239772 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:52:58 crc kubenswrapper[4806]: E1125 14:52:58.194204 4806 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Nov 25 14:52:59 crc kubenswrapper[4806]: I1125 14:52:59.910912 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 25 14:52:59 crc kubenswrapper[4806]: I1125 14:52:59.911053 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 14:52:59 crc kubenswrapper[4806]: I1125 14:52:59.912454 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:52:59 crc kubenswrapper[4806]: I1125 14:52:59.912496 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:52:59 crc kubenswrapper[4806]: I1125 14:52:59.912533 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:53:00 crc kubenswrapper[4806]: I1125 14:53:00.114199 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 25 14:53:00 crc kubenswrapper[4806]: I1125 14:53:00.159968 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 14:53:00 crc kubenswrapper[4806]: I1125 14:53:00.160773 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:53:00 crc kubenswrapper[4806]: I1125 14:53:00.160800 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:53:00 crc kubenswrapper[4806]: I1125 14:53:00.160808 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:53:01 crc kubenswrapper[4806]: I1125 14:53:01.772987 4806 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Nov 25 14:53:01 crc kubenswrapper[4806]: I1125 14:53:01.773063 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Nov 25 14:53:01 crc kubenswrapper[4806]: I1125 14:53:01.776718 4806 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Nov 25 14:53:01 crc kubenswrapper[4806]: I1125 14:53:01.776787 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Nov 25 14:53:03 crc kubenswrapper[4806]: I1125 14:53:03.114359 4806 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Nov 25 14:53:03 crc kubenswrapper[4806]: I1125 14:53:03.114705 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 25 14:53:03 crc kubenswrapper[4806]: I1125 14:53:03.297680 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Nov 25 14:53:03 crc kubenswrapper[4806]: I1125 14:53:03.297821 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 14:53:03 crc kubenswrapper[4806]: I1125 14:53:03.298904 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:53:03 crc kubenswrapper[4806]: I1125 14:53:03.298978 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:53:03 crc kubenswrapper[4806]: I1125 14:53:03.298989 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:53:03 crc kubenswrapper[4806]: I1125 14:53:03.309156 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Nov 25 14:53:04 crc kubenswrapper[4806]: I1125 14:53:04.169009 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 14:53:04 crc kubenswrapper[4806]: I1125 14:53:04.169962 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:53:04 crc kubenswrapper[4806]: I1125 14:53:04.170007 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:53:04 crc kubenswrapper[4806]: I1125 14:53:04.170020 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:53:06 crc kubenswrapper[4806]: I1125 14:53:06.244822 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 25 14:53:06 crc kubenswrapper[4806]: I1125 14:53:06.244963 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 14:53:06 crc kubenswrapper[4806]: I1125 14:53:06.247368 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:53:06 crc kubenswrapper[4806]: I1125 14:53:06.247418 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:53:06 crc kubenswrapper[4806]: I1125 14:53:06.247442 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:53:06 crc kubenswrapper[4806]: I1125 14:53:06.253461 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 25 14:53:06 crc kubenswrapper[4806]: E1125 14:53:06.757358 4806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Nov 25 14:53:06 crc kubenswrapper[4806]: I1125 14:53:06.760114 4806 trace.go:236] Trace[607887687]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Nov-2025 14:52:51.998) (total time: 14761ms):
Nov 25 14:53:06 crc kubenswrapper[4806]: Trace[607887687]: ---"Objects listed" error: 14761ms (14:53:06.760)
Nov 25 14:53:06 crc kubenswrapper[4806]: Trace[607887687]: [14.761228807s] [14.761228807s] END
Nov 25 14:53:06 crc kubenswrapper[4806]: I1125 14:53:06.760151 4806 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Nov 25 14:53:06 crc kubenswrapper[4806]: I1125 14:53:06.760259 4806 trace.go:236] Trace[1533175593]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Nov-2025 14:52:56.087) (total time: 10672ms):
Nov 25 14:53:06 crc kubenswrapper[4806]: Trace[1533175593]: ---"Objects listed" error: 10672ms (14:53:06.760)
Nov 25 14:53:06 crc kubenswrapper[4806]: Trace[1533175593]: [10.672568815s] [10.672568815s] END
Nov 25 14:53:06 crc kubenswrapper[4806]: I1125 14:53:06.760284 4806 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Nov 25 14:53:06 crc kubenswrapper[4806]: E1125 14:53:06.761869 4806 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Nov 25 14:53:06 crc kubenswrapper[4806]: I1125 14:53:06.762020 4806 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Nov 25 14:53:06 crc kubenswrapper[4806]: I1125 14:53:06.762155 4806 trace.go:236] Trace[850310340]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Nov-2025 14:52:56.148) (total time: 10613ms):
Nov 25 14:53:06 crc kubenswrapper[4806]: Trace[850310340]: ---"Objects listed" error: 10613ms (14:53:06.762)
Nov 25 14:53:06 crc kubenswrapper[4806]: Trace[850310340]: [10.613317035s] [10.613317035s] END
Nov 25 14:53:06 crc kubenswrapper[4806]: I1125 14:53:06.762173 4806 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Nov 25 14:53:06 crc kubenswrapper[4806]: I1125 14:53:06.762453 4806 trace.go:236] Trace[773706683]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Nov-2025 14:52:55.892) (total time: 10869ms):
Nov 25 14:53:06 crc kubenswrapper[4806]: Trace[773706683]: ---"Objects listed" error: 10869ms (14:53:06.762)
Nov 25 14:53:06 crc kubenswrapper[4806]: Trace[773706683]: [10.869929333s] [10.869929333s] END
Nov 25 14:53:06 crc kubenswrapper[4806]: I1125 14:53:06.762471 4806 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Nov 25 14:53:06 crc kubenswrapper[4806]: I1125 14:53:06.784352 4806 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:51878->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Nov 25 14:53:06 crc kubenswrapper[4806]: I1125 14:53:06.784406 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:51878->192.168.126.11:17697: read: connection reset by peer"
Nov 25 14:53:06 crc kubenswrapper[4806]: I1125 14:53:06.784352 4806 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:51892->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Nov 25 14:53:06 crc kubenswrapper[4806]: I1125 14:53:06.784535 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:51892->192.168.126.11:17697: read: connection reset by peer"
Nov 25 14:53:06 crc kubenswrapper[4806]: I1125 14:53:06.784745 4806 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Nov 25 14:53:06 crc kubenswrapper[4806]: I1125 14:53:06.784814 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Nov 25 14:53:06 crc kubenswrapper[4806]: I1125 14:53:06.977125 4806 apiserver.go:52] "Watching apiserver"
Nov 25 14:53:06 crc kubenswrapper[4806]: I1125 14:53:06.980033 4806 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Nov 25 14:53:06 crc kubenswrapper[4806]: I1125 14:53:06.980330 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"]
Nov 25 14:53:06 crc kubenswrapper[4806]: I1125 14:53:06.980669 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 25 14:53:06 crc kubenswrapper[4806]: I1125 14:53:06.980743 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 14:53:06 crc kubenswrapper[4806]: I1125 14:53:06.980776 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 14:53:06 crc kubenswrapper[4806]: E1125 14:53:06.980897 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 14:53:06 crc kubenswrapper[4806]: E1125 14:53:06.980937 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 14:53:06 crc kubenswrapper[4806]: I1125 14:53:06.980964 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 25 14:53:06 crc kubenswrapper[4806]: I1125 14:53:06.981034 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 25 14:53:06 crc kubenswrapper[4806]: I1125 14:53:06.981895 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 14:53:06 crc kubenswrapper[4806]: E1125 14:53:06.981998 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 14:53:06 crc kubenswrapper[4806]: I1125 14:53:06.982445 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Nov 25 14:53:06 crc kubenswrapper[4806]: I1125 14:53:06.982752 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Nov 25 14:53:06 crc kubenswrapper[4806]: I1125 14:53:06.984223 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Nov 25 14:53:06 crc kubenswrapper[4806]: I1125 14:53:06.984642 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Nov 25 14:53:06 crc kubenswrapper[4806]: I1125 14:53:06.984739 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Nov 25 14:53:06 crc kubenswrapper[4806]: I1125 14:53:06.985777 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Nov 25 14:53:06 crc kubenswrapper[4806]: I1125 14:53:06.985897 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Nov 25 14:53:06 crc kubenswrapper[4806]: I1125 14:53:06.985907 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Nov 25 14:53:06 crc kubenswrapper[4806]: I1125 14:53:06.986498 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.005677 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status:
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.014269 4806 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.017943 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
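
The kube-apiserver-check-endpoints probe failures at 14:53:06 above come in two flavors: "read: connection reset by peer" (a listener accepted the connection and then died mid-exchange) and "connect: connection refused" (nothing bound on port 17697 at all). A readiness or liveness probe of this kind is just an HTTPS GET against /healthz; the following is a minimal sketch of that request, not the kubelet's actual prober code. The address and port are taken from the log; the 5-second timeout and the skipped certificate verification are assumptions made only to keep the sketch self-contained.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // assumed; the real probe timeout comes from the pod spec
            Transport: &http.Transport{
                // Assumption for the sketch only; the kubelet does not blindly skip verification.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.126.11:17697/healthz")
        if err != nil {
            // "connect: connection refused": no listener on the port yet.
            // "read: connection reset by peer": the listener closed mid-request.
            // These are exactly the two failure modes recorded above.
            fmt.Println("probe failed:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("probe status:", resp.Status) // 200 once the container is healthy
    }
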
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.028123 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.037515 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
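
Every "Failed to update status for pod" entry in this window dies the same way: the kubelet's PATCH to the pod's status subresource reaches the API server, which must first consult the validating webhook pod.network-node-identity.openshift.io at https://127.0.0.1:9743/pod, and that endpoint refuses connections because, as the later entries for network-node-identity-vrzqb suggest, the webhook's own pod is still being recreated. A minimal client-go sketch of the same call follows; the kubeconfig path is an assumption, the namespace and pod name are taken from the log, and the patch body is trimmed to a single condition for brevity.

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig location; any admin kubeconfig for the cluster would do.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        patch := []byte(`{"status":{"conditions":[{"type":"Ready","status":"False","reason":"ContainersNotReady"}]}}`)
        _, err = client.CoreV1().Pods("openshift-network-diagnostics").Patch(
            context.Background(), "network-check-source-55646444c4-trplf",
            types.StrategicMergePatchType, patch, metav1.PatchOptions{}, "status")
        if err != nil {
            // While the webhook is down this surfaces the same Internal error seen above:
            // failed calling webhook ... dial tcp 127.0.0.1:9743: connect: connection refused
            fmt.Println("status patch rejected:", err)
        }
    }
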
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.047013 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.055903 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
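
The root cause of the "network is not ready" failures above is spelled out in the error text itself: the kubelet found no CNI configuration file in /etc/kubernetes/cni/net.d/, so it reports NetworkReady=false and skips syncing any pod that needs pod networking until OVN-Kubernetes writes its config there. The check amounts to scanning that directory for a usable config; below is a simplified stand-in, not the actual CRI-O/kubelet logic, and the extension filter is an assumption based on common CNI conventions.

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        const dir = "/etc/kubernetes/cni/net.d" // directory named in the log
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Println("cannot read CNI dir:", err)
            return
        }
        var confs []string
        for _, e := range entries {
            // Assumed filter: typical CNI config extensions.
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                confs = append(confs, e.Name())
            }
        }
        if len(confs) == 0 {
            // The empty-directory case is what the kubelet reports as
            // "NetworkPluginNotReady ... no CNI configuration file".
            fmt.Println("network is not ready: no CNI configuration file in", dir)
            return
        }
        fmt.Println("CNI configs present:", confs)
    }
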
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.064066 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.064108 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.064127 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.064146 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.064164 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.064182 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.064200 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 25 14:53:07 crc 
kubenswrapper[4806]: I1125 14:53:07.064220 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.064238 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.064276 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.064293 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.064324 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.064359 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.064407 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.064429 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.064443 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.064461 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") 
" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.064479 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.064506 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.064526 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.064542 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.064557 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.064576 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.064612 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.064627 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.064621 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.064645 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.064644 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.064727 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.064750 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.064769 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.064754 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.064794 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.064816 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.064836 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.064864 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.064889 4806 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.064906 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.064922 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.064916 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.064941 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.064958 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.064978 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.064994 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065004 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065013 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065022 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065013 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065039 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065063 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065087 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065105 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065123 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065127 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065141 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065160 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065178 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065198 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065217 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065221 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065235 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065252 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.064621 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065269 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065287 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065303 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065336 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065339 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065358 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065376 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065379 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065396 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065399 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065414 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065411 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065422 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065435 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065455 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065472 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065489 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065481 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065504 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065522 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065541 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065558 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065563 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065577 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065595 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065611 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065628 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065645 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065662 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065677 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065680 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065694 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065710 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065709 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065726 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065726 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065734 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065743 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065761 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065777 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065792 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065812 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065828 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065842 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065850 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065860 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065858 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065877 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065903 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065914 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065918 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065921 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065939 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065958 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065977 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.065999 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066020 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066043 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066066 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066083 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066103 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066118 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066135 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066152 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066168 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066185 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066202 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066220 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066235 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066251 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066266 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066282 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066297 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066328 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066345 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066360 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066377 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066401 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066425 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066445 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066462 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066483 4806 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066501 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066520 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066541 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066564 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066584 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066606 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066625 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066646 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066667 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " 
Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066756 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066773 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066832 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066858 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066882 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066900 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066916 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066934 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066951 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066968 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066985 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067001 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067018 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067036 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067054 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067070 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067087 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067104 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067121 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067139 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067155 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067173 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067191 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067208 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067223 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067240 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067257 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067274 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067291 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067308 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067345 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067365 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067384 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067404 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067430 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067453 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067471 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067492 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067509 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 
14:53:07.067525 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067547 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067568 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067595 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067611 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067638 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067656 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067685 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067724 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067742 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 14:53:07 crc kubenswrapper[4806]: 
I1125 14:53:07.067759 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067774 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067795 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067815 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067833 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067854 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067876 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067897 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067915 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067932 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 
14:53:07.067947 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067963 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067979 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067999 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.068020 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.068040 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.068058 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.068076 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.068098 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.068119 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.068165 4806 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.068192 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.068214 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.068235 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.068253 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.068283 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.068301 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.069152 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.069182 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.069206 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.069226 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.069244 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.069263 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.069281 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.069361 4806 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.069374 4806 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.069386 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.069396 4806 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.069405 4806 reconciler_common.go:293] "Volume detached for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.069415 4806 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.069425 4806 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.069434 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.069445 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.069459 4806 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.069474 4806 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.069489 4806 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.069500 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.069511 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.069522 4806 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.069531 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.069542 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.069551 4806 reconciler_common.go:293] "Volume 
detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.069561 4806 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.069571 4806 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.069580 4806 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.069590 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.069602 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.069611 4806 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.069621 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.069630 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.069639 4806 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.070746 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066045 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.071110 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066128 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066257 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066306 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066353 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066352 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.071185 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066454 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066980 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067007 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067056 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067088 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067165 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067223 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067379 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067397 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067456 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067461 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067574 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067610 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067973 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.068089 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.068118 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.068181 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067636 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067848 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.068389 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.067644 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.068857 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.069790 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.070011 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.070222 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.070283 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.070710 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.070915 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.070966 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.071105 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.066904 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.071205 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.071408 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.071437 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.071217 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.071223 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.071269 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.071484 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.071296 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). 
InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.071833 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.071853 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.071917 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.072015 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.072172 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.072225 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.072281 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.072419 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.072538 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.072770 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.072921 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.073002 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.073121 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.073225 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.073304 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.073342 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.073410 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.073476 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.073533 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.073559 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.073599 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.073673 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.073831 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.074036 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.074286 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.074394 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.074615 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.074912 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: E1125 14:53:07.074928 4806 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 14:53:07 crc kubenswrapper[4806]: E1125 14:53:07.075001 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 14:53:07.574969862 +0000 UTC m=+20.227112393 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.075067 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.075084 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.077745 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.077775 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.077833 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.077917 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). 
InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.078036 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.078116 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.078403 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.078497 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.078521 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.078536 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.078596 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.078652 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.078659 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.078718 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.078757 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.078930 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.078923 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.079062 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.079096 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). 
InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.079135 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.079149 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.079206 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.079393 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.079434 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.079654 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.079688 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.080024 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.080165 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.080409 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.080433 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.080614 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.080955 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.080978 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: E1125 14:53:07.080962 4806 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 14:53:07 crc kubenswrapper[4806]: E1125 14:53:07.081177 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 14:53:07.581087763 +0000 UTC m=+20.233230184 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.081230 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.081308 4806 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.081394 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.081452 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.081725 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.081773 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.081794 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.081892 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.082021 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.082036 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.082062 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.082278 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.082441 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.082466 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.082755 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). 
InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.083974 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.084096 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.084108 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.083590 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.084648 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.085250 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.085140 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.085562 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). 
InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.085642 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.085570 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.085966 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.086023 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.086035 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.086263 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.086390 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.078747 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.086892 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.087031 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: E1125 14:53:07.087067 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:53:07.587043369 +0000 UTC m=+20.239185860 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.087177 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.087187 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.087565 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.088131 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.088717 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.090784 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.091677 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.092891 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.093387 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.093544 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.093914 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.094264 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: E1125 14:53:07.095752 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 14:53:07 crc kubenswrapper[4806]: E1125 14:53:07.095778 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 14:53:07 crc kubenswrapper[4806]: E1125 14:53:07.095792 4806 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:53:07 crc kubenswrapper[4806]: E1125 14:53:07.095858 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 14:53:07.595841224 +0000 UTC m=+20.247983635 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.096387 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.097042 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.097167 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.097597 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.098842 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.099186 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.099261 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.099455 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.099481 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.099553 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.099717 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: E1125 14:53:07.100436 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 14:53:07 crc kubenswrapper[4806]: E1125 14:53:07.100457 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 14:53:07 crc kubenswrapper[4806]: E1125 14:53:07.100468 4806 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:53:07 crc kubenswrapper[4806]: E1125 14:53:07.100502 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 14:53:07.600492163 +0000 UTC m=+20.252634574 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.100649 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.100721 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.104058 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.104183 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.104856 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.110792 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.121675 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.124666 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.130855 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.170262 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.170344 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.170367 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.170419 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.170433 4806 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.170442 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.170452 4806 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.170462 4806 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.170474 4806 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.170486 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.170499 4806 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.170510 4806 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.170520 4806 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.170532 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.170543 4806 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.170553 4806 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.170561 4806 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.170569 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.170577 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.170586 4806 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.170596 4806 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.170605 4806 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.170614 4806 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.170609 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.170625 4806 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.170700 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.170719 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.170732 4806 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.170744 4806 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.170755 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.170767 4806 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.170779 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.170791 4806 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.170803 4806 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.170843 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.170858 4806 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.170869 4806 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" 
DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.170879 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.170889 4806 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.170901 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.170912 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.170923 4806 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.170935 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.170947 4806 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.170960 4806 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.170971 4806 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.170983 4806 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171043 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171059 4806 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171071 4806 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171083 4806 reconciler_common.go:293] "Volume detached 
for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171095 4806 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171108 4806 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171121 4806 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171133 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171148 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171161 4806 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171174 4806 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171186 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171197 4806 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171209 4806 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171220 4806 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171234 4806 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171245 4806 
reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171258 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171270 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171281 4806 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171293 4806 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171304 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171335 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171350 4806 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171362 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171375 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171388 4806 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171399 4806 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171410 4806 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171436 4806 reconciler_common.go:293] "Volume detached for volume \"images\" 
(UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171447 4806 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171461 4806 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171473 4806 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171487 4806 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171499 4806 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171512 4806 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171524 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171535 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171547 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171558 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171569 4806 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171584 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171595 4806 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171608 4806 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171619 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171629 4806 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171639 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171650 4806 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171660 4806 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171670 4806 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171681 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171692 4806 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171703 4806 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171714 4806 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171725 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171736 4806 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171747 4806 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171759 4806 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171771 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171781 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171792 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171803 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171814 4806 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171826 4806 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171837 4806 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171851 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171915 4806 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171928 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171940 4806 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171950 4806 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171962 4806 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171975 4806 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171986 4806 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.171997 4806 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172015 4806 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172026 4806 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172038 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172051 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172063 4806 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172073 4806 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172085 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172097 4806 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172111 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172125 4806 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172138 4806 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172149 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172161 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172173 4806 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172184 4806 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172196 4806 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172208 4806 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172219 4806 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172232 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172244 4806 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172257 4806 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" 
(UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172270 4806 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172282 4806 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172296 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172308 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172338 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172349 4806 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172360 4806 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172372 4806 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172385 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172398 4806 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172410 4806 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172421 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172433 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" 
(UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172445 4806 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172456 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172469 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172481 4806 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172531 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172543 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172560 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172598 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172614 4806 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172626 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172638 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172649 4806 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172685 4806 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172699 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172710 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.172722 4806 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.178166 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.180387 4806 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226" exitCode=255 Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.180439 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226"} Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.190714 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.191019 4806 scope.go:117] "RemoveContainer" containerID="f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.191420 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.203534 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.215654 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.226553 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.236307 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.250854 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.294084 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.301139 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.307478 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 14:53:07 crc kubenswrapper[4806]: W1125 14:53:07.342296 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-c85afa7dc9f4be1df0c55f3da3bf5b258794144df8c2889c45201f9292b714ba WatchSource:0}: Error finding container c85afa7dc9f4be1df0c55f3da3bf5b258794144df8c2889c45201f9292b714ba: Status 404 returned error can't find the container with id c85afa7dc9f4be1df0c55f3da3bf5b258794144df8c2889c45201f9292b714ba Nov 25 14:53:07 crc kubenswrapper[4806]: W1125 14:53:07.342990 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-1f7c48195b72cb5f77708af39eab448b9f4386614410e23089584324f14db28e WatchSource:0}: Error finding container 1f7c48195b72cb5f77708af39eab448b9f4386614410e23089584324f14db28e: Status 404 returned error can't find the container with id 1f7c48195b72cb5f77708af39eab448b9f4386614410e23089584324f14db28e Nov 25 14:53:07 crc kubenswrapper[4806]: W1125 14:53:07.344204 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-42911274012d272422b8acfe30e3a16b1e44ae9dace25352651a7a8f96a354a6 WatchSource:0}: Error finding container 42911274012d272422b8acfe30e3a16b1e44ae9dace25352651a7a8f96a354a6: Status 404 returned error can't find the container with id 42911274012d272422b8acfe30e3a16b1e44ae9dace25352651a7a8f96a354a6 Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.576408 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:53:07 crc kubenswrapper[4806]: E1125 14:53:07.576548 4806 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 14:53:07 crc kubenswrapper[4806]: E1125 14:53:07.576829 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 14:53:08.576808622 +0000 UTC m=+21.228951033 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.677756 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.677858 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.677886 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:53:07 crc kubenswrapper[4806]: I1125 14:53:07.677915 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:53:07 crc kubenswrapper[4806]: E1125 14:53:07.677991 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:53:08.6779706 +0000 UTC m=+21.330113011 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:53:07 crc kubenswrapper[4806]: E1125 14:53:07.678026 4806 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 14:53:07 crc kubenswrapper[4806]: E1125 14:53:07.678059 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 14:53:08.678052982 +0000 UTC m=+21.330195393 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 14:53:07 crc kubenswrapper[4806]: E1125 14:53:07.678068 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 14:53:07 crc kubenswrapper[4806]: E1125 14:53:07.678081 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 14:53:07 crc kubenswrapper[4806]: E1125 14:53:07.678091 4806 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:53:07 crc kubenswrapper[4806]: E1125 14:53:07.678118 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 14:53:08.678109434 +0000 UTC m=+21.330251845 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:53:07 crc kubenswrapper[4806]: E1125 14:53:07.678028 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 14:53:07 crc kubenswrapper[4806]: E1125 14:53:07.678135 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 14:53:07 crc kubenswrapper[4806]: E1125 14:53:07.678141 4806 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:53:07 crc kubenswrapper[4806]: E1125 14:53:07.678159 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 14:53:08.678153425 +0000 UTC m=+21.330295836 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.088871 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:53:08 crc kubenswrapper[4806]: E1125 14:53:08.089024 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.092990 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.093547 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.094371 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.094964 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.095652 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.096189 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.096826 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.097462 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.098092 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.098610 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.099089 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.099752 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.100204 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.100726 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.101211 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.101724 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.104925 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.105355 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.105879 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.106793 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.107244 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.107765 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.108527 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.109139 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.109961 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.110543 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.111556 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.112159 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" 
path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.113011 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.113126 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.113542 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.113966 4806 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.114060 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.115916 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.116546 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.117372 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.118733 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.119345 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.120137 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.120871 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.121856 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.122301 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.123342 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.123944 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.124411 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.124895 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.125347 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.126216 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.126695 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.127764 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" 
path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.128216 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.129149 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.129646 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.130220 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.131243 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.131691 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.138470 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.151013 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.165651 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25
T14:53:06Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:52:51.509929 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:52:51.511569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2779674049/tls.crt::/tmp/serving-cert-2779674049/tls.key\\\\\\\"\\\\nI1125 14:53:06.763374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:53:06.765539 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:53:06.765558 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:53:06.765579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:53:06.765584 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:53:06.773062 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:53:06.773084 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773089 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:53:06.773097 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:53:06.773100 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:53:06.773103 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:53:06.773329 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:53:06.776065 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.178938 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.184127 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"dd5caad4136ada520798bb17c547f59cb7ec6e5310c1e4737bf82dc97e1b992e"} Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.184373 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"c85afa7dc9f4be1df0c55f3da3bf5b258794144df8c2889c45201f9292b714ba"} Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.185803 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.188079 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1"} Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.188362 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.190006 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"08d31864859b0077dd2e586c093eb3865537fd5567ad2b55ff61ba39cd3ee56e"} Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.190066 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3b60a5efe869e1d377161b70ad123fbf1bc06d0089c96eaffb7ae8129a796c76"} Nov 25 14:53:08 crc kubenswrapper[4806]: 
I1125 14:53:08.190084 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"1f7c48195b72cb5f77708af39eab448b9f4386614410e23089584324f14db28e"} Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.195581 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"42911274012d272422b8acfe30e3a16b1e44ae9dace25352651a7a8f96a354a6"} Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.216444 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.229642 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.241598 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.258265 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5caad4136ada520798bb17c547f59cb7ec6e5310c1e4737bf82dc97e1b992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.273881 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08d31864859b0077dd2e586c093eb3865537fd5567ad2b55ff61ba39cd3ee56e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b60a5efe869e1d377161b70ad123fbf1bc06d0089c96eaffb7ae8129a796c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.289022 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:52:51.509929 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:52:51.511569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2779674049/tls.crt::/tmp/serving-cert-2779674049/tls.key\\\\\\\"\\\\nI1125 14:53:06.763374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:53:06.765539 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:53:06.765558 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:53:06.765579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:53:06.765584 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:53:06.773062 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:53:06.773084 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773089 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:53:06.773097 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:53:06.773100 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:53:06.773103 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:53:06.773329 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:53:06.776065 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.301304 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.313781 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.542539 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-5lhpk"] Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.543128 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-5lhpk" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.547049 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.547728 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.547980 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.577411 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:52:51.509929 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:52:51.511569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2779674049/tls.crt::/tmp/serving-cert-2779674049/tls.key\\\\\\\"\\\\nI1125 14:53:06.763374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:53:06.765539 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:53:06.765558 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:53:06.765579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:53:06.765584 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:53:06.773062 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:53:06.773084 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773089 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:53:06.773097 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:53:06.773100 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:53:06.773103 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:53:06.773329 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:53:06.776065 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.584170 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:53:08 crc kubenswrapper[4806]: E1125 14:53:08.584331 4806 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 14:53:08 crc kubenswrapper[4806]: E1125 14:53:08.584402 4806 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 14:53:10.584387249 +0000 UTC m=+23.236529660 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.601045 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.625844 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.653908 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5caad4136ada520798bb17c547f59cb7ec6e5310c1e4737bf82dc97e1b992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.677748 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08d31864859b0077dd2e586c093eb3865537fd5567ad2b55ff61ba39cd3ee56e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b60a5efe869e1d377161b70ad123fbf1bc06d0089c96eaffb7ae8129a796c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.685333 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.685428 4806 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.685458 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.685493 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfx48\" (UniqueName: \"kubernetes.io/projected/57550f59-b31f-43c1-adca-565f246d4083-kube-api-access-tfx48\") pod \"node-resolver-5lhpk\" (UID: \"57550f59-b31f-43c1-adca-565f246d4083\") " pod="openshift-dns/node-resolver-5lhpk" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.685516 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/57550f59-b31f-43c1-adca-565f246d4083-hosts-file\") pod \"node-resolver-5lhpk\" (UID: \"57550f59-b31f-43c1-adca-565f246d4083\") " pod="openshift-dns/node-resolver-5lhpk" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.685554 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:53:08 crc kubenswrapper[4806]: E1125 14:53:08.685629 4806 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 14:53:08 crc kubenswrapper[4806]: E1125 14:53:08.685689 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 14:53:10.685670851 +0000 UTC m=+23.337813262 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 14:53:08 crc kubenswrapper[4806]: E1125 14:53:08.686055 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:53:10.686043781 +0000 UTC m=+23.338186192 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:53:08 crc kubenswrapper[4806]: E1125 14:53:08.686149 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 14:53:08 crc kubenswrapper[4806]: E1125 14:53:08.686170 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 14:53:08 crc kubenswrapper[4806]: E1125 14:53:08.686183 4806 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:53:08 crc kubenswrapper[4806]: E1125 14:53:08.686214 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 14:53:10.686205986 +0000 UTC m=+23.338348407 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:53:08 crc kubenswrapper[4806]: E1125 14:53:08.686265 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 14:53:08 crc kubenswrapper[4806]: E1125 14:53:08.686278 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 14:53:08 crc kubenswrapper[4806]: E1125 14:53:08.686287 4806 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:53:08 crc kubenswrapper[4806]: E1125 14:53:08.686329 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 14:53:10.686304538 +0000 UTC m=+23.338446949 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.696685 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.726822 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.749754 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5lhpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57550f59-b31f-43c1-adca-565f246d4083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfx48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5lhpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.785927 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfx48\" (UniqueName: \"kubernetes.io/projected/57550f59-b31f-43c1-adca-565f246d4083-kube-api-access-tfx48\") pod \"node-resolver-5lhpk\" (UID: \"57550f59-b31f-43c1-adca-565f246d4083\") " pod="openshift-dns/node-resolver-5lhpk" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.785992 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/57550f59-b31f-43c1-adca-565f246d4083-hosts-file\") pod \"node-resolver-5lhpk\" (UID: \"57550f59-b31f-43c1-adca-565f246d4083\") " pod="openshift-dns/node-resolver-5lhpk" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.786085 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/57550f59-b31f-43c1-adca-565f246d4083-hosts-file\") pod \"node-resolver-5lhpk\" (UID: \"57550f59-b31f-43c1-adca-565f246d4083\") " pod="openshift-dns/node-resolver-5lhpk" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.807188 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfx48\" (UniqueName: \"kubernetes.io/projected/57550f59-b31f-43c1-adca-565f246d4083-kube-api-access-tfx48\") pod \"node-resolver-5lhpk\" (UID: \"57550f59-b31f-43c1-adca-565f246d4083\") " pod="openshift-dns/node-resolver-5lhpk" Nov 25 14:53:08 crc kubenswrapper[4806]: I1125 14:53:08.857590 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-5lhpk" Nov 25 14:53:08 crc kubenswrapper[4806]: W1125 14:53:08.870019 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57550f59_b31f_43c1_adca_565f246d4083.slice/crio-eaa4ca1006b00dbeff82e699c2e1376ecb002d72d33aa067d47ca5ddba7e74f7 WatchSource:0}: Error finding container eaa4ca1006b00dbeff82e699c2e1376ecb002d72d33aa067d47ca5ddba7e74f7: Status 404 returned error can't find the container with id eaa4ca1006b00dbeff82e699c2e1376ecb002d72d33aa067d47ca5ddba7e74f7 Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.088528 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.088579 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:53:09 crc kubenswrapper[4806]: E1125 14:53:09.088661 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:53:09 crc kubenswrapper[4806]: E1125 14:53:09.088729 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.199711 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-5lhpk" event={"ID":"57550f59-b31f-43c1-adca-565f246d4083","Type":"ContainerStarted","Data":"9b926db706959d0dcd4f9d9366913105ae3b70b4cb2da018ed9e778ceb84cc5f"} Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.199767 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-5lhpk" event={"ID":"57550f59-b31f-43c1-adca-565f246d4083","Type":"ContainerStarted","Data":"eaa4ca1006b00dbeff82e699c2e1376ecb002d72d33aa067d47ca5ddba7e74f7"} Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.226562 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:52:51.509929 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:52:51.511569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2779674049/tls.crt::/tmp/serving-cert-2779674049/tls.key\\\\\\\"\\\\nI1125 14:53:06.763374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:53:06.765539 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:53:06.765558 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:53:06.765579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:53:06.765584 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:53:06.773062 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:53:06.773084 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773089 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:53:06.773097 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:53:06.773100 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:53:06.773103 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:53:06.773329 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:53:06.776065 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.248265 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.267459 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.291418 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5caad4136ada520798bb17c547f59cb7ec6e5310c1e4737bf82dc97e1b992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.309773 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08d31864859b0077dd2e586c093eb3865537fd5567ad2b55ff61ba39cd3ee56e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b60a5efe869e1d377161b70ad123fbf1bc06d0089c96eaffb7ae8129a796c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.328620 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.352985 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.365512 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5lhpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57550f59-b31f-43c1-adca-565f246d4083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b926db706959d0dcd4f9d9366913105ae3b70b4cb2da018ed9e778ceb84cc5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfx48\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5lhpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.553441 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-mwdqt"] Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.553744 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-zt8m9"] Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.553811 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.555607 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.559707 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 25 14:53:09 crc kubenswrapper[4806]: W1125 14:53:09.559744 4806 reflector.go:561] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": failed to list *v1.Secret: secrets "multus-ancillary-tools-dockercfg-vnmsz" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Nov 25 14:53:09 crc kubenswrapper[4806]: E1125 14:53:09.559794 4806 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-vnmsz\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"multus-ancillary-tools-dockercfg-vnmsz\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.560089 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 25 14:53:09 crc kubenswrapper[4806]: W1125 14:53:09.560165 4806 reflector.go:561] object-"openshift-multus"/"default-cni-sysctl-allowlist": failed to list *v1.ConfigMap: configmaps "default-cni-sysctl-allowlist" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Nov 25 14:53:09 crc kubenswrapper[4806]: E1125 14:53:09.560925 4806 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"default-cni-sysctl-allowlist\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 14:53:09 crc kubenswrapper[4806]: 
I1125 14:53:09.561111 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-kclf8"] Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.561283 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.561357 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.561448 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.561496 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-69wls"] Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.561745 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.563651 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.563677 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.564054 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.564149 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.564160 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.567956 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.569537 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.569712 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.569736 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.569778 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.569897 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.569960 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.570060 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.576136 4806 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.592125 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.619665 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5caad4136ada520798bb17c547f59cb7ec6e5310c1e4737bf82dc97e1b992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.654737 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08d31864859b0077dd2e586c093eb3865537fd5567ad2b55ff61ba39cd3ee56e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b60a5efe869e1d377161b70ad123fbf1bc06d0089c96eaffb7ae8129a796c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:09 crc 
kubenswrapper[4806]: I1125 14:53:09.677264 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:52:51.509929 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:52:51.511569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2779674049/tls.crt::/tmp/serving-cert-2779674049/tls.key\\\\\\\"\\\\nI1125 14:53:06.763374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:53:06.765539 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:53:06.765558 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:53:06.765579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:53:06.765584 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:53:06.773062 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:53:06.773084 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773089 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:53:06.773097 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:53:06.773100 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:53:06.773103 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:53:06.773329 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:53:06.776065 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.693661 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-multus-cni-dir\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.693701 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-os-release\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.693723 4806 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0fff40d8-fd9f-49da-953f-89894b4ef3a1-ovnkube-config\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.693738 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0fff40d8-fd9f-49da-953f-89894b4ef3a1-env-overrides\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.693756 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/39baff20-1e9a-48b1-8872-155c5ad5931d-mcd-auth-proxy-config\") pod \"machine-config-daemon-kclf8\" (UID: \"39baff20-1e9a-48b1-8872-155c5ad5931d\") " pod="openshift-machine-config-operator/machine-config-daemon-kclf8" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.693775 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-system-cni-dir\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.693791 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-multus-socket-dir-parent\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.693809 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-systemd-units\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.693866 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-cni-binary-copy\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.693890 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5vqr\" (UniqueName: \"kubernetes.io/projected/39baff20-1e9a-48b1-8872-155c5ad5931d-kube-api-access-j5vqr\") pod \"machine-config-daemon-kclf8\" (UID: \"39baff20-1e9a-48b1-8872-155c5ad5931d\") " pod="openshift-machine-config-operator/machine-config-daemon-kclf8" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.693908 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-host-run-netns\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.693924 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0fff40d8-fd9f-49da-953f-89894b4ef3a1-ovnkube-script-lib\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.693940 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-hostroot\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.693956 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/228a80dc-3be5-4125-9d07-c8eb262a0eda-system-cni-dir\") pod \"multus-additional-cni-plugins-zt8m9\" (UID: \"228a80dc-3be5-4125-9d07-c8eb262a0eda\") " pod="openshift-multus/multus-additional-cni-plugins-zt8m9" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.693971 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-run-openvswitch\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.693987 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-host-run-ovn-kubernetes\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.693993 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.694063 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/228a80dc-3be5-4125-9d07-c8eb262a0eda-cnibin\") pod \"multus-additional-cni-plugins-zt8m9\" (UID: \"228a80dc-3be5-4125-9d07-c8eb262a0eda\") " pod="openshift-multus/multus-additional-cni-plugins-zt8m9" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.694117 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/228a80dc-3be5-4125-9d07-c8eb262a0eda-cni-binary-copy\") pod \"multus-additional-cni-plugins-zt8m9\" (UID: \"228a80dc-3be5-4125-9d07-c8eb262a0eda\") " pod="openshift-multus/multus-additional-cni-plugins-zt8m9" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.694151 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-var-lib-openvswitch\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.694182 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-host-run-netns\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.694207 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-multus-daemon-config\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.694234 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/228a80dc-3be5-4125-9d07-c8eb262a0eda-os-release\") pod \"multus-additional-cni-plugins-zt8m9\" (UID: \"228a80dc-3be5-4125-9d07-c8eb262a0eda\") " 
pod="openshift-multus/multus-additional-cni-plugins-zt8m9" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.694262 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlt7n\" (UniqueName: \"kubernetes.io/projected/228a80dc-3be5-4125-9d07-c8eb262a0eda-kube-api-access-jlt7n\") pod \"multus-additional-cni-plugins-zt8m9\" (UID: \"228a80dc-3be5-4125-9d07-c8eb262a0eda\") " pod="openshift-multus/multus-additional-cni-plugins-zt8m9" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.694287 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-host-run-k8s-cni-cncf-io\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.694330 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-host-cni-bin\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.694402 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-host-cni-netd\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.694488 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-cnibin\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.694516 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-host-var-lib-cni-bin\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.694532 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-host-var-lib-cni-multus\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.694570 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-host-var-lib-kubelet\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.694590 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-node-log\") pod \"ovnkube-node-69wls\" 
(UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.694656 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-multus-conf-dir\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.694713 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/228a80dc-3be5-4125-9d07-c8eb262a0eda-tuning-conf-dir\") pod \"multus-additional-cni-plugins-zt8m9\" (UID: \"228a80dc-3be5-4125-9d07-c8eb262a0eda\") " pod="openshift-multus/multus-additional-cni-plugins-zt8m9" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.694745 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/228a80dc-3be5-4125-9d07-c8eb262a0eda-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-zt8m9\" (UID: \"228a80dc-3be5-4125-9d07-c8eb262a0eda\") " pod="openshift-multus/multus-additional-cni-plugins-zt8m9" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.694769 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/39baff20-1e9a-48b1-8872-155c5ad5931d-proxy-tls\") pod \"machine-config-daemon-kclf8\" (UID: \"39baff20-1e9a-48b1-8872-155c5ad5931d\") " pod="openshift-machine-config-operator/machine-config-daemon-kclf8" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.694785 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-etc-kubernetes\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.694803 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-host-kubelet\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.694821 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-host-slash\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.694842 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-etc-openvswitch\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.694860 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.694879 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0fff40d8-fd9f-49da-953f-89894b4ef3a1-ovn-node-metrics-cert\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.694899 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-run-systemd\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.694925 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-run-ovn\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.694942 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-log-socket\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.694963 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/39baff20-1e9a-48b1-8872-155c5ad5931d-rootfs\") pod \"machine-config-daemon-kclf8\" (UID: \"39baff20-1e9a-48b1-8872-155c5ad5931d\") " pod="openshift-machine-config-operator/machine-config-daemon-kclf8" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.694981 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-host-run-multus-certs\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.694998 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbntn\" (UniqueName: \"kubernetes.io/projected/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-kube-api-access-dbntn\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.695019 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9lvm\" (UniqueName: \"kubernetes.io/projected/0fff40d8-fd9f-49da-953f-89894b4ef3a1-kube-api-access-r9lvm\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: 
I1125 14:53:09.706338 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.717129 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5lhpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"57550f59-b31f-43c1-adca-565f246d4083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b926db706959d0dcd4f9d9366913105ae3b70b4cb2da018ed9e778ceb84cc5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfx48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5lhpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.736484 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mwdqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbntn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mwdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.749967 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mwdqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbntn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mwdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.764138 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39baff20-1e9a-48b1-8872-155c5ad5931d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kclf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.779091 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5caad4136ada520798bb17c547f59cb7ec6e5310c1e4737bf82dc97e1b992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.791776 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08d31864859b0077dd2e586c093eb3865537fd5567ad2b55ff61ba39cd3ee56e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b60a5efe869e1d377161b70ad123fbf1bc06d0089c96eaffb7ae8129a796c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.795594 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/228a80dc-3be5-4125-9d07-c8eb262a0eda-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-zt8m9\" (UID: \"228a80dc-3be5-4125-9d07-c8eb262a0eda\") " pod="openshift-multus/multus-additional-cni-plugins-zt8m9" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 
14:53:09.795643 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-multus-conf-dir\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.795703 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/228a80dc-3be5-4125-9d07-c8eb262a0eda-tuning-conf-dir\") pod \"multus-additional-cni-plugins-zt8m9\" (UID: \"228a80dc-3be5-4125-9d07-c8eb262a0eda\") " pod="openshift-multus/multus-additional-cni-plugins-zt8m9" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.795723 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/39baff20-1e9a-48b1-8872-155c5ad5931d-proxy-tls\") pod \"machine-config-daemon-kclf8\" (UID: \"39baff20-1e9a-48b1-8872-155c5ad5931d\") " pod="openshift-machine-config-operator/machine-config-daemon-kclf8" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.795740 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-etc-kubernetes\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.795758 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-host-kubelet\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.795773 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-host-slash\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.795807 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0fff40d8-fd9f-49da-953f-89894b4ef3a1-ovn-node-metrics-cert\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.795823 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-multus-conf-dir\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.795834 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-etc-openvswitch\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.795861 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-host-kubelet\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.795879 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-etc-openvswitch\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.795918 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-etc-kubernetes\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.795966 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-host-slash\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.796057 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.796117 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.796120 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-run-systemd\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.796154 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-run-systemd\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.796208 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-run-ovn\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.796260 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-log-socket\") pod 
\"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.796297 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbntn\" (UniqueName: \"kubernetes.io/projected/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-kube-api-access-dbntn\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.796264 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-run-ovn\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.796299 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-log-socket\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.796355 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9lvm\" (UniqueName: \"kubernetes.io/projected/0fff40d8-fd9f-49da-953f-89894b4ef3a1-kube-api-access-r9lvm\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.796478 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/39baff20-1e9a-48b1-8872-155c5ad5931d-rootfs\") pod \"machine-config-daemon-kclf8\" (UID: \"39baff20-1e9a-48b1-8872-155c5ad5931d\") " pod="openshift-machine-config-operator/machine-config-daemon-kclf8" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.796491 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/39baff20-1e9a-48b1-8872-155c5ad5931d-rootfs\") pod \"machine-config-daemon-kclf8\" (UID: \"39baff20-1e9a-48b1-8872-155c5ad5931d\") " pod="openshift-machine-config-operator/machine-config-daemon-kclf8" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.796514 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-host-run-multus-certs\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.796520 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/228a80dc-3be5-4125-9d07-c8eb262a0eda-tuning-conf-dir\") pod \"multus-additional-cni-plugins-zt8m9\" (UID: \"228a80dc-3be5-4125-9d07-c8eb262a0eda\") " pod="openshift-multus/multus-additional-cni-plugins-zt8m9" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.796535 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-multus-cni-dir\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " 
pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.796551 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-os-release\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.796551 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-host-run-multus-certs\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.796568 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0fff40d8-fd9f-49da-953f-89894b4ef3a1-ovnkube-config\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.796605 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/39baff20-1e9a-48b1-8872-155c5ad5931d-mcd-auth-proxy-config\") pod \"machine-config-daemon-kclf8\" (UID: \"39baff20-1e9a-48b1-8872-155c5ad5931d\") " pod="openshift-machine-config-operator/machine-config-daemon-kclf8" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.796621 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-system-cni-dir\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.796637 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-multus-socket-dir-parent\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.796652 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-systemd-units\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.796694 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0fff40d8-fd9f-49da-953f-89894b4ef3a1-env-overrides\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.796704 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-multus-cni-dir\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.796714 4806 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-cni-binary-copy\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.796750 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5vqr\" (UniqueName: \"kubernetes.io/projected/39baff20-1e9a-48b1-8872-155c5ad5931d-kube-api-access-j5vqr\") pod \"machine-config-daemon-kclf8\" (UID: \"39baff20-1e9a-48b1-8872-155c5ad5931d\") " pod="openshift-machine-config-operator/machine-config-daemon-kclf8" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.796747 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-system-cni-dir\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.796764 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-host-run-netns\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.796806 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-host-run-netns\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.796814 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-os-release\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.796802 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0fff40d8-fd9f-49da-953f-89894b4ef3a1-ovnkube-script-lib\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.797714 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/228a80dc-3be5-4125-9d07-c8eb262a0eda-system-cni-dir\") pod \"multus-additional-cni-plugins-zt8m9\" (UID: \"228a80dc-3be5-4125-9d07-c8eb262a0eda\") " pod="openshift-multus/multus-additional-cni-plugins-zt8m9" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.797763 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-run-openvswitch\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.797800 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-host-run-ovn-kubernetes\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.797836 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-hostroot\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.797896 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/228a80dc-3be5-4125-9d07-c8eb262a0eda-cnibin\") pod \"multus-additional-cni-plugins-zt8m9\" (UID: \"228a80dc-3be5-4125-9d07-c8eb262a0eda\") " pod="openshift-multus/multus-additional-cni-plugins-zt8m9" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.797926 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/228a80dc-3be5-4125-9d07-c8eb262a0eda-cni-binary-copy\") pod \"multus-additional-cni-plugins-zt8m9\" (UID: \"228a80dc-3be5-4125-9d07-c8eb262a0eda\") " pod="openshift-multus/multus-additional-cni-plugins-zt8m9" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.797949 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-host-run-netns\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.797977 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-multus-daemon-config\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.798683 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-cni-binary-copy\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.799173 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0fff40d8-fd9f-49da-953f-89894b4ef3a1-ovnkube-script-lib\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.799423 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/39baff20-1e9a-48b1-8872-155c5ad5931d-mcd-auth-proxy-config\") pod \"machine-config-daemon-kclf8\" (UID: \"39baff20-1e9a-48b1-8872-155c5ad5931d\") " pod="openshift-machine-config-operator/machine-config-daemon-kclf8" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.800079 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0fff40d8-fd9f-49da-953f-89894b4ef3a1-env-overrides\") pod \"ovnkube-node-69wls\" (UID: 
\"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.800121 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-multus-socket-dir-parent\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.800150 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/228a80dc-3be5-4125-9d07-c8eb262a0eda-system-cni-dir\") pod \"multus-additional-cni-plugins-zt8m9\" (UID: \"228a80dc-3be5-4125-9d07-c8eb262a0eda\") " pod="openshift-multus/multus-additional-cni-plugins-zt8m9" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.800202 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-run-openvswitch\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.800208 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-systemd-units\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.800247 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-host-run-ovn-kubernetes\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.800252 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0fff40d8-fd9f-49da-953f-89894b4ef3a1-ovnkube-config\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.800270 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/228a80dc-3be5-4125-9d07-c8eb262a0eda-cnibin\") pod \"multus-additional-cni-plugins-zt8m9\" (UID: \"228a80dc-3be5-4125-9d07-c8eb262a0eda\") " pod="openshift-multus/multus-additional-cni-plugins-zt8m9" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.800340 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-hostroot\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.800349 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/228a80dc-3be5-4125-9d07-c8eb262a0eda-os-release\") pod \"multus-additional-cni-plugins-zt8m9\" (UID: \"228a80dc-3be5-4125-9d07-c8eb262a0eda\") " pod="openshift-multus/multus-additional-cni-plugins-zt8m9" Nov 25 14:53:09 crc 
kubenswrapper[4806]: I1125 14:53:09.800392 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlt7n\" (UniqueName: \"kubernetes.io/projected/228a80dc-3be5-4125-9d07-c8eb262a0eda-kube-api-access-jlt7n\") pod \"multus-additional-cni-plugins-zt8m9\" (UID: \"228a80dc-3be5-4125-9d07-c8eb262a0eda\") " pod="openshift-multus/multus-additional-cni-plugins-zt8m9" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.800434 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-var-lib-openvswitch\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.800469 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-host-run-netns\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.800470 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-host-run-k8s-cni-cncf-io\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.800513 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-host-run-k8s-cni-cncf-io\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.800526 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-host-cni-bin\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.800552 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-host-cni-netd\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.800579 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/228a80dc-3be5-4125-9d07-c8eb262a0eda-os-release\") pod \"multus-additional-cni-plugins-zt8m9\" (UID: \"228a80dc-3be5-4125-9d07-c8eb262a0eda\") " pod="openshift-multus/multus-additional-cni-plugins-zt8m9" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.800580 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-host-var-lib-cni-multus\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.800645 4806 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-host-var-lib-cni-multus\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.800652 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-cnibin\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.800684 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-host-var-lib-cni-bin\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.800689 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-host-cni-bin\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.800714 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-host-var-lib-kubelet\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.800727 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-host-cni-netd\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.800746 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-node-log\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.800781 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-cnibin\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.800829 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-var-lib-openvswitch\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.800850 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/39baff20-1e9a-48b1-8872-155c5ad5931d-proxy-tls\") pod \"machine-config-daemon-kclf8\" (UID: \"39baff20-1e9a-48b1-8872-155c5ad5931d\") " 
pod="openshift-machine-config-operator/machine-config-daemon-kclf8" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.800887 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/228a80dc-3be5-4125-9d07-c8eb262a0eda-cni-binary-copy\") pod \"multus-additional-cni-plugins-zt8m9\" (UID: \"228a80dc-3be5-4125-9d07-c8eb262a0eda\") " pod="openshift-multus/multus-additional-cni-plugins-zt8m9" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.800939 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-host-var-lib-kubelet\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.800943 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-host-var-lib-cni-bin\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.800990 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-node-log\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.801195 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-multus-daemon-config\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.804128 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0fff40d8-fd9f-49da-953f-89894b4ef3a1-ovn-node-metrics-cert\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.810436 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"228a80dc-3be5-4125-9d07-c8eb262a0eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zt8m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.813192 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbntn\" (UniqueName: 
\"kubernetes.io/projected/8b7ddd20-62b7-4687-9982-83cf1cbac3ab-kube-api-access-dbntn\") pod \"multus-mwdqt\" (UID: \"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\") " pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.814942 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9lvm\" (UniqueName: \"kubernetes.io/projected/0fff40d8-fd9f-49da-953f-89894b4ef3a1-kube-api-access-r9lvm\") pod \"ovnkube-node-69wls\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.818037 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlt7n\" (UniqueName: \"kubernetes.io/projected/228a80dc-3be5-4125-9d07-c8eb262a0eda-kube-api-access-jlt7n\") pod \"multus-additional-cni-plugins-zt8m9\" (UID: \"228a80dc-3be5-4125-9d07-c8eb262a0eda\") " pod="openshift-multus/multus-additional-cni-plugins-zt8m9" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.819780 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5vqr\" (UniqueName: \"kubernetes.io/projected/39baff20-1e9a-48b1-8872-155c5ad5931d-kube-api-access-j5vqr\") pod \"machine-config-daemon-kclf8\" (UID: \"39baff20-1e9a-48b1-8872-155c5ad5931d\") " pod="openshift-machine-config-operator/machine-config-daemon-kclf8" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.824130 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.839143 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.850413 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5lhpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57550f59-b31f-43c1-adca-565f246d4083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b926db706959d0dcd4f9d9366913105ae3b70b4cb2da018ed9e778ceb84cc5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfx48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5lhpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.863418 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:52:51.509929 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:52:51.511569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2779674049/tls.crt::/tmp/serving-cert-2779674049/tls.key\\\\\\\"\\\\nI1125 14:53:06.763374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:53:06.765539 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:53:06.765558 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:53:06.765579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:53:06.765584 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:53:06.773062 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:53:06.773084 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773089 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:53:06.773097 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:53:06.773100 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:53:06.773103 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:53:06.773329 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:53:06.776065 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.871093 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-mwdqt" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.879166 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.889080 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.896529 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.900800 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:09 crc kubenswrapper[4806]: W1125 14:53:09.915123 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fff40d8_fd9f_49da_953f_89894b4ef3a1.slice/crio-7567d270b7844c179392c17fcd71a87791e5604bbb7ea656294cc4e6dcc3d82a WatchSource:0}: Error finding container 7567d270b7844c179392c17fcd71a87791e5604bbb7ea656294cc4e6dcc3d82a: Status 404 returned error can't find the container with id 7567d270b7844c179392c17fcd71a87791e5604bbb7ea656294cc4e6dcc3d82a Nov 25 14:53:09 crc kubenswrapper[4806]: I1125 14:53:09.918048 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fff40d8-fd9f-49da-953f-89894b4ef3a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69wls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:10 crc kubenswrapper[4806]: I1125 14:53:10.089087 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:53:10 crc kubenswrapper[4806]: E1125 14:53:10.089242 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:53:10 crc kubenswrapper[4806]: I1125 14:53:10.121686 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 14:53:10 crc kubenswrapper[4806]: I1125 14:53:10.128485 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 14:53:10 crc kubenswrapper[4806]: I1125 14:53:10.149816 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 25 14:53:10 crc kubenswrapper[4806]: I1125 14:53:10.150526 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:52:51.509929 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:52:51.511569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2779674049/tls.crt::/tmp/serving-cert-2779674049/tls.key\\\\\\\"\\\\nI1125 14:53:06.763374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:53:06.765539 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:53:06.765558 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:53:06.765579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:53:06.765584 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:53:06.773062 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:53:06.773084 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773089 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:53:06.773097 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:53:06.773100 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:53:06.773103 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:53:06.773329 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:53:06.776065 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:10Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:10 crc kubenswrapper[4806]: I1125 14:53:10.175225 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:10Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:10 crc kubenswrapper[4806]: I1125 14:53:10.192230 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:10Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:10 crc kubenswrapper[4806]: I1125 14:53:10.206537 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mwdqt" event={"ID":"8b7ddd20-62b7-4687-9982-83cf1cbac3ab","Type":"ContainerStarted","Data":"a0cb67ea6e13645bb6513fcd0d90197317fd39f29f25deddf11e1110572e1986"} Nov 25 14:53:10 crc kubenswrapper[4806]: I1125 14:53:10.206605 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mwdqt" event={"ID":"8b7ddd20-62b7-4687-9982-83cf1cbac3ab","Type":"ContainerStarted","Data":"07fff81259a7fd295d72baa6ceb87d9a4dfa6f4f3cf58b7f995443932e7b6191"} Nov 25 14:53:10 crc kubenswrapper[4806]: I1125 14:53:10.207679 4806 generic.go:334] "Generic (PLEG): container finished" podID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerID="99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6" exitCode=0 Nov 25 14:53:10 crc kubenswrapper[4806]: I1125 14:53:10.207740 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" event={"ID":"0fff40d8-fd9f-49da-953f-89894b4ef3a1","Type":"ContainerDied","Data":"99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6"} Nov 25 14:53:10 crc kubenswrapper[4806]: I1125 14:53:10.207762 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" event={"ID":"0fff40d8-fd9f-49da-953f-89894b4ef3a1","Type":"ContainerStarted","Data":"7567d270b7844c179392c17fcd71a87791e5604bbb7ea656294cc4e6dcc3d82a"} Nov 25 14:53:10 crc kubenswrapper[4806]: I1125 14:53:10.209044 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"47d8c7e405f12dbf2de2c53f2dc4309d02278887a2666a9ec491d490c5ce3f89"} Nov 25 14:53:10 crc kubenswrapper[4806]: I1125 14:53:10.214546 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" event={"ID":"39baff20-1e9a-48b1-8872-155c5ad5931d","Type":"ContainerStarted","Data":"a777eee4ec1617baa0196c39314ff92a8111ee97fecdabcff51f8d23a732aed2"} Nov 25 14:53:10 crc kubenswrapper[4806]: I1125 14:53:10.214613 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" event={"ID":"39baff20-1e9a-48b1-8872-155c5ad5931d","Type":"ContainerStarted","Data":"657c6813fefe5a305c46e51a3be90de3dda23db56498959efe41f94d05e8f54d"} Nov 25 14:53:10 crc kubenswrapper[4806]: I1125 14:53:10.214629 4806 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" event={"ID":"39baff20-1e9a-48b1-8872-155c5ad5931d","Type":"ContainerStarted","Data":"ef2f0adcf63244457ec5dbbac2bc3f53e13be73ddaf67b2117d4cc3eccec7aaa"} Nov 25 14:53:10 crc kubenswrapper[4806]: I1125 14:53:10.230523 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fff40d8-fd9f-49da-953f-89894b4ef3a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69wls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:10Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:10 crc kubenswrapper[4806]: I1125 14:53:10.249596 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mwdqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbntn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mwdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:10Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:10 crc kubenswrapper[4806]: I1125 14:53:10.261131 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39baff20-1e9a-48b1-8872-155c5ad5931d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kclf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:10Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:10 crc kubenswrapper[4806]: I1125 14:53:10.277428 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5caad4136ada520798bb17c547f59cb7ec6e5310c1e4737bf82dc97e1b992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:10Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:10 crc kubenswrapper[4806]: I1125 14:53:10.293546 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08d31864859b0077dd2e586c093eb3865537fd5567ad2b55ff61ba39cd3ee56e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b60a5efe869e1d377161b70ad123fbf1bc06d0089c96eaffb7ae8129a796c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:10Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:10 crc kubenswrapper[4806]: I1125 14:53:10.312136 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"228a80dc-3be5-4125-9d07-c8eb262a0eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zt8m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:10Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:10 crc kubenswrapper[4806]: I1125 14:53:10.326367 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:10Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:10 crc kubenswrapper[4806]: I1125 14:53:10.338880 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:10Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:10 crc kubenswrapper[4806]: I1125 14:53:10.352943 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5lhpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57550f59-b31f-43c1-adca-565f246d4083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b926db706959d0dcd4f9d9366913105ae3b70b4cb2da018ed9e778ceb84cc5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfx48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5lhpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:10Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:10 crc kubenswrapper[4806]: I1125 14:53:10.366715 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mwdqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cb67ea6e13645bb6513fcd0d90197317fd39f29f25deddf11e1110572e1986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbntn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mwdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:10Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:10 crc kubenswrapper[4806]: I1125 14:53:10.377420 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39baff20-1e9a-48b1-8872-155c5ad5931d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a777eee4ec1617baa0196c39314ff92a8111ee97fecdabcff51f8d23a732aed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://657c6813fefe5a305c46e51a3be90de3dda23db56498959efe41f94d05e8f54d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kclf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:10Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:10 crc kubenswrapper[4806]: I1125 14:53:10.391554 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5caad4136ada520798bb17c547f59cb7ec6e5310c1e4737bf82dc97e1b992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:10Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:10 crc kubenswrapper[4806]: I1125 14:53:10.406107 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08d31864859b0077dd2e586c093eb3865537fd5567ad2b55ff61ba39cd3ee56e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b60a5efe869e1d377161b70ad123fbf1bc06d0089c96eaffb7ae8129a796c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:10Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:10 crc kubenswrapper[4806]: I1125 14:53:10.426431 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"228a80dc-3be5-4125-9d07-c8eb262a0eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zt8m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:10Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:10 crc kubenswrapper[4806]: I1125 14:53:10.444376 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a68684-5473-4e76-bdb6-bc38a8640fed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae860c1b5905b9a0634efdd777741a3837e3e3cae53fd559afcc23ceeeabbfe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb9640bb56460bb7d1ed40effc6fa3cada4ee5369407b11a14c4a364b5b5c44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://47127ed86ce2e9c91b3876f54c95e0df6039f549eaff06c28a9922f0b660aa53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560a6654dd1da05abbf52bb38ec0fd2ea0108fabdc14d4775dc3f8334cc9a1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:10Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:10 crc kubenswrapper[4806]: I1125 14:53:10.445040 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 25 14:53:10 crc kubenswrapper[4806]: I1125 14:53:10.446333 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/228a80dc-3be5-4125-9d07-c8eb262a0eda-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-zt8m9\" (UID: \"228a80dc-3be5-4125-9d07-c8eb262a0eda\") " pod="openshift-multus/multus-additional-cni-plugins-zt8m9" Nov 25 14:53:10 crc kubenswrapper[4806]: I1125 14:53:10.457847 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47d8c7e405f12dbf2de2c53f2dc4309d02278887a2666a9ec491d490c5ce3f89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:10Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:10 crc kubenswrapper[4806]: I1125 14:53:10.470035 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:10Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:10 crc kubenswrapper[4806]: I1125 14:53:10.480708 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5lhpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57550f59-b31f-43c1-adca-565f246d4083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b926db706959d0dcd4f9d9366913105ae3b70b4cb2da018ed9e778ceb84cc5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfx48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5lhpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:10Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:10 crc kubenswrapper[4806]: I1125 14:53:10.495510 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:52:51.509929 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:52:51.511569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2779674049/tls.crt::/tmp/serving-cert-2779674049/tls.key\\\\\\\"\\\\nI1125 14:53:06.763374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:53:06.765539 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:53:06.765558 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:53:06.765579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:53:06.765584 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:53:06.773062 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:53:06.773084 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773089 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:53:06.773097 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:53:06.773100 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:53:06.773103 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:53:06.773329 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:53:06.776065 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:10Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:10 crc kubenswrapper[4806]: I1125 14:53:10.508673 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:10Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:10 crc kubenswrapper[4806]: I1125 14:53:10.522693 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:10Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:10 crc kubenswrapper[4806]: I1125 14:53:10.542108 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fff40d8-fd9f-49da-953f-89894b4ef3a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69wls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:10Z 
is after 2025-08-24T17:21:41Z" Nov 25 14:53:10 crc kubenswrapper[4806]: I1125 14:53:10.606516 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:53:10 crc kubenswrapper[4806]: E1125 14:53:10.606668 4806 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 14:53:10 crc kubenswrapper[4806]: E1125 14:53:10.606735 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 14:53:14.606714673 +0000 UTC m=+27.258857084 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 14:53:10 crc kubenswrapper[4806]: I1125 14:53:10.707310 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:53:10 crc kubenswrapper[4806]: I1125 14:53:10.707475 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:53:10 crc kubenswrapper[4806]: I1125 14:53:10.707510 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:53:10 crc kubenswrapper[4806]: E1125 14:53:10.707553 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:53:14.707514321 +0000 UTC m=+27.359656732 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:53:10 crc kubenswrapper[4806]: E1125 14:53:10.707615 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 14:53:10 crc kubenswrapper[4806]: E1125 14:53:10.707677 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 14:53:10 crc kubenswrapper[4806]: E1125 14:53:10.707692 4806 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:53:10 crc kubenswrapper[4806]: I1125 14:53:10.707707 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:53:10 crc kubenswrapper[4806]: E1125 14:53:10.707748 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 14:53:14.707728797 +0000 UTC m=+27.359871268 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:53:10 crc kubenswrapper[4806]: E1125 14:53:10.707752 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 14:53:10 crc kubenswrapper[4806]: E1125 14:53:10.707794 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 14:53:10 crc kubenswrapper[4806]: E1125 14:53:10.707809 4806 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:53:10 crc kubenswrapper[4806]: E1125 14:53:10.707840 4806 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 14:53:10 crc kubenswrapper[4806]: E1125 14:53:10.707889 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 14:53:14.707862091 +0000 UTC m=+27.360004562 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:53:10 crc kubenswrapper[4806]: E1125 14:53:10.707916 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 14:53:14.707906072 +0000 UTC m=+27.360048723 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 14:53:10 crc kubenswrapper[4806]: I1125 14:53:10.930200 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 25 14:53:10 crc kubenswrapper[4806]: I1125 14:53:10.930651 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" Nov 25 14:53:10 crc kubenswrapper[4806]: W1125 14:53:10.942735 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod228a80dc_3be5_4125_9d07_c8eb262a0eda.slice/crio-3012f4014da634e2bcb3f6a7b53148255106fd1a27f6c0fa91f435ac0777d000 WatchSource:0}: Error finding container 3012f4014da634e2bcb3f6a7b53148255106fd1a27f6c0fa91f435ac0777d000: Status 404 returned error can't find the container with id 3012f4014da634e2bcb3f6a7b53148255106fd1a27f6c0fa91f435ac0777d000 Nov 25 14:53:11 crc kubenswrapper[4806]: I1125 14:53:11.089180 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:53:11 crc kubenswrapper[4806]: I1125 14:53:11.089228 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:53:11 crc kubenswrapper[4806]: E1125 14:53:11.089329 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:53:11 crc kubenswrapper[4806]: E1125 14:53:11.089417 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:53:11 crc kubenswrapper[4806]: I1125 14:53:11.221977 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" event={"ID":"0fff40d8-fd9f-49da-953f-89894b4ef3a1","Type":"ContainerStarted","Data":"3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58"} Nov 25 14:53:11 crc kubenswrapper[4806]: I1125 14:53:11.222037 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" event={"ID":"0fff40d8-fd9f-49da-953f-89894b4ef3a1","Type":"ContainerStarted","Data":"ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f"} Nov 25 14:53:11 crc kubenswrapper[4806]: I1125 14:53:11.222054 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" event={"ID":"0fff40d8-fd9f-49da-953f-89894b4ef3a1","Type":"ContainerStarted","Data":"5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89"} Nov 25 14:53:11 crc kubenswrapper[4806]: I1125 14:53:11.222064 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" event={"ID":"0fff40d8-fd9f-49da-953f-89894b4ef3a1","Type":"ContainerStarted","Data":"97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010"} Nov 25 14:53:11 crc kubenswrapper[4806]: I1125 14:53:11.222075 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" event={"ID":"0fff40d8-fd9f-49da-953f-89894b4ef3a1","Type":"ContainerStarted","Data":"72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d"} Nov 25 14:53:11 crc kubenswrapper[4806]: I1125 14:53:11.222085 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" event={"ID":"0fff40d8-fd9f-49da-953f-89894b4ef3a1","Type":"ContainerStarted","Data":"df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8"} Nov 25 14:53:11 crc kubenswrapper[4806]: I1125 14:53:11.223067 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" event={"ID":"228a80dc-3be5-4125-9d07-c8eb262a0eda","Type":"ContainerStarted","Data":"3012f4014da634e2bcb3f6a7b53148255106fd1a27f6c0fa91f435ac0777d000"} Nov 25 14:53:11 crc kubenswrapper[4806]: I1125 14:53:11.795451 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-6jcq2"] Nov 25 14:53:11 crc kubenswrapper[4806]: I1125 14:53:11.796227 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-6jcq2" Nov 25 14:53:11 crc kubenswrapper[4806]: I1125 14:53:11.797848 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 25 14:53:11 crc kubenswrapper[4806]: I1125 14:53:11.798086 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 25 14:53:11 crc kubenswrapper[4806]: I1125 14:53:11.798110 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 25 14:53:11 crc kubenswrapper[4806]: I1125 14:53:11.798348 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 25 14:53:11 crc kubenswrapper[4806]: I1125 14:53:11.816639 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5caad4136ada520798bb17c547f59cb7ec6e5310c1e4737bf82dc97e1b992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:11Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:11 crc kubenswrapper[4806]: I1125 14:53:11.828646 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08d31864859b0077dd2e586c093eb3865537fd5567ad2b55ff61ba39cd3ee56e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b60a5efe869e1d377161b70ad123fbf1bc06d0089c96eaffb7ae8129a796c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:11Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:11 crc kubenswrapper[4806]: I1125 14:53:11.842152 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"228a80dc-3be5-4125-9d07-c8eb262a0eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zt8m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:11Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:11 crc kubenswrapper[4806]: I1125 14:53:11.852605 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jcq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9137647d-1ca0-49be-b482-8d04428e5325\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dttnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jcq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:11Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:11 crc 
kubenswrapper[4806]: I1125 14:53:11.865473 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:11Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:11 crc kubenswrapper[4806]: I1125 14:53:11.876187 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5lhpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"57550f59-b31f-43c1-adca-565f246d4083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b926db706959d0dcd4f9d9366913105ae3b70b4cb2da018ed9e778ceb84cc5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfx48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5lhpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:11Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:11 crc kubenswrapper[4806]: I1125 14:53:11.886770 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a68684-5473-4e76-bdb6-bc38a8640fed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae860c1b5905b9a0634efdd777741a3837e3e3cae53fd559afcc23ceeeabbfe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb9640bb56460bb7d1ed40effc6fa3cada4ee5369407b11a14c4a364b5b5c44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47127ed86ce2e9c91b3876f54c95e0df6039f549eaff06c28a9922f0b660aa53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560a6654dd1da05abbf52bb38ec0fd2ea0108fabdc14d4775dc3f8334cc9a1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:11Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:11 crc kubenswrapper[4806]: I1125 14:53:11.897568 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47d8c7e405f12dbf2de2c53f2dc4309d02278887a2666a9ec491d490c5ce3f89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-25T14:53:11Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:11 crc kubenswrapper[4806]: I1125 14:53:11.908997 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:11Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:11 crc kubenswrapper[4806]: I1125 14:53:11.919492 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dttnk\" (UniqueName: \"kubernetes.io/projected/9137647d-1ca0-49be-b482-8d04428e5325-kube-api-access-dttnk\") pod \"node-ca-6jcq2\" (UID: \"9137647d-1ca0-49be-b482-8d04428e5325\") " pod="openshift-image-registry/node-ca-6jcq2" Nov 25 14:53:11 crc kubenswrapper[4806]: I1125 14:53:11.919550 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9137647d-1ca0-49be-b482-8d04428e5325-serviceca\") pod \"node-ca-6jcq2\" (UID: \"9137647d-1ca0-49be-b482-8d04428e5325\") " pod="openshift-image-registry/node-ca-6jcq2" Nov 25 14:53:11 crc kubenswrapper[4806]: I1125 14:53:11.919580 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9137647d-1ca0-49be-b482-8d04428e5325-host\") pod 
\"node-ca-6jcq2\" (UID: \"9137647d-1ca0-49be-b482-8d04428e5325\") " pod="openshift-image-registry/node-ca-6jcq2" Nov 25 14:53:11 crc kubenswrapper[4806]: I1125 14:53:11.927432 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fff40d8-fd9f-49da-953f-89894b4ef3a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/
serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"co
ntainerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69wls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:11Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:11 crc kubenswrapper[4806]: I1125 14:53:11.941497 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:52:51.509929 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:52:51.511569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2779674049/tls.crt::/tmp/serving-cert-2779674049/tls.key\\\\\\\"\\\\nI1125 14:53:06.763374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:53:06.765539 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:53:06.765558 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:53:06.765579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:53:06.765584 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:53:06.773062 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:53:06.773084 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773089 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:53:06.773097 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:53:06.773100 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:53:06.773103 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:53:06.773329 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:53:06.776065 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:11Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:11 crc kubenswrapper[4806]: I1125 14:53:11.953556 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:11Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:11 crc kubenswrapper[4806]: I1125 14:53:11.965974 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39baff20-1e9a-48b1-8872-155c5ad5931d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a777eee4ec1617baa0196c39314ff92a8111ee97fecdabcff51f8d23a732aed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://657c6813fefe5a305c46e51a3be90de3dda23db56498959efe41f94d05e8f54d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kclf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:11Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:11 crc kubenswrapper[4806]: I1125 14:53:11.979524 4806 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-mwdqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cb67ea6e13645bb6513fcd0d90197317fd39f29f25deddf11e1110572e1986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbntn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mwdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:11Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:12 crc kubenswrapper[4806]: I1125 14:53:12.021057 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dttnk\" (UniqueName: \"kubernetes.io/projected/9137647d-1ca0-49be-b482-8d04428e5325-kube-api-access-dttnk\") pod \"node-ca-6jcq2\" (UID: \"9137647d-1ca0-49be-b482-8d04428e5325\") " pod="openshift-image-registry/node-ca-6jcq2" Nov 25 14:53:12 crc kubenswrapper[4806]: I1125 14:53:12.021121 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9137647d-1ca0-49be-b482-8d04428e5325-serviceca\") pod \"node-ca-6jcq2\" (UID: \"9137647d-1ca0-49be-b482-8d04428e5325\") " pod="openshift-image-registry/node-ca-6jcq2" Nov 25 14:53:12 crc kubenswrapper[4806]: I1125 14:53:12.021153 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9137647d-1ca0-49be-b482-8d04428e5325-host\") pod \"node-ca-6jcq2\" (UID: \"9137647d-1ca0-49be-b482-8d04428e5325\") " pod="openshift-image-registry/node-ca-6jcq2" Nov 25 14:53:12 crc kubenswrapper[4806]: I1125 14:53:12.021204 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9137647d-1ca0-49be-b482-8d04428e5325-host\") pod \"node-ca-6jcq2\" (UID: \"9137647d-1ca0-49be-b482-8d04428e5325\") " pod="openshift-image-registry/node-ca-6jcq2" Nov 25 14:53:12 crc kubenswrapper[4806]: I1125 14:53:12.022526 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9137647d-1ca0-49be-b482-8d04428e5325-serviceca\") pod \"node-ca-6jcq2\" (UID: \"9137647d-1ca0-49be-b482-8d04428e5325\") " pod="openshift-image-registry/node-ca-6jcq2" Nov 25 14:53:12 crc kubenswrapper[4806]: I1125 14:53:12.038435 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dttnk\" (UniqueName: \"kubernetes.io/projected/9137647d-1ca0-49be-b482-8d04428e5325-kube-api-access-dttnk\") pod \"node-ca-6jcq2\" (UID: \"9137647d-1ca0-49be-b482-8d04428e5325\") " pod="openshift-image-registry/node-ca-6jcq2" Nov 25 14:53:12 crc kubenswrapper[4806]: I1125 14:53:12.089342 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:53:12 crc kubenswrapper[4806]: E1125 14:53:12.089747 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:53:12 crc kubenswrapper[4806]: I1125 14:53:12.109646 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-6jcq2" Nov 25 14:53:12 crc kubenswrapper[4806]: W1125 14:53:12.129969 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9137647d_1ca0_49be_b482_8d04428e5325.slice/crio-0a009fe4c33c294ad6a227c625be6cb37eb5e0289564cbfcec1131a88dbf8af5 WatchSource:0}: Error finding container 0a009fe4c33c294ad6a227c625be6cb37eb5e0289564cbfcec1131a88dbf8af5: Status 404 returned error can't find the container with id 0a009fe4c33c294ad6a227c625be6cb37eb5e0289564cbfcec1131a88dbf8af5 Nov 25 14:53:12 crc kubenswrapper[4806]: I1125 14:53:12.226958 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-6jcq2" event={"ID":"9137647d-1ca0-49be-b482-8d04428e5325","Type":"ContainerStarted","Data":"0a009fe4c33c294ad6a227c625be6cb37eb5e0289564cbfcec1131a88dbf8af5"} Nov 25 14:53:12 crc kubenswrapper[4806]: I1125 14:53:12.228431 4806 generic.go:334] "Generic (PLEG): container finished" podID="228a80dc-3be5-4125-9d07-c8eb262a0eda" containerID="a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911" exitCode=0 Nov 25 14:53:12 crc kubenswrapper[4806]: I1125 14:53:12.228465 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" event={"ID":"228a80dc-3be5-4125-9d07-c8eb262a0eda","Type":"ContainerDied","Data":"a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911"} Nov 25 14:53:12 crc kubenswrapper[4806]: I1125 14:53:12.243101 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mwdqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cb67ea6e13645bb6513fcd0d90197317fd39f29f25deddf11e1110572e1986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net
.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbntn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mwdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:12 crc kubenswrapper[4806]: I1125 14:53:12.262687 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39baff20-1e9a-48b1-8872-155c5ad5931d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a777eee4ec1617baa0196c39314ff92a8111ee97fecdabcff51f8d23a732aed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://657c6813fefe5a305c46e51a3be90de3dda23db56498959efe41f94d05e8f54d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kclf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:12 crc kubenswrapper[4806]: I1125 14:53:12.277777 4806 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5caad4136ada520798bb17c547f59cb7ec6e5310c1e4737bf82dc97e1b992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:12 crc kubenswrapper[4806]: I1125 14:53:12.293825 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08d31864859b0077dd2e586c093eb3865537fd5567ad2b55ff61ba39cd3ee56e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b60a5efe869e1d377161b70ad123fbf1bc06d0089c96eaffb7ae8129a796c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:12 crc kubenswrapper[4806]: I1125 14:53:12.309453 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"228a80dc-3be5-4125-9d07-c8eb262a0eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zt8m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:12 crc kubenswrapper[4806]: I1125 14:53:12.326306 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jcq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9137647d-1ca0-49be-b482-8d04428e5325\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dttnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jcq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:12 crc kubenswrapper[4806]: I1125 14:53:12.338679 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47d8c7e405f12dbf2de2c53f2dc4309d02278887a2666a9ec491d490c5ce3f89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:12 crc kubenswrapper[4806]: I1125 14:53:12.349673 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:12 crc kubenswrapper[4806]: I1125 14:53:12.358980 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5lhpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"57550f59-b31f-43c1-adca-565f246d4083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b926db706959d0dcd4f9d9366913105ae3b70b4cb2da018ed9e778ceb84cc5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfx48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5lhpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:12 crc kubenswrapper[4806]: I1125 14:53:12.371878 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a68684-5473-4e76-bdb6-bc38a8640fed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae860c1b5905b9a0634efdd777741a3837e3e3cae53fd559afcc23ceeeabbfe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb9640bb56460bb7d1ed40effc6fa3cada4ee5369407b11a14c4a364b5b5c44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47127ed86ce2e9c91b3876f54c95e0df6039f549eaff06c28a9922f0b660aa53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560a6654dd1da05abbf52bb38ec0fd2ea0108fabdc14d4775dc3f8334cc9a1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:12 crc kubenswrapper[4806]: I1125 14:53:12.385786 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:12 crc kubenswrapper[4806]: I1125 14:53:12.397289 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:12 crc kubenswrapper[4806]: I1125 14:53:12.414410 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fff40d8-fd9f-49da-953f-89894b4ef3a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69wls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:12Z 
is after 2025-08-24T17:21:41Z" Nov 25 14:53:12 crc kubenswrapper[4806]: I1125 14:53:12.429445 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:52:51.509929 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:52:51.511569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2779674049/tls.crt::/tmp/serving-cert-2779674049/tls.key\\\\\\\"\\\\nI1125 14:53:06.763374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:53:06.765539 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:53:06.765558 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:53:06.765579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:53:06.765584 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:53:06.773062 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:53:06.773084 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773089 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:53:06.773097 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:53:06.773100 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:53:06.773103 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:53:06.773329 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:53:06.776065 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.088418 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.088464 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:53:13 crc kubenswrapper[4806]: E1125 14:53:13.088883 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:53:13 crc kubenswrapper[4806]: E1125 14:53:13.089033 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.162507 4806 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.164421 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.164461 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.164472 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.164595 4806 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.171702 4806 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.171956 4806 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.173110 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.173143 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.173152 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.173167 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.173176 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:13Z","lastTransitionTime":"2025-11-25T14:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:13 crc kubenswrapper[4806]: E1125 14:53:13.189544 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e0f8c346-5c5c-4b9e-86cb-75f930f1dadc\\\",\\\"systemUUID\\\":\\\"c9d50cce-a734-4456-ad77-ec687d096f9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:13Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.192987 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.193037 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.193048 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.193066 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.193079 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:13Z","lastTransitionTime":"2025-11-25T14:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:13 crc kubenswrapper[4806]: E1125 14:53:13.207399 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e0f8c346-5c5c-4b9e-86cb-75f930f1dadc\\\",\\\"systemUUID\\\":\\\"c9d50cce-a734-4456-ad77-ec687d096f9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:13Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.210745 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.210780 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.210789 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.210811 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.210822 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:13Z","lastTransitionTime":"2025-11-25T14:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:13 crc kubenswrapper[4806]: E1125 14:53:13.223144 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e0f8c346-5c5c-4b9e-86cb-75f930f1dadc\\\",\\\"systemUUID\\\":\\\"c9d50cce-a734-4456-ad77-ec687d096f9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:13Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.226935 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.226967 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.226977 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.226998 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.227007 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:13Z","lastTransitionTime":"2025-11-25T14:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.233114 4806 generic.go:334] "Generic (PLEG): container finished" podID="228a80dc-3be5-4125-9d07-c8eb262a0eda" containerID="648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd" exitCode=0 Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.233234 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" event={"ID":"228a80dc-3be5-4125-9d07-c8eb262a0eda","Type":"ContainerDied","Data":"648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd"} Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.234992 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-6jcq2" event={"ID":"9137647d-1ca0-49be-b482-8d04428e5325","Type":"ContainerStarted","Data":"ae80eda0f08f87065d8a30cf17d80f2be06e3b4a812798747aaed9cf6be1e82d"} Nov 25 14:53:13 crc kubenswrapper[4806]: E1125 14:53:13.244149 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeByt
es\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e0f8c346-5c5c-4b9e-86cb-75f930f1dadc\\\",\\\"systemUUID\\\":\\\"c9d50cce-a734-4456-ad77-ec687d096f9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:13Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.248294 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.248349 4806 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.248360 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.248380 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.248391 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:13Z","lastTransitionTime":"2025-11-25T14:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.251517 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:52:51.509929 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:52:51.511569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2779674049/tls.crt::/tmp/serving-cert-2779674049/tls.key\\\\\\\"\\\\nI1125 14:53:06.763374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:53:06.765539 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:53:06.765558 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:53:06.765579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:53:06.765584 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:53:06.773062 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:53:06.773084 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773089 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:53:06.773097 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:53:06.773100 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:53:06.773103 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:53:06.773329 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:53:06.776065 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:13Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:13 crc kubenswrapper[4806]: E1125 14:53:13.260277 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e0f8c346-5c5c-4b9e-86cb-75f930f1dadc\\\",\\\"systemUUID\\\":\\\"c9d50cce-a734-4456-ad77-ec687d096f9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:13Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:13 crc kubenswrapper[4806]: E1125 14:53:13.260403 4806 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.263159 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.263204 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.263214 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.263236 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.263247 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:13Z","lastTransitionTime":"2025-11-25T14:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.267680 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:13Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.282035 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:13Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.299100 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fff40d8-fd9f-49da-953f-89894b4ef3a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69wls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:13Z 
is after 2025-08-24T17:21:41Z" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.312177 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mwdqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cb67ea6e13645bb6513fcd0d90197317fd39f29f25deddf11e1110572e1986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbntn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mwdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:13Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.326622 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39baff20-1e9a-48b1-8872-155c5ad5931d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a777eee4ec1617baa0196c39314ff92a8111ee97fecdabcff51f8d23a732aed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://657c6813fefe5a305c46e51a3be90de3dda23db56498959efe41f94d05e8f54d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kclf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:13Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.343028 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5caad4136ada520798bb17c547f59cb7ec6e5310c1e4737bf82dc97e1b992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:13Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.357933 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08d31864859b0077dd2e586c093eb3865537fd5567ad2b55ff61ba39cd3ee56e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b60a5efe869e1d377161b70ad123fbf1bc06d0089c96eaffb7ae8129a796c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:13Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.365835 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.365883 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.365891 4806 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.365905 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.365915 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:13Z","lastTransitionTime":"2025-11-25T14:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.375102 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"228a80dc-3be5-4125-9d07-c8eb262a0eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-
25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zt8m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:13Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.385710 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jcq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9137647d-1ca0-49be-b482-8d04428e5325\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dttnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jcq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:13Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.398170 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a68684-5473-4e76-bdb6-bc38a8640fed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae860c1b5905b9a0634efdd777741a3837e3e3cae53fd559afcc23ceeeabbfe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb9640bb56460bb7d1ed40effc6fa3cada4ee5369407b11a14c4a364b5b5c44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47127ed86ce2e9c91b3876f54c95e0df6039f549eaff06c28a9922f0b660aa53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560a6654dd1da05abbf52bb38ec0fd2ea0108fabdc14d4775dc3f8334cc9a1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:13Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.409273 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47d8c7e405f12dbf2de2c53f2dc4309d02278887a2666a9ec491d490c5ce3f89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-25T14:53:13Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.420900 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:13Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.431811 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5lhpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"57550f59-b31f-43c1-adca-565f246d4083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b926db706959d0dcd4f9d9366913105ae3b70b4cb2da018ed9e778ceb84cc5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfx48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5lhpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:13Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.445347 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:13Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.457603 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:13Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.467826 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.467870 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.467881 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.467899 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.467909 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:13Z","lastTransitionTime":"2025-11-25T14:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.477192 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fff40d8-fd9f-49da-953f-89894b4ef3a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58bab
c8ce957002a9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69wls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:13Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.490648 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:52:51.509929 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:52:51.511569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2779674049/tls.crt::/tmp/serving-cert-2779674049/tls.key\\\\\\\"\\\\nI1125 14:53:06.763374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:53:06.765539 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:53:06.765558 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:53:06.765579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:53:06.765584 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:53:06.773062 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:53:06.773084 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773089 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:53:06.773097 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:53:06.773100 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:53:06.773103 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:53:06.773329 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:53:06.776065 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:13Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.503292 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mwdqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cb67ea6e13645bb6513fcd0d90197317fd39f29f25deddf11e1110572e1986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbntn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mwdqt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:13Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.513856 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39baff20-1e9a-48b1-8872-155c5ad5931d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a777eee4ec1617baa0196c39314ff92a8111ee97fecdabcff51f8d23a732aed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://657c6813fefe5a305c46e51a3be90de3dda23db56498959efe41f94d05e8f54d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-kclf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:13Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.525601 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5caad4136ada520798bb17c547f59cb7ec6e5310c1e4737bf82dc97e1b992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:13Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.538167 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08d31864859b0077dd2e586c093eb3865537fd5567ad2b55ff61ba39cd3ee56e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b60a5efe869e1d377161b70ad123fbf1bc06d0089c96eaffb7ae8129a796c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:13Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.552705 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"228a80dc-3be5-4125-9d07-c8eb262a0eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"moun
tPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zt8m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:13Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.564762 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jcq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9137647d-1ca0-49be-b482-8d04428e5325\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae80eda0f08f87065d8a30cf17d80f2be06e3b4a812798747aaed9cf6be1e82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dttnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jcq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:13Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.570203 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.570276 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.570289 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.570306 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.570352 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:13Z","lastTransitionTime":"2025-11-25T14:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.575442 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47d8c7e405f12dbf2de2c53f2dc4309d02278887a2666a9ec491d490c5ce3f89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:13Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.589624 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:13Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.598774 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5lhpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57550f59-b31f-43c1-adca-565f246d4083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b926db706959d0dcd4f9d9366913105ae3b70b4cb2da018ed9e778ceb84cc5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfx48\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5lhpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:13Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.609464 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a68684-5473-4e76-bdb6-bc38a8640fed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae860c1b5905b9a0634efdd777741a3837e3e3cae53fd559afcc23ceeeabbfe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb9640bb56460bb7d1ed40effc6fa3cada4ee5369407b11a14c4a364b5b5c44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47127ed86ce2e9c91b3876f54c95e0df6039f549eaff06c28a9922f0b660aa53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-
kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560a6654dd1da05abbf52bb38ec0fd2ea0108fabdc14d4775dc3f8334cc9a1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:13Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.672538 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.672572 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.672590 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.672608 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.672618 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:13Z","lastTransitionTime":"2025-11-25T14:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.774605 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.774643 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.774654 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.774672 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.774683 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:13Z","lastTransitionTime":"2025-11-25T14:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.876826 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.876864 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.876872 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.876890 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.876899 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:13Z","lastTransitionTime":"2025-11-25T14:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.979009 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.979060 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.979069 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.979084 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:13 crc kubenswrapper[4806]: I1125 14:53:13.979092 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:13Z","lastTransitionTime":"2025-11-25T14:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.081104 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.081148 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.081158 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.081175 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.081187 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:14Z","lastTransitionTime":"2025-11-25T14:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.088612 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:53:14 crc kubenswrapper[4806]: E1125 14:53:14.088777 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.183409 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.183471 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.183488 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.183510 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.183521 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:14Z","lastTransitionTime":"2025-11-25T14:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.240230 4806 generic.go:334] "Generic (PLEG): container finished" podID="228a80dc-3be5-4125-9d07-c8eb262a0eda" containerID="59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011" exitCode=0 Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.240418 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" event={"ID":"228a80dc-3be5-4125-9d07-c8eb262a0eda","Type":"ContainerDied","Data":"59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011"} Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.245352 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" event={"ID":"0fff40d8-fd9f-49da-953f-89894b4ef3a1","Type":"ContainerStarted","Data":"cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327"} Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.256549 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mwdqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cb67ea6e13645bb6513fcd0d90197317fd39f29f25deddf11e1110572e1986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbntn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mwdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:14Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.271432 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39baff20-1e9a-48b1-8872-155c5ad5931d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a777eee4ec1617baa0196c39314ff92a8111ee97fecdabcff51f8d23a732aed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o
://657c6813fefe5a305c46e51a3be90de3dda23db56498959efe41f94d05e8f54d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kclf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:14Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.285690 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.285738 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.285749 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.285767 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.285779 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:14Z","lastTransitionTime":"2025-11-25T14:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.286339 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"228a80dc-3be5-4125-9d07-c8eb262a0eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zt8m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:14Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.299154 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jcq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9137647d-1ca0-49be-b482-8d04428e5325\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae80eda0f08f87065d8a30cf17d80f2be06e3b4a812798747aaed9cf6be1e82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dttnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jcq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:14Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.312182 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5caad4136ada520798bb17c547f59cb7ec6e5310c1e4737bf82dc97e1b992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:14Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.325566 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08d31864859b0077dd2e586c093eb3865537fd5567ad2b55ff61ba39cd3ee56e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b60a5efe869e1d377161b70ad123fbf1bc06d0089c96eaffb7ae8129a796c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:14Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.339963 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a68684-5473-4e76-bdb6-bc38a8640fed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae860c1b5905b9a0634efdd777741a3837e3e3cae53fd559afcc23ceeeabbfe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb9640bb56460bb7d1ed40effc6fa3cada4ee5369407b11a14c4a364b5b5c44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47127ed86ce2e9c91b3876f54c95e0df6039f549eaff06c28a9922f0b660aa53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560a6654dd1da05abbf52bb38ec0fd2ea0108fabdc14d4775dc3f8334cc9a1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:14Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.353787 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47d8c7e405f12dbf2de2c53f2dc4309d02278887a2666a9ec491d490c5ce3f89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-25T14:53:14Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.369107 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:14Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.379989 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5lhpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"57550f59-b31f-43c1-adca-565f246d4083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b926db706959d0dcd4f9d9366913105ae3b70b4cb2da018ed9e778ceb84cc5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfx48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5lhpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:14Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.388131 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.388170 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.388180 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.388196 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.388205 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:14Z","lastTransitionTime":"2025-11-25T14:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.395835 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:52:51.509929 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:52:51.511569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2779674049/tls.crt::/tmp/serving-cert-2779674049/tls.key\\\\\\\"\\\\nI1125 14:53:06.763374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:53:06.765539 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:53:06.765558 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:53:06.765579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:53:06.765584 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:53:06.773062 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:53:06.773084 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773089 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:53:06.773097 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:53:06.773100 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:53:06.773103 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:53:06.773329 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:53:06.776065 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:14Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.408367 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:14Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.421460 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:14Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.439071 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fff40d8-fd9f-49da-953f-89894b4ef3a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69wls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:14Z 
is after 2025-08-24T17:21:41Z" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.490867 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.490923 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.490937 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.490956 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.490975 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:14Z","lastTransitionTime":"2025-11-25T14:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.593719 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.593775 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.593787 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.593806 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.593818 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:14Z","lastTransitionTime":"2025-11-25T14:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.645287 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:53:14 crc kubenswrapper[4806]: E1125 14:53:14.645487 4806 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 14:53:14 crc kubenswrapper[4806]: E1125 14:53:14.645597 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 14:53:22.64557252 +0000 UTC m=+35.297715021 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.696173 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.696230 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.696243 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.696259 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.696270 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:14Z","lastTransitionTime":"2025-11-25T14:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.746088 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.746223 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:53:14 crc kubenswrapper[4806]: E1125 14:53:14.746280 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:53:22.746249164 +0000 UTC m=+35.398391575 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:53:14 crc kubenswrapper[4806]: E1125 14:53:14.746306 4806 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.746345 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:53:14 crc kubenswrapper[4806]: E1125 14:53:14.746379 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 14:53:22.746366898 +0000 UTC m=+35.398509309 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.746399 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:53:14 crc kubenswrapper[4806]: E1125 14:53:14.746471 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 14:53:14 crc kubenswrapper[4806]: E1125 14:53:14.746491 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 14:53:14 crc kubenswrapper[4806]: E1125 14:53:14.746492 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 14:53:14 crc kubenswrapper[4806]: E1125 14:53:14.746505 4806 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:53:14 crc kubenswrapper[4806]: E1125 14:53:14.746523 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 14:53:14 crc kubenswrapper[4806]: E1125 14:53:14.746535 4806 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:53:14 crc kubenswrapper[4806]: E1125 14:53:14.746544 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 14:53:22.746534112 +0000 UTC m=+35.398676523 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:53:14 crc kubenswrapper[4806]: E1125 14:53:14.746564 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 14:53:22.746555023 +0000 UTC m=+35.398697434 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.799747 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.799809 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.799818 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.799832 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.799841 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:14Z","lastTransitionTime":"2025-11-25T14:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.902559 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.902611 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.902628 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.902649 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:14 crc kubenswrapper[4806]: I1125 14:53:14.902661 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:14Z","lastTransitionTime":"2025-11-25T14:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.004774 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.004810 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.004820 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.004835 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.004844 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:15Z","lastTransitionTime":"2025-11-25T14:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.088332 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.088340 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:53:15 crc kubenswrapper[4806]: E1125 14:53:15.088461 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:53:15 crc kubenswrapper[4806]: E1125 14:53:15.088538 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.107152 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.107194 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.107203 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.107219 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.107229 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:15Z","lastTransitionTime":"2025-11-25T14:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.209613 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.209684 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.209711 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.209735 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.209752 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:15Z","lastTransitionTime":"2025-11-25T14:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.250549 4806 generic.go:334] "Generic (PLEG): container finished" podID="228a80dc-3be5-4125-9d07-c8eb262a0eda" containerID="b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6" exitCode=0 Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.250602 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" event={"ID":"228a80dc-3be5-4125-9d07-c8eb262a0eda","Type":"ContainerDied","Data":"b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6"} Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.265277 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mwdqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cb67ea6e13645bb6513fcd0d90197317fd39f29f25deddf11e1110572e1986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-conf
ig\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbntn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mwdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:15Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.277392 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39baff20-1e9a-48b1-8872-155c5ad5931d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a777eee4ec1617baa0196c39314ff92a8111ee97fecdabcff51f8d23a732aed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://657c6813fefe5a305c46e51a3be90de3dda23db56498959efe41f94d05e8f54d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a
9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kclf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:15Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.293398 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5caad4136ada520798bb17c547f59cb7ec6e5310c1e4737bf82dc97e1b992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:15Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 
14:53:15.307357 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08d31864859b0077dd2e586c093eb3865537fd5567ad2b55ff61ba39cd3ee56e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b60a5efe869e1d377161b70ad123fbf1bc06d0089c96eaffb7ae8129a796c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:15Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.312023 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.312059 4806 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.312067 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.312079 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.312089 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:15Z","lastTransitionTime":"2025-11-25T14:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.321577 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"228a80dc-3be5-4125-9d07-c8eb262a0eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zt8m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:15Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.331298 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jcq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9137647d-1ca0-49be-b482-8d04428e5325\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae80eda0f08f87065d8a30cf17d80f2be06e3b4a812798747aaed9cf6be1e82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dttnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jcq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:15Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.342486 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a68684-5473-4e76-bdb6-bc38a8640fed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae860c1b5905b9a0634efdd777741a3837e3e3cae53fd559afcc23ceeeabbfe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb9640bb56460bb7d1ed40effc6fa3cada4ee5369407b11a14c4a364b5b5c44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47127ed86ce2e9c91b3876f54c95e0df6039f549eaff06c28a9922f0b660aa53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac
117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560a6654dd1da05abbf52bb38ec0fd2ea0108fabdc14d4775dc3f8334cc9a1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:15Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.356455 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47d8c7e405f12dbf2de2c53f2dc4309d02278887a2666a9ec491d490c5ce3f89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:15Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.368963 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:15Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.379363 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5lhpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57550f59-b31f-43c1-adca-565f246d4083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b926db706959d0dcd4f9d9366913105ae3b70b4cb2da018ed9e778ceb84cc5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfx48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5lhpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:15Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.397890 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:52:51.509929 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:52:51.511569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2779674049/tls.crt::/tmp/serving-cert-2779674049/tls.key\\\\\\\"\\\\nI1125 14:53:06.763374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:53:06.765539 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:53:06.765558 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:53:06.765579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:53:06.765584 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:53:06.773062 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:53:06.773084 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773089 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:53:06.773097 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:53:06.773100 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:53:06.773103 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:53:06.773329 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:53:06.776065 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:15Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.412047 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:15Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.414704 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.415130 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.415147 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.415167 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.415179 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:15Z","lastTransitionTime":"2025-11-25T14:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.425912 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:15Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.443822 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fff40d8-fd9f-49da-953f-89894b4ef3a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69wls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:15Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.517541 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.517585 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.517594 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.517611 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.517619 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:15Z","lastTransitionTime":"2025-11-25T14:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.620168 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.620209 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.620219 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.620234 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.620244 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:15Z","lastTransitionTime":"2025-11-25T14:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.722714 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.722753 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.722762 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.722776 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.722785 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:15Z","lastTransitionTime":"2025-11-25T14:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.825487 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.825531 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.825545 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.825562 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.825576 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:15Z","lastTransitionTime":"2025-11-25T14:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.927890 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.927928 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.927939 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.927956 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:15 crc kubenswrapper[4806]: I1125 14:53:15.927967 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:15Z","lastTransitionTime":"2025-11-25T14:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.029855 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.029882 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.029889 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.029902 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.029910 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:16Z","lastTransitionTime":"2025-11-25T14:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.088658 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:53:16 crc kubenswrapper[4806]: E1125 14:53:16.088868 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.132124 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.132170 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.132185 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.132202 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.132213 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:16Z","lastTransitionTime":"2025-11-25T14:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.234480 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.234547 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.234559 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.234587 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.234599 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:16Z","lastTransitionTime":"2025-11-25T14:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.258854 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" event={"ID":"0fff40d8-fd9f-49da-953f-89894b4ef3a1","Type":"ContainerStarted","Data":"425ada3e26983f58e99e9ab94e81d3a7e7701026652ece94e82e0e2119128bdd"} Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.259128 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.263605 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" event={"ID":"228a80dc-3be5-4125-9d07-c8eb262a0eda","Type":"ContainerStarted","Data":"7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81"} Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.275183 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:52:51.509929 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:52:51.511569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2779674049/tls.crt::/tmp/serving-cert-2779674049/tls.key\\\\\\\"\\\\nI1125 14:53:06.763374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:53:06.765539 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:53:06.765558 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:53:06.765579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:53:06.765584 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:53:06.773062 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:53:06.773084 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773089 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:53:06.773097 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:53:06.773100 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:53:06.773103 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:53:06.773329 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:53:06.776065 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:16Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.283401 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.289955 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:16Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.303818 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:16Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.323196 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fff40d8-fd9f-49da-953f-89894b4ef3a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://425ada3e26983f58e99e9ab94e81d3a7e7701026
652ece94e82e0e2119128bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69wls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:16Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.335942 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mwdqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cb67ea6e13645bb6513fcd0d90197317fd39f29f25deddf11e1110572e1986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbntn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mwdqt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:16Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.337046 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.337072 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.337081 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.337094 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.337104 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:16Z","lastTransitionTime":"2025-11-25T14:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.347837 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39baff20-1e9a-48b1-8872-155c5ad5931d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a777eee4ec1617baa0196c39314ff92a8111ee97fecdabcff51f8d23a732aed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://657c6813fefe5a305c46e51a3be90de3dda23db56498959efe41f94d05e8f54d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kclf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:16Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.360383 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"228a80dc-3be5-4125-9d07-c8eb262a0eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zt8m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:16Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.371842 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jcq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9137647d-1ca0-49be-b482-8d04428e5325\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae80eda0f08f87065d8a30cf17d80f2be06e3b4a812798747aaed9cf6be1e82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dttnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jcq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:16Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.388987 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5caad4136ada520798bb17c547f59cb7ec6e5310c1e4737bf82dc97e1b992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:16Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.401523 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08d31864859b0077dd2e586c093eb3865537fd5567ad2b55ff61ba39cd3ee56e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b60a5efe869e1d377161b70ad123fbf1bc06d0089c96eaffb7ae8129a796c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:16Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.413741 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a68684-5473-4e76-bdb6-bc38a8640fed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae860c1b5905b9a0634efdd777741a3837e3e3cae53fd559afcc23ceeeabbfe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb9640bb56460bb7d1ed40effc6fa3cada4ee5369407b11a14c4a364b5b5c44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47127ed86ce2e9c91b3876f54c95e0df6039f549eaff06c28a9922f0b660aa53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560a6654dd1da05abbf52bb38ec0fd2ea0108fabdc14d4775dc3f8334cc9a1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:16Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.426548 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47d8c7e405f12dbf2de2c53f2dc4309d02278887a2666a9ec491d490c5ce3f89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-25T14:53:16Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.439542 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.439599 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.439610 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.439628 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.439638 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:16Z","lastTransitionTime":"2025-11-25T14:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.443587 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:16Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.454210 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5lhpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57550f59-b31f-43c1-adca-565f246d4083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b926db706959d0dcd4f9d9366913105ae3b70b4cb2da018ed9e778ceb84cc5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfx48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5lhpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:16Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.464837 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5lhpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57550f59-b31f-43c1-adca-565f246d4083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b926db706959d0dcd4f9d9366913105ae3b70b4cb2da018ed9e778ceb84cc5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfx48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5lhpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:16Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.476576 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a68684-5473-4e76-bdb6-bc38a8640fed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae860c1b5905b9a0634efdd777741a3837e3e3cae53fd559afcc23ceeeabbfe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb9640bb56460bb7d1ed40effc6fa3cada4ee5369407b11a14c4a364b5b5c44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47127ed86ce2e9c91b3876f54c95e0df6039f549eaff06c28a9922f0b660aa53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560a6654dd1da05abbf52bb38ec0fd2ea0108fabdc14d4775dc3f8334cc9a1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:16Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.487502 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47d8c7e405f12dbf2de2c53f2dc4309d02278887a2666a9ec491d490c5ce3f89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-25T14:53:16Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.497598 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:16Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.513144 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fff40d8-fd9f-49da-953f-89894b4ef3a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://425ada3e26983f58e99e9ab94e81d3a7e7701026652ece94e82e0e2119128bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69wls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:16Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.524704 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:52:51.509929 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:52:51.511569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2779674049/tls.crt::/tmp/serving-cert-2779674049/tls.key\\\\\\\"\\\\nI1125 14:53:06.763374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:53:06.765539 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:53:06.765558 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:53:06.765579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:53:06.765584 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:53:06.773062 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:53:06.773084 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773089 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:53:06.773097 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:53:06.773100 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:53:06.773103 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:53:06.773329 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:53:06.776065 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:16Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.535215 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:16Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.541867 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.541896 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.541905 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.541920 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.541929 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:16Z","lastTransitionTime":"2025-11-25T14:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.546563 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:16Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.558270 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mwdqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cb67ea6e13645bb6513fcd0d90197317fd39f29f25deddf11e1110572e1986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbntn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mwdqt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:16Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.569779 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39baff20-1e9a-48b1-8872-155c5ad5931d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a777eee4ec1617baa0196c39314ff92a8111ee97fecdabcff51f8d23a732aed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://657c6813fefe5a305c46e51a3be90de3dda23db56498959efe41f94d05e8f54d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-kclf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:16Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.581745 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08d31864859b0077dd2e586c093eb3865537fd5567ad2b55ff61ba39cd3ee56e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b60a5efe869e1d377161b70ad123fbf1bc06d0089c96eaffb7ae8129a796c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:16Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.596960 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"228a80dc-3be5-4125-9d07-c8eb262a0eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6eb
e34e260d721ae05375bb72a5ab772f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zt8m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:16Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.608106 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jcq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9137647d-1ca0-49be-b482-8d04428e5325\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae80eda0f08f87065d8a30cf17d80f2be06e3b4a812798747aaed9cf6be1e82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dttnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jcq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:16Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.619387 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5caad4136ada520798bb17c547f59cb7ec6e5310c1e4737bf82dc97e1b992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:16Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.644517 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.644558 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.644569 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.644587 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.644599 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:16Z","lastTransitionTime":"2025-11-25T14:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.746264 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.746303 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.746333 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.746359 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.746372 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:16Z","lastTransitionTime":"2025-11-25T14:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.849572 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.849618 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.849628 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.849643 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.849654 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:16Z","lastTransitionTime":"2025-11-25T14:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.952624 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.952694 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.952705 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.952726 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:16 crc kubenswrapper[4806]: I1125 14:53:16.952742 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:16Z","lastTransitionTime":"2025-11-25T14:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.056139 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.056206 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.056221 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.056244 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.056258 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:17Z","lastTransitionTime":"2025-11-25T14:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.089204 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 14:53:17 crc kubenswrapper[4806]: E1125 14:53:17.089369 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.089371 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 14:53:17 crc kubenswrapper[4806]: E1125 14:53:17.089475 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.158908 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.158954 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.158963 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.158980 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.158992 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:17Z","lastTransitionTime":"2025-11-25T14:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.262988 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.263043 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.263058 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.263084 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.263098 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:17Z","lastTransitionTime":"2025-11-25T14:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.272692 4806 generic.go:334] "Generic (PLEG): container finished" podID="228a80dc-3be5-4125-9d07-c8eb262a0eda" containerID="7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81" exitCode=0
Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.272772 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" event={"ID":"228a80dc-3be5-4125-9d07-c8eb262a0eda","Type":"ContainerDied","Data":"7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81"}
Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.272845 4806 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.273357 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-69wls"
Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.291059 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47d8c7e405f12dbf2de2c53f2dc4309d02278887a2666a9ec491d490c5ce3f89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:17Z is after 2025-08-24T17:21:41Z"
Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.300205 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-69wls"
Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.307767 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:17Z is after 2025-08-24T17:21:41Z"
Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.325288 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5lhpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57550f59-b31f-43c1-adca-565f246d4083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b926db706959d0dcd4f9d9366913105ae3b70b4cb2da018ed9e778ceb84cc5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfx48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5lhpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:17Z is after 2025-08-24T17:21:41Z"
Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.340958 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a68684-5473-4e76-bdb6-bc38a8640fed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae860c1b5905b9a0634efdd777741a3837e3e3cae53fd559afcc23ceeeabbfe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb9640bb56460bb7d1ed40effc6fa3cada4ee5369407b11a14c4a364b5b5c44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47127ed86ce2e9c91b3876f54c95e0df6039f549eaff06c28a9922f0b660aa53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560a6654dd1da05abbf52bb38ec0fd2ea0108fabdc14d4775dc3f8334cc9a1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:17Z is after 2025-08-24T17:21:41Z"
Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.357047 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:17Z is after 2025-08-24T17:21:41Z"
Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.366857 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.366913 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.366924 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.366947 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.366972 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:17Z","lastTransitionTime":"2025-11-25T14:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.370514 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:17Z is after 2025-08-24T17:21:41Z"
Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.391398 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fff40d8-fd9f-49da-953f-89894b4ef3a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://425ada3e26983f58e99e9ab94e81d3a7e7701026652ece94e82e0e2119128bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69wls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:17Z is after 2025-08-24T17:21:41Z"
Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.407746 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:52:51.509929 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:52:51.511569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2779674049/tls.crt::/tmp/serving-cert-2779674049/tls.key\\\\\\\"\\\\nI1125 14:53:06.763374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:53:06.765539 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:53:06.765558 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:53:06.765579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:53:06.765584 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:53:06.773062 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:53:06.773084 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773089 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:53:06.773097 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:53:06.773100 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:53:06.773103 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:53:06.773329 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:53:06.776065 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:17Z is after 2025-08-24T17:21:41Z"
Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.423794 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mwdqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cb67ea6e13645bb6513fcd0d90197317fd39f29f25deddf11e1110572e1986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbntn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mwdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:17Z is after 2025-08-24T17:21:41Z"
Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.436987 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39baff20-1e9a-48b1-8872-155c5ad5931d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a777eee4ec1617baa0196c39314ff92a8111ee97fecdabcff51f8d23a732aed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://657c6813fefe5a305c46e51a3be90de3dda23db56498959efe41f94d05e8f54d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kclf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:17Z is after 2025-08-24T17:21:41Z"
Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.454628 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5caad4136ada520798bb17c547f59cb7ec6e5310c1e4737bf82dc97e1b992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:17Z is after 2025-08-24T17:21:41Z"
Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.470605 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.470648 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.470658 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.470676 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.470690 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:17Z","lastTransitionTime":"2025-11-25T14:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.472357 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08d31864859b0077dd2e586c093eb3865537fd5567ad2b55ff61ba39cd3ee56e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b60a5efe869e1d377161b70ad123fbf1bc06d0089c96eaffb7ae8129a796c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:17Z is after 2025-08-24T17:21:41Z"
Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.490962 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"228a80dc-3be5-4125-9d07-c8eb262a0eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\
":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5
ab772f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zt8m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:17Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.505546 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jcq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9137647d-1ca0-49be-b482-8d04428e5325\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae80eda0f08f87065d8a30cf17d80f2be06e3b4a812798747aaed9cf6be1e82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dttnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jcq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:17Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.521992 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a68684-5473-4e76-bdb6-bc38a8640fed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae860c1b5905b9a0634efdd777741a3837e3e3cae53fd559afcc23ceeeabbfe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb9640bb56460bb7d1ed40effc6fa3cada4ee5369407b11a14c4a364b5b5c44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47127ed86ce2e9c91b3876f54c95e0df6039f549eaff06c28a9922f0b660aa53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560a6654dd1da05abbf52bb38ec0fd2ea0108fabdc14d4775dc3f8334cc9a1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:17Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.535157 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47d8c7e405f12dbf2de2c53f2dc4309d02278887a2666a9ec491d490c5ce3f89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-25T14:53:17Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.552626 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:17Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.564401 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5lhpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"57550f59-b31f-43c1-adca-565f246d4083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b926db706959d0dcd4f9d9366913105ae3b70b4cb2da018ed9e778ceb84cc5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfx48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5lhpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:17Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.573283 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.573328 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.573337 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.573353 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.573363 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:17Z","lastTransitionTime":"2025-11-25T14:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.579518 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:52:51.509929 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:52:51.511569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2779674049/tls.crt::/tmp/serving-cert-2779674049/tls.key\\\\\\\"\\\\nI1125 14:53:06.763374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:53:06.765539 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:53:06.765558 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:53:06.765579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:53:06.765584 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:53:06.773062 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:53:06.773084 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773089 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:53:06.773097 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:53:06.773100 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:53:06.773103 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:53:06.773329 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:53:06.776065 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:17Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.595637 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:17Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.611107 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
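
Note: network-check-source-55646444c4-trplf, network-check-target-xd92c and networking-console-plugin-85b44fc459-gdk6g all carry lastState terminated with exitCode 137 and reason ContainerStatusUnknown ("The container could not be located when the pod was deleted"), the status the kubelet records when a container it last saw Running is gone after a restart; their current state is ContainerCreating, blocked on the same missing CNI. The "containers with unready status: [...]" strings in the conditions are derived from the per-container ready flags, roughly as in this sketch (struct and names are illustrative, not kubelet code):

    // readycond.go: derive the "containers with unready status" message from
    // ready flags, matching the strings in the patches above.
    package main

    import (
        "fmt"
        "strings"
    )

    type status struct {
        Name  string
        Ready bool
    }

    func main() {
        statuses := []status{{Name: "check-endpoints", Ready: false}}
        var unready []string
        for _, s := range statuses {
            if !s.Ready {
                unready = append(unready, s.Name)
            }
        }
        if len(unready) > 0 {
            fmt.Printf("containers with unready status: [%s]\n", strings.Join(unready, " "))
        }
    }
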
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:17Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.632101 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fff40d8-fd9f-49da-953f-89894b4ef3a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://425ada3e26983f58e99e9ab94e81d3a7e7701026
652ece94e82e0e2119128bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69wls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:17Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.646497 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mwdqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cb67ea6e13645bb6513fcd0d90197317fd39f29f25deddf11e1110572e1986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbntn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mwdqt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:17Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.661824 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39baff20-1e9a-48b1-8872-155c5ad5931d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a777eee4ec1617baa0196c39314ff92a8111ee97fecdabcff51f8d23a732aed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://657c6813fefe5a305c46e51a3be90de3dda23db56498959efe41f94d05e8f54d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-kclf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:17Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.671486 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jcq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9137647d-1ca0-49be-b482-8d04428e5325\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae80eda0f08f87065d8a30cf17d80f2be06e3b4a812798747aaed9cf6be1e82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dttnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jcq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:17Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.675450 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.675508 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.675517 4806 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.675538 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.675550 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:17Z","lastTransitionTime":"2025-11-25T14:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.686784 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5caad4136ada520798bb17c547f59cb7ec6e5310c1e4737bf82dc97e1b992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:17Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.698778 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08d31864859b0077dd2e586c093eb3865537fd5567ad2b55ff61ba39cd3ee56e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b60a5efe869e1d377161b70ad123fbf1bc06d0089c96eaffb7ae8129a796c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:17Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.712495 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"228a80dc-3be5-4125-9d07-c8eb262a0eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mo
untPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zt8m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:17Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.778041 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.778092 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.778102 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.778118 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.778127 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:17Z","lastTransitionTime":"2025-11-25T14:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.880623 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.880678 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.880690 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.880709 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.880720 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:17Z","lastTransitionTime":"2025-11-25T14:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.983673 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.983742 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.983752 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.983773 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:17 crc kubenswrapper[4806]: I1125 14:53:17.983785 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:17Z","lastTransitionTime":"2025-11-25T14:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.086019 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.086055 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.086063 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.086078 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.086087 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:18Z","lastTransitionTime":"2025-11-25T14:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.089251 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:53:18 crc kubenswrapper[4806]: E1125 14:53:18.089456 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.103415 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:18Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.112847 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5lhpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57550f59-b31f-43c1-adca-565f246d4083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b926db706959d0dcd4f9d9366913105ae3b70b4cb2da018ed9e778ceb84cc5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfx48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5lhpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:18Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.126249 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a68684-5473-4e76-bdb6-bc38a8640fed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae860c1b5905b9a0634efdd777741a3837e3e3cae53fd559afcc23ceeeabbfe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb9640bb56460bb7d1ed40effc6fa3cada4ee5369407b11a14c4a364b5b5c44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47127ed86ce2e9c91b3876f54c95e0df6039f549eaff06c28a9922f0b660aa53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560a6654dd1da05abbf52bb38ec0fd2ea0108fabdc14d4775dc3f8334cc9a1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:18Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.137977 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47d8c7e405f12dbf2de2c53f2dc4309d02278887a2666a9ec491d490c5ce3f89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:18Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.150439 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:18Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.166589 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fff40d8-fd9f-49da-953f-89894b4ef3a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://425ada3e26983f58e99e9ab94e81d3a7e7701026
652ece94e82e0e2119128bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69wls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:18Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.180584 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:52:51.509929 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:52:51.511569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2779674049/tls.crt::/tmp/serving-cert-2779674049/tls.key\\\\\\\"\\\\nI1125 14:53:06.763374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:53:06.765539 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:53:06.765558 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:53:06.765579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:53:06.765584 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:53:06.773062 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:53:06.773084 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773089 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:53:06.773097 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:53:06.773100 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:53:06.773103 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:53:06.773329 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:53:06.776065 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:18Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.189995 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.190090 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.190112 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.190144 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.190181 4806 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:18Z","lastTransitionTime":"2025-11-25T14:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.195438 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:18Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.210687 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39baff20-1e9a-48b1-8872-155c5ad5931d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a777eee4ec1617baa0196c39314ff92a8111ee97fecdabcff51f8d23a732aed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://657c6813fefe5a305c46e51a3be90de3dda23db56498959efe41f94d05e8f54d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kclf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:18Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.224688 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mwdqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cb67ea6e13645bb6513fcd0d90197317fd39f29f25deddf11e1110572e1986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-c
ni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbntn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mwdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:18Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.236797 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5caad4136ada520798bb17c547f59cb7ec6e5310c1e4737bf82dc97e1b992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:18Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.250192 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08d31864859b0077dd2e586c093eb3865537fd5567ad2b55ff61ba39cd3ee56e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b60a5efe869e1d377161b70ad123fbf1bc06d0089c96eaffb7ae8129a796c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:18Z is after 
2025-08-24T17:21:41Z" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.263692 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"228a80dc-3be5-4125-9d07-c8eb262a0eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:14Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zt8m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:18Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.275399 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jcq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9137647d-1ca0-49be-b482-8d04428e5325\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae80eda0f08f87065d8a30cf17d80f2be06e3b4a812798747aaed9cf6be1e82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dttnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jcq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:18Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.278708 4806 generic.go:334] "Generic (PLEG): container finished" podID="228a80dc-3be5-4125-9d07-c8eb262a0eda" containerID="687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061" exitCode=0 Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.278841 4806 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.279409 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" event={"ID":"228a80dc-3be5-4125-9d07-c8eb262a0eda","Type":"ContainerDied","Data":"687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061"} Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.292197 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.292244 4806 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.292256 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.292276 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.292288 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:18Z","lastTransitionTime":"2025-11-25T14:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.294156 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a68684-5473-4e76-bdb6-bc38a8640fed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae860c1b5905b9a0634efdd777741a3837e3e3cae53fd559afcc23ceeeabbfe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb9640bb56460bb7d1ed40effc6fa3cada4ee5369407b11a14c4a364b5b5c44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47127ed86ce2e9c91b3876f54c95e0df6039f549eaff06c28a9922f0b660aa53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560a6654dd1da05abbf52bb38ec0fd2ea0108fabdc14d4775dc3f8334cc9a1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:18Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.308441 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47d8c7e405f12dbf2de2c53f2dc4309d02278887a2666a9ec491d490c5ce3f89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:18Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.321816 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:18Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.333494 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5lhpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57550f59-b31f-43c1-adca-565f246d4083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b926db706959d0dcd4f9d9366913105ae3b70b4cb2da018ed9e778ceb84cc5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfx48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5lhpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:18Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.351280 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:52:51.509929 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:52:51.511569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2779674049/tls.crt::/tmp/serving-cert-2779674049/tls.key\\\\\\\"\\\\nI1125 14:53:06.763374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:53:06.765539 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:53:06.765558 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:53:06.765579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:53:06.765584 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:53:06.773062 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:53:06.773084 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773089 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:53:06.773097 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:53:06.773100 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:53:06.773103 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:53:06.773329 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:53:06.776065 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:18Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.363530 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:18Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.375290 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:18Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.393048 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fff40d8-fd9f-49da-953f-89894b4ef3a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://425ada3e26983f58e99e9ab94e81d3a7e7701026
652ece94e82e0e2119128bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69wls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:18Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.394503 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.394543 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.394553 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.394569 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.394578 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:18Z","lastTransitionTime":"2025-11-25T14:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.416524 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mwdqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cb67ea6e13645bb6513fcd0d90197317fd39f29f25deddf11e1110572e1986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbntn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mwdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:18Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.465105 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39baff20-1e9a-48b1-8872-155c5ad5931d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a777eee4ec1617baa0196c39314ff92a8111ee97fecdabcff51f8d23a732aed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://657c6813fefe5a305c46e51a3be90de3dda23db56498959efe41f94d05e8f54d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kclf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:18Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.481685 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5caad4136ada520798bb17c547f59cb7ec6e5310c1e4737bf82dc97e1b992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:18Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.495150 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08d31864859b0077dd2e586c093eb3865537fd5567ad2b55ff61ba39cd3ee56e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b60a5efe869e1d377161b70ad123fbf1bc06d0089c96eaffb7ae8129a796c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:18Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.497130 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.497179 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.497222 4806 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.497249 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.497268 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:18Z","lastTransitionTime":"2025-11-25T14:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.508904 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"228a80dc-3be5-4125-9d07-c8eb262a0eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zt8m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:18Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.519053 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jcq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9137647d-1ca0-49be-b482-8d04428e5325\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae80eda0f08f87065d8a30cf17d80f2be06e3b4a812798747aaed9cf6be1e82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dttnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jcq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:18Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.601789 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.601859 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.601872 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.601891 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.601902 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:18Z","lastTransitionTime":"2025-11-25T14:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.731690 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.731731 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.731742 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.731760 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.731780 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:18Z","lastTransitionTime":"2025-11-25T14:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.834147 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.834192 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.834204 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.834221 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.834233 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:18Z","lastTransitionTime":"2025-11-25T14:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.936890 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.937001 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.937015 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.937041 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:18 crc kubenswrapper[4806]: I1125 14:53:18.937055 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:18Z","lastTransitionTime":"2025-11-25T14:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.039438 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.039492 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.039505 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.039526 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.039540 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:19Z","lastTransitionTime":"2025-11-25T14:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.089004 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.089107 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:53:19 crc kubenswrapper[4806]: E1125 14:53:19.089191 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:53:19 crc kubenswrapper[4806]: E1125 14:53:19.089366 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.142005 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.142085 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.142112 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.142143 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.142165 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:19Z","lastTransitionTime":"2025-11-25T14:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.245083 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.245125 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.245137 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.245194 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.245210 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:19Z","lastTransitionTime":"2025-11-25T14:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.286838 4806 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.286830 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" event={"ID":"228a80dc-3be5-4125-9d07-c8eb262a0eda","Type":"ContainerStarted","Data":"d9fb517d9c8fca06d95f26ed65bbc78b53f6c555870af6ebd15afe2d5177f2d8"} Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.302198 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mwdqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cb67ea6e13645bb6513fcd0d90197317fd39f29f25deddf11e1110572e1986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus
/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbntn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mwdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:19Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.315987 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39baff20-1e9a-48b1-8872-155c5ad5931d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a777eee4ec1617baa0196c39314ff92a8111ee97fecdabcff51f8d23a732aed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://657c6813fefe5a305c46e51a3be90de3dda23db56498959efe41f94d05e8f54d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kclf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:19Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.333596 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5caad4136ada520798bb17c547f59cb7ec6e5310c1e4737bf82dc97e1b992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:19Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.346057 4806 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08d31864859b0077dd2e586c093eb3865537fd5567ad2b55ff61ba39cd3ee56e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b60a5efe869e1d377161b70ad123fbf1bc06d0089c96eaffb7ae8129a796c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:19Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.347301 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.347361 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:19 crc kubenswrapper[4806]: 
I1125 14:53:19.347370 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.347385 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.347393 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:19Z","lastTransitionTime":"2025-11-25T14:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.362707 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"228a80dc-3be5-4125-9d07-c8eb262a0eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fb517d9c8fca06d95f26ed65bbc78b53f6c555870af6ebd15afe2d5177f2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13
c8911\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zt8m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:19Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.373567 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jcq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9137647d-1ca0-49be-b482-8d04428e5325\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae80eda0f08f87065d8a30cf17d80f2be06e3b4a812798747aaed9cf6be1e82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dttnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jcq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:19Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.385264 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47d8c7e405f12dbf2de2c53f2dc4309d02278887a2666a9ec491d490c5ce3f89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:19Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.397993 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:19Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.407745 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5lhpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57550f59-b31f-43c1-adca-565f246d4083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b926db706959d0dcd4f9d9366913105ae3b70b4cb2da018ed9e778ceb84cc5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfx48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5lhpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:19Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.421157 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a68684-5473-4e76-bdb6-bc38a8640fed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae860c1b5905b9a0634efdd777741a3837e3e3cae53fd559afcc23ceeeabbfe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb9640bb56460bb7d1ed40effc6fa3cada4ee5369407b11a14c4a364b5b5c44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47127ed86ce2e9c91b3876f54c95e0df6039f549eaff06c28a9922f0b660aa53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560a6654dd1da05abbf52bb38ec0fd2ea0108fabdc14d4775dc3f8334cc9a1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:19Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.438652 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:19Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.450561 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.450592 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.450602 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.450617 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.450626 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:19Z","lastTransitionTime":"2025-11-25T14:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.451198 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:19Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.470792 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fff40d8-fd9f-49da-953f-89894b4ef3a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://425ada3e26983f58e99e9ab94e81d3a7e7701026652ece94e82e0e2119128bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69wls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:19Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.484332 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:52:51.509929 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:52:51.511569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2779674049/tls.crt::/tmp/serving-cert-2779674049/tls.key\\\\\\\"\\\\nI1125 14:53:06.763374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:53:06.765539 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:53:06.765558 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:53:06.765579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:53:06.765584 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:53:06.773062 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:53:06.773084 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773089 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:53:06.773097 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:53:06.773100 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:53:06.773103 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:53:06.773329 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:53:06.776065 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:19Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.552448 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.552495 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.552507 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.552523 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.552534 4806 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:19Z","lastTransitionTime":"2025-11-25T14:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.655420 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.655454 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.655462 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.655475 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.655485 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:19Z","lastTransitionTime":"2025-11-25T14:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.757472 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.757524 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.757533 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.757550 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.757568 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:19Z","lastTransitionTime":"2025-11-25T14:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.860379 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.860419 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.860429 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.860475 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.860483 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:19Z","lastTransitionTime":"2025-11-25T14:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.962426 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.962465 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.962475 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.962492 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:19 crc kubenswrapper[4806]: I1125 14:53:19.962503 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:19Z","lastTransitionTime":"2025-11-25T14:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.065726 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.065789 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.065806 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.065834 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.065856 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:20Z","lastTransitionTime":"2025-11-25T14:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.088963 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:53:20 crc kubenswrapper[4806]: E1125 14:53:20.089093 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.167991 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.168035 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.168045 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.168066 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.168078 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:20Z","lastTransitionTime":"2025-11-25T14:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.269993 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.270093 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.270107 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.270124 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.270136 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:20Z","lastTransitionTime":"2025-11-25T14:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.292114 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69wls_0fff40d8-fd9f-49da-953f-89894b4ef3a1/ovnkube-controller/0.log" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.294876 4806 generic.go:334] "Generic (PLEG): container finished" podID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerID="425ada3e26983f58e99e9ab94e81d3a7e7701026652ece94e82e0e2119128bdd" exitCode=1 Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.294944 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" event={"ID":"0fff40d8-fd9f-49da-953f-89894b4ef3a1","Type":"ContainerDied","Data":"425ada3e26983f58e99e9ab94e81d3a7e7701026652ece94e82e0e2119128bdd"} Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.295786 4806 scope.go:117] "RemoveContainer" containerID="425ada3e26983f58e99e9ab94e81d3a7e7701026652ece94e82e0e2119128bdd" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.310036 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5caad4136ada520798bb17c547f59cb7ec6e5310c1e4737bf82dc97e1b992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:20Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.322332 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08d31864859b0077dd2e586c093eb3865537fd5567ad2b55ff61ba39cd3ee56e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b60a5efe869e1d377161b70ad123fbf1bc06d0089c96eaffb7ae8129a796c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:20Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.334406 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"228a80dc-3be5-4125-9d07-c8eb262a0eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fb517d9c8fca06d95f26ed65bbc78b53f6c555870af6ebd15afe2d5177f2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zt8m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:20Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.345820 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jcq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9137647d-1ca0-49be-b482-8d04428e5325\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae80eda0f08f87065d8a30cf17d80f2be06e3b4a812798747aaed9cf6be1e82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dttnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jcq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:20Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.358636 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a68684-5473-4e76-bdb6-bc38a8640fed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae860c1b5905b9a0634efdd777741a3837e3e3cae53fd559afcc23ceeeabbfe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb9640bb56460bb7d1ed40effc6fa3cada4ee5369407b11a14c4a364b5b5c44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47127ed86ce2e9c91b3876f54c95e0df6039f549eaff06c28a9922f0b660aa53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560a6654dd1da05abbf52bb38ec0fd2ea0108fabdc14d4775dc3f8334cc9a1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:20Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.372884 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.372930 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.372942 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.372974 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.372992 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:20Z","lastTransitionTime":"2025-11-25T14:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.374989 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47d8c7e405f12dbf2de2c53f2dc4309d02278887a2666a9ec491d490c5ce3f89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:20Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.387539 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:20Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.397337 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5lhpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57550f59-b31f-43c1-adca-565f246d4083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b926db706959d0dcd4f9d9366913105ae3b70b4cb2da018ed9e778ceb84cc5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfx48\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5lhpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:20Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.410982 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:52:51.509929 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:52:51.511569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2779674049/tls.crt::/tmp/serving-cert-2779674049/tls.key\\\\\\\"\\\\nI1125 14:53:06.763374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:53:06.765539 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:53:06.765558 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:53:06.765579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:53:06.765584 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:53:06.773062 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:53:06.773084 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773089 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:53:06.773097 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:53:06.773100 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:53:06.773103 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:53:06.773329 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:53:06.776065 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:20Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.424574 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:20Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.440743 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:20Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.461469 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fff40d8-fd9f-49da-953f-89894b4ef3a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://425ada3e26983f58e99e9ab94e81d3a7e7701026
652ece94e82e0e2119128bdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://425ada3e26983f58e99e9ab94e81d3a7e7701026652ece94e82e0e2119128bdd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"message\\\":\\\"y event handler 4\\\\nI1125 14:53:19.335254 6029 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 14:53:19.335281 6029 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 14:53:19.337244 6029 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 14:53:19.337636 6029 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 14:53:19.339136 6029 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1125 14:53:19.339158 6029 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1125 14:53:19.339183 6029 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1125 14:53:19.339193 6029 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1125 14:53:19.339404 6029 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1125 14:53:19.339420 6029 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1125 14:53:19.339434 6029 factory.go:656] Stopping watch factory\\\\nI1125 14:53:19.339438 6029 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1125 14:53:19.339449 6029 ovnkube.go:599] Stopped ovnkube\\\\nI1125 
14\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69wls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:20Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.473908 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mwdqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cb67ea6e13645bb6513fcd0d90197317fd39f29f25deddf11e1110572e1986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/c
ni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbntn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mwdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:20Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.475215 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.475258 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.475269 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.475285 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.475295 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:20Z","lastTransitionTime":"2025-11-25T14:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.485682 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39baff20-1e9a-48b1-8872-155c5ad5931d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a777eee4ec1617baa0196c39314ff92a8111ee97fecdabcff51f8d23a732aed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://657c6813fefe5a305c46e51a3be90de3dda23db56498959efe41f94d05e8f54d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kclf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:20Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.577623 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.577672 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.577684 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.577700 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.577711 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:20Z","lastTransitionTime":"2025-11-25T14:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.679687 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.679729 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.679738 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.679753 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.679763 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:20Z","lastTransitionTime":"2025-11-25T14:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.781473 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.781520 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.781533 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.781549 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.781560 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:20Z","lastTransitionTime":"2025-11-25T14:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.884230 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.884278 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.884287 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.884301 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.884328 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:20Z","lastTransitionTime":"2025-11-25T14:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.986249 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.986294 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.986306 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.986338 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:20 crc kubenswrapper[4806]: I1125 14:53:20.986349 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:20Z","lastTransitionTime":"2025-11-25T14:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.088227 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.088286 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:53:21 crc kubenswrapper[4806]: E1125 14:53:21.088384 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:53:21 crc kubenswrapper[4806]: E1125 14:53:21.088507 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.088890 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.088913 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.088922 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.088933 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.088941 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:21Z","lastTransitionTime":"2025-11-25T14:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.190915 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.190960 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.190976 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.190997 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.191012 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:21Z","lastTransitionTime":"2025-11-25T14:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.293258 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.293302 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.293336 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.293357 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.293368 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:21Z","lastTransitionTime":"2025-11-25T14:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.299840 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69wls_0fff40d8-fd9f-49da-953f-89894b4ef3a1/ovnkube-controller/1.log" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.300592 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69wls_0fff40d8-fd9f-49da-953f-89894b4ef3a1/ovnkube-controller/0.log" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.303427 4806 generic.go:334] "Generic (PLEG): container finished" podID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerID="3e82f349fde59423ab15775184687ea285fb55bdecd6aa2ad7d6ce44289511dd" exitCode=1 Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.303487 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" event={"ID":"0fff40d8-fd9f-49da-953f-89894b4ef3a1","Type":"ContainerDied","Data":"3e82f349fde59423ab15775184687ea285fb55bdecd6aa2ad7d6ce44289511dd"} Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.303524 4806 scope.go:117] "RemoveContainer" containerID="425ada3e26983f58e99e9ab94e81d3a7e7701026652ece94e82e0e2119128bdd" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.304291 4806 scope.go:117] "RemoveContainer" containerID="3e82f349fde59423ab15775184687ea285fb55bdecd6aa2ad7d6ce44289511dd" Nov 25 14:53:21 crc kubenswrapper[4806]: E1125 14:53:21.304474 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-69wls_openshift-ovn-kubernetes(0fff40d8-fd9f-49da-953f-89894b4ef3a1)\"" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.317202 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mwdqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cb67ea6e13645bb6513fcd0d90197317fd39f29f25deddf11e1110572e1986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbntn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mwdqt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:21Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.327057 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39baff20-1e9a-48b1-8872-155c5ad5931d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a777eee4ec1617baa0196c39314ff92a8111ee97fecdabcff51f8d23a732aed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://657c6813fefe5a305c46e51a3be90de3dda23db56498959efe41f94d05e8f54d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-kclf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:21Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.339162 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5caad4136ada520798bb17c547f59cb7ec6e5310c1e4737bf82dc97e1b992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:21Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.350970 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08d31864859b0077dd2e586c093eb3865537fd5567ad2b55ff61ba39cd3ee56e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b60a5efe869e1d377161b70ad123fbf1bc06d0089c96eaffb7ae8129a796c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:21Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.367384 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"228a80dc-3be5-4125-9d07-c8eb262a0eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fb517d9c8fca06d95f26ed65bbc78b53f6c555870af6ebd15afe2d5177f2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zt8m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:21Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.376729 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jcq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9137647d-1ca0-49be-b482-8d04428e5325\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae80eda0f08f87065d8a30cf17d80f2be06e3b4a812798747aaed9cf6be1e82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dttnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jcq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:21Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.387414 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a68684-5473-4e76-bdb6-bc38a8640fed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae860c1b5905b9a0634efdd777741a3837e3e3cae53fd559afcc23ceeeabbfe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb9640bb56460bb7d1ed40effc6fa3cada4ee5369407b11a14c4a364b5b5c44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47127ed86ce2e9c91b3876f54c95e0df6039f549eaff06c28a9922f0b660aa53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560a6654dd1da05abbf52bb38ec0fd2ea0108fabdc14d4775dc3f8334cc9a1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:21Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.396019 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.396048 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.396056 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.396069 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.396078 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:21Z","lastTransitionTime":"2025-11-25T14:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.399904 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47d8c7e405f12dbf2de2c53f2dc4309d02278887a2666a9ec491d490c5ce3f89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:21Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.410926 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:21Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.421471 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5lhpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57550f59-b31f-43c1-adca-565f246d4083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b926db706959d0dcd4f9d9366913105ae3b70b4cb2da018ed9e778ceb84cc5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfx48\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5lhpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:21Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.438557 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:52:51.509929 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:52:51.511569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2779674049/tls.crt::/tmp/serving-cert-2779674049/tls.key\\\\\\\"\\\\nI1125 14:53:06.763374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:53:06.765539 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:53:06.765558 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:53:06.765579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:53:06.765584 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:53:06.773062 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:53:06.773084 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773089 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:53:06.773097 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:53:06.773100 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:53:06.773103 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:53:06.773329 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:53:06.776065 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:21Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.451881 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:21Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.464184 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:21Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.483391 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fff40d8-fd9f-49da-953f-89894b4ef3a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e82f349fde59423ab15775184687ea285fb55bd
ecd6aa2ad7d6ce44289511dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://425ada3e26983f58e99e9ab94e81d3a7e7701026652ece94e82e0e2119128bdd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"message\\\":\\\"y event handler 4\\\\nI1125 14:53:19.335254 6029 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 14:53:19.335281 6029 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 14:53:19.337244 6029 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 14:53:19.337636 6029 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 14:53:19.339136 6029 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1125 14:53:19.339158 6029 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1125 14:53:19.339183 6029 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1125 14:53:19.339193 6029 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1125 14:53:19.339404 6029 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1125 14:53:19.339420 6029 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1125 14:53:19.339434 6029 factory.go:656] Stopping watch factory\\\\nI1125 14:53:19.339438 6029 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1125 14:53:19.339449 6029 ovnkube.go:599] Stopped ovnkube\\\\nI1125 14\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e82f349fde59423ab15775184687ea285fb55bdecd6aa2ad7d6ce44289511dd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"message\\\":\\\"r 6 for removal\\\\nI1125 14:53:21.008919 6238 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1125 14:53:21.008796 6238 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:53:21.008993 6238 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:53:21.009024 6238 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 14:53:21.009041 6238 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1125 14:53:21.009012 6238 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 14:53:21.009194 6238 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 14:53:21.009261 6238 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:53:21.009397 6238 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:53:21.009742 6238 factory.go:656] Stopping watch factory\\\\nI1125 14:53:21.009760 6238 ovnkube.go:599] 
Stopped ovnkube\\\\nI1125 14:53:21.009783 6238 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 14:53:21.009837 6238 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\
\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69wls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:21Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.497806 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.497846 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.497857 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.497875 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.497886 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:21Z","lastTransitionTime":"2025-11-25T14:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.599840 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.599880 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.599890 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.599907 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.599917 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:21Z","lastTransitionTime":"2025-11-25T14:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.610478 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.624357 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-op
erator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:52:51.509929 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:52:51.511569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2779674049/tls.crt::/tmp/serving-cert-2779674049/tls.key\\\\\\\"\\\\nI1125 14:53:06.763374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:53:06.765539 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:53:06.765558 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:53:06.765579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:53:06.765584 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:53:06.773062 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:53:06.773084 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773089 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773094 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:53:06.773097 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:53:06.773100 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:53:06.773103 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:53:06.773329 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:53:06.776065 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:21Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.637754 4806 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:21Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.649507 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:21Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.669651 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fff40d8-fd9f-49da-953f-89894b4ef3a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e82f349fde59423ab15775184687ea285fb55bd
ecd6aa2ad7d6ce44289511dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://425ada3e26983f58e99e9ab94e81d3a7e7701026652ece94e82e0e2119128bdd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"message\\\":\\\"y event handler 4\\\\nI1125 14:53:19.335254 6029 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 14:53:19.335281 6029 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 14:53:19.337244 6029 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 14:53:19.337636 6029 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 14:53:19.339136 6029 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1125 14:53:19.339158 6029 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1125 14:53:19.339183 6029 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1125 14:53:19.339193 6029 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1125 14:53:19.339404 6029 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1125 14:53:19.339420 6029 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1125 14:53:19.339434 6029 factory.go:656] Stopping watch factory\\\\nI1125 14:53:19.339438 6029 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1125 14:53:19.339449 6029 ovnkube.go:599] Stopped ovnkube\\\\nI1125 14\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e82f349fde59423ab15775184687ea285fb55bdecd6aa2ad7d6ce44289511dd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"message\\\":\\\"r 6 for removal\\\\nI1125 14:53:21.008919 6238 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1125 14:53:21.008796 6238 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:53:21.008993 6238 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:53:21.009024 6238 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 14:53:21.009041 6238 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1125 14:53:21.009012 6238 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 14:53:21.009194 6238 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 14:53:21.009261 6238 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:53:21.009397 6238 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:53:21.009742 6238 factory.go:656] Stopping watch factory\\\\nI1125 14:53:21.009760 6238 ovnkube.go:599] 
Stopped ovnkube\\\\nI1125 14:53:21.009783 6238 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 14:53:21.009837 6238 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\
\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69wls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:21Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.684574 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mwdqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cb67ea6e13645bb6513fcd0d90197317fd39f29f25deddf11e1110572e1986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbntn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mwdqt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:21Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.697530 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39baff20-1e9a-48b1-8872-155c5ad5931d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a777eee4ec1617baa0196c39314ff92a8111ee97fecdabcff51f8d23a732aed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://657c6813fefe5a305c46e51a3be90de3dda23db56498959efe41f94d05e8f54d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-kclf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:21Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.701636 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.701680 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.701689 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.701705 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.701715 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:21Z","lastTransitionTime":"2025-11-25T14:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.711164 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5caad4136ada520798bb17c547f59cb7ec6e5310c1e4737bf82dc97e1b992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:21Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.723875 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08d31864859b0077dd2e586c093eb3865537fd5567ad2b55ff61ba39cd3ee56e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b60a5efe869e1d377161b70ad123fbf1bc06d0089c96eaffb7ae8129a796c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:21Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.739443 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"228a80dc-3be5-4125-9d07-c8eb262a0eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fb517d9c8fca06d95f26ed65bbc78b53f6c555870af6ebd15afe2d5177f2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:14Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zt8m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:21Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:21 crc kubenswrapper[4806]: 
I1125 14:53:21.750568 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jcq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9137647d-1ca0-49be-b482-8d04428e5325\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae80eda0f08f87065d8a30cf17d80f2be06e3b4a812798747aaed9cf6be1e82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dttnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jcq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:21Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.764185 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a68684-5473-4e76-bdb6-bc38a8640fed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae860c1b5905b9a0634efdd777741a3837e3e3cae53fd559afcc23ceeeabbfe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb9640bb56460bb7d1ed40effc6fa3cada4ee5369407b11a14c4a364b5b5c44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47127ed86ce2e9c91b3876f54c95e0df6039f549eaff06c28a9922f0b660aa53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560a6654dd1da05abbf52bb38ec0fd2ea0108fabdc14d4775dc3f8334cc9a1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:21Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.776166 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47d8c7e405f12dbf2de2c53f2dc4309d02278887a2666a9ec491d490c5ce3f89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-25T14:53:21Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.788577 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:21Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.799977 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5lhpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"57550f59-b31f-43c1-adca-565f246d4083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b926db706959d0dcd4f9d9366913105ae3b70b4cb2da018ed9e778ceb84cc5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfx48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5lhpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:21Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.803514 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.803542 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.803552 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.803565 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.803576 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:21Z","lastTransitionTime":"2025-11-25T14:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.905840 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.905910 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.905921 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.905939 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:21 crc kubenswrapper[4806]: I1125 14:53:21.905950 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:21Z","lastTransitionTime":"2025-11-25T14:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.008108 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.008154 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.008162 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.008177 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.008185 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:22Z","lastTransitionTime":"2025-11-25T14:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.089187 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:53:22 crc kubenswrapper[4806]: E1125 14:53:22.089383 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.110796 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.110844 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.110857 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.110874 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.110889 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:22Z","lastTransitionTime":"2025-11-25T14:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.213175 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.213222 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.213232 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.213247 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.213257 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:22Z","lastTransitionTime":"2025-11-25T14:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.308300 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69wls_0fff40d8-fd9f-49da-953f-89894b4ef3a1/ovnkube-controller/1.log" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.315514 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.315556 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.315566 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.315583 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.315592 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:22Z","lastTransitionTime":"2025-11-25T14:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.415971 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2mmdk"] Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.416425 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2mmdk" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.418127 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.418267 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.418301 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.418325 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.418339 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.418349 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:22Z","lastTransitionTime":"2025-11-25T14:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.421670 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.432155 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.441850 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5lhpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"57550f59-b31f-43c1-adca-565f246d4083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b926db706959d0dcd4f9d9366913105ae3b70b4cb2da018ed9e778ceb84cc5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfx48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5lhpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.455541 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a68684-5473-4e76-bdb6-bc38a8640fed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae860c1b5905b9a0634efdd777741a3837e3e3cae53fd559afcc23ceeeabbfe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb9640bb56460bb7d1ed40effc6fa3cada4ee5369407b11a14c4a364b5b5c44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47127ed86ce2e9c91b3876f54c95e0df6039f549eaff06c28a9922f0b660aa53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560a6654dd1da05abbf52bb38ec0fd2ea0108fabdc14d4775dc3f8334cc9a1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.461822 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhrx4\" (UniqueName: \"kubernetes.io/projected/5a29a188-9022-41a4-8f1f-4a3274ffe3f9-kube-api-access-fhrx4\") pod \"ovnkube-control-plane-749d76644c-2mmdk\" (UID: \"5a29a188-9022-41a4-8f1f-4a3274ffe3f9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2mmdk" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.461950 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5a29a188-9022-41a4-8f1f-4a3274ffe3f9-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-2mmdk\" (UID: \"5a29a188-9022-41a4-8f1f-4a3274ffe3f9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2mmdk" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.461982 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5a29a188-9022-41a4-8f1f-4a3274ffe3f9-env-overrides\") pod \"ovnkube-control-plane-749d76644c-2mmdk\" (UID: \"5a29a188-9022-41a4-8f1f-4a3274ffe3f9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2mmdk" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.462063 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5a29a188-9022-41a4-8f1f-4a3274ffe3f9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-2mmdk\" (UID: \"5a29a188-9022-41a4-8f1f-4a3274ffe3f9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2mmdk" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.469629 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47d8c7e405f12dbf2de2c53f2dc4309d02278887a2666a9ec491d490c5ce3f89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.483856 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.502800 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fff40d8-fd9f-49da-953f-89894b4ef3a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e82f349fde59423ab15775184687ea285fb55bd
ecd6aa2ad7d6ce44289511dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://425ada3e26983f58e99e9ab94e81d3a7e7701026652ece94e82e0e2119128bdd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"message\\\":\\\"y event handler 4\\\\nI1125 14:53:19.335254 6029 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 14:53:19.335281 6029 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 14:53:19.337244 6029 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 14:53:19.337636 6029 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 14:53:19.339136 6029 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1125 14:53:19.339158 6029 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1125 14:53:19.339183 6029 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1125 14:53:19.339193 6029 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1125 14:53:19.339404 6029 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1125 14:53:19.339420 6029 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1125 14:53:19.339434 6029 factory.go:656] Stopping watch factory\\\\nI1125 14:53:19.339438 6029 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1125 14:53:19.339449 6029 ovnkube.go:599] Stopped ovnkube\\\\nI1125 14\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e82f349fde59423ab15775184687ea285fb55bdecd6aa2ad7d6ce44289511dd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"message\\\":\\\"r 6 for removal\\\\nI1125 14:53:21.008919 6238 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1125 14:53:21.008796 6238 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:53:21.008993 6238 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:53:21.009024 6238 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 14:53:21.009041 6238 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1125 14:53:21.009012 6238 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 14:53:21.009194 6238 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 14:53:21.009261 6238 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:53:21.009397 6238 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:53:21.009742 6238 factory.go:656] Stopping watch factory\\\\nI1125 14:53:21.009760 6238 ovnkube.go:599] 
Stopped ovnkube\\\\nI1125 14:53:21.009783 6238 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 14:53:21.009837 6238 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\
\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69wls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.518269 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:52:51.509929 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:52:51.511569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2779674049/tls.crt::/tmp/serving-cert-2779674049/tls.key\\\\\\\"\\\\nI1125 14:53:06.763374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:53:06.765539 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:53:06.765558 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:53:06.765579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:53:06.765584 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:53:06.773062 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:53:06.773084 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773089 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:53:06.773097 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:53:06.773100 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:53:06.773103 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:53:06.773329 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:53:06.776065 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.521458 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.521528 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.521544 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.521570 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.521591 4806 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:22Z","lastTransitionTime":"2025-11-25T14:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.532818 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.544429 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39baff20-1e9a-48b1-8872-155c5ad5931d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a777eee4ec1617baa0196c39314ff92a8111ee97fecdabcff51f8d23a732aed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://657c6813fefe5a305c46e51a3be90de3dda23db56498959efe41f94d05e8f54d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kclf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.555267 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2mmdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a29a188-9022-41a4-8f1f-4a3274ffe3f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhrx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhrx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2mmdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.562574 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhrx4\" (UniqueName: \"kubernetes.io/projected/5a29a188-9022-41a4-8f1f-4a3274ffe3f9-kube-api-access-fhrx4\") pod \"ovnkube-control-plane-749d76644c-2mmdk\" (UID: \"5a29a188-9022-41a4-8f1f-4a3274ffe3f9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2mmdk" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.562638 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5a29a188-9022-41a4-8f1f-4a3274ffe3f9-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-2mmdk\" (UID: \"5a29a188-9022-41a4-8f1f-4a3274ffe3f9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2mmdk" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.562666 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5a29a188-9022-41a4-8f1f-4a3274ffe3f9-env-overrides\") pod \"ovnkube-control-plane-749d76644c-2mmdk\" (UID: \"5a29a188-9022-41a4-8f1f-4a3274ffe3f9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2mmdk" 
Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.562688 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5a29a188-9022-41a4-8f1f-4a3274ffe3f9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-2mmdk\" (UID: \"5a29a188-9022-41a4-8f1f-4a3274ffe3f9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2mmdk" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.563477 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5a29a188-9022-41a4-8f1f-4a3274ffe3f9-env-overrides\") pod \"ovnkube-control-plane-749d76644c-2mmdk\" (UID: \"5a29a188-9022-41a4-8f1f-4a3274ffe3f9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2mmdk" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.563498 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5a29a188-9022-41a4-8f1f-4a3274ffe3f9-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-2mmdk\" (UID: \"5a29a188-9022-41a4-8f1f-4a3274ffe3f9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2mmdk" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.568223 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5a29a188-9022-41a4-8f1f-4a3274ffe3f9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-2mmdk\" (UID: \"5a29a188-9022-41a4-8f1f-4a3274ffe3f9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2mmdk" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.569547 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mwdqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cb67ea6e13645bb6513fcd0d90197317fd39f29f25deddf11e1110572e1986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbntn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mwdqt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.578778 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhrx4\" (UniqueName: \"kubernetes.io/projected/5a29a188-9022-41a4-8f1f-4a3274ffe3f9-kube-api-access-fhrx4\") pod \"ovnkube-control-plane-749d76644c-2mmdk\" (UID: \"5a29a188-9022-41a4-8f1f-4a3274ffe3f9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2mmdk" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.584956 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5caad4136ada520798bb17c547f59cb7ec6e5310c1e4737bf82dc97e1b992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.599611 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08d31864859b0077dd2e586c093eb3865537fd5567ad2b55ff61ba39cd3ee56e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b60a5efe869e1d377161b70ad123fbf1bc06d0089c96eaffb7ae8129a796c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.614587 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"228a80dc-3be5-4125-9d07-c8eb262a0eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fb517d9c8fca06d95f26ed65bbc78b53f6c555870af6ebd15afe2d5177f2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zt8m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.623640 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.623675 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:22 crc 
kubenswrapper[4806]: I1125 14:53:22.623686 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.623703 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.623717 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:22Z","lastTransitionTime":"2025-11-25T14:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.626485 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jcq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9137647d-1ca0-49be-b482-8d04428e5325\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae80eda0f08f87065d8a30cf17d80f2be06e3b4a812798747aaed9cf6be1e82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dttnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jcq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:22Z is after 2025-08-24T17:21:41Z" Nov 
25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.663210 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:53:22 crc kubenswrapper[4806]: E1125 14:53:22.663425 4806 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 14:53:22 crc kubenswrapper[4806]: E1125 14:53:22.663507 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 14:53:38.663489228 +0000 UTC m=+51.315631639 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.726387 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.726441 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.726450 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.726469 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.726481 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:22Z","lastTransitionTime":"2025-11-25T14:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.731645 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2mmdk" Nov 25 14:53:22 crc kubenswrapper[4806]: W1125 14:53:22.743214 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a29a188_9022_41a4_8f1f_4a3274ffe3f9.slice/crio-487b9c50d79d601b6adaee55b482fefd821b38fad3152e8e9b61ee1075ca516d WatchSource:0}: Error finding container 487b9c50d79d601b6adaee55b482fefd821b38fad3152e8e9b61ee1075ca516d: Status 404 returned error can't find the container with id 487b9c50d79d601b6adaee55b482fefd821b38fad3152e8e9b61ee1075ca516d Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.764055 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:53:22 crc kubenswrapper[4806]: E1125 14:53:22.764170 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:53:38.764143432 +0000 UTC m=+51.416285843 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.764206 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.764260 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.764291 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:53:22 crc kubenswrapper[4806]: E1125 14:53:22.764422 4806 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 14:53:22 crc kubenswrapper[4806]: E1125 14:53:22.764437 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: 
object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 14:53:22 crc kubenswrapper[4806]: E1125 14:53:22.764456 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 14:53:22 crc kubenswrapper[4806]: E1125 14:53:22.764456 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 14:53:22 crc kubenswrapper[4806]: E1125 14:53:22.764468 4806 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:53:22 crc kubenswrapper[4806]: E1125 14:53:22.764474 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 14:53:22 crc kubenswrapper[4806]: E1125 14:53:22.764487 4806 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:53:22 crc kubenswrapper[4806]: E1125 14:53:22.764469 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 14:53:38.764459521 +0000 UTC m=+51.416601922 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 14:53:22 crc kubenswrapper[4806]: E1125 14:53:22.764514 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 14:53:38.764506572 +0000 UTC m=+51.416648983 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:53:22 crc kubenswrapper[4806]: E1125 14:53:22.764528 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 14:53:38.764523162 +0000 UTC m=+51.416665573 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.829048 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.829096 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.829112 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.829129 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.829140 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:22Z","lastTransitionTime":"2025-11-25T14:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.931534 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.931575 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.931583 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.931596 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:22 crc kubenswrapper[4806]: I1125 14:53:22.931605 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:22Z","lastTransitionTime":"2025-11-25T14:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.033698 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.033734 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.033743 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.033756 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.033764 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:23Z","lastTransitionTime":"2025-11-25T14:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.088811 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:53:23 crc kubenswrapper[4806]: E1125 14:53:23.089254 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.088938 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:53:23 crc kubenswrapper[4806]: E1125 14:53:23.089554 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.136712 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.136767 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.136775 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.136791 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.136810 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:23Z","lastTransitionTime":"2025-11-25T14:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.146587 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-lsrxh"] Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.147067 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:53:23 crc kubenswrapper[4806]: E1125 14:53:23.147139 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.160127 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mwdqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cb67ea6e13645bb6513fcd0d90197317fd39f29f25deddf11e1110572e1986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbntn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mwdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:23Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.167158 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz9hg\" (UniqueName: \"kubernetes.io/projected/49e22ad0-2903-4ed0-94ad-40d713f99c9f-kube-api-access-cz9hg\") pod \"network-metrics-daemon-lsrxh\" (UID: \"49e22ad0-2903-4ed0-94ad-40d713f99c9f\") " pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.167217 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/49e22ad0-2903-4ed0-94ad-40d713f99c9f-metrics-certs\") pod \"network-metrics-daemon-lsrxh\" (UID: \"49e22ad0-2903-4ed0-94ad-40d713f99c9f\") " pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.170763 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39baff20-1e9a-48b1-8872-155c5ad5931d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a777eee4ec1617baa0196c39314ff92a8111ee97fecdabcff51f8d23a732aed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o:/
/657c6813fefe5a305c46e51a3be90de3dda23db56498959efe41f94d05e8f54d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kclf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:23Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.181790 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2mmdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a29a188-9022-41a4-8f1f-4a3274ffe3f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhrx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhrx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2mmdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:23Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.191303 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lsrxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e22ad0-2903-4ed0-94ad-40d713f99c9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cz9hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cz9hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lsrxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:23Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.202500 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5caad4136ada520798bb17c547f59cb7ec6e5310c1e4737bf82dc97e1b992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:23Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.215111 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08d31864859b0077dd2e586c093eb3865537fd5567ad2b55ff61ba39cd3ee56e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b60a5efe869e1d377161b70ad123fbf1bc06d0089c96eaffb7ae8129a796c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:23Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.228819 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"228a80dc-3be5-4125-9d07-c8eb262a0eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fb517d9c8fca06d95f26ed65bbc78b53f6c555870af6ebd15afe2d5177f2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zt8m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:23Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.238741 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jcq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9137647d-1ca0-49be-b482-8d04428e5325\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae80eda0f08f87065d8a30cf17d80f2be06e3b4a812798747aaed9cf6be1e82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dttnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jcq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:23Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.238799 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.238826 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.238837 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.238854 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.238865 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:23Z","lastTransitionTime":"2025-11-25T14:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.250968 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a68684-5473-4e76-bdb6-bc38a8640fed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae860c1b5905b9a0634efdd777741a3837e3e3cae53fd559afcc23ceeeabbfe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb9640bb56460bb7d1ed40effc6fa3cada4ee5369407b11a14c4a364b5b5c44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47127ed86ce2e9c91b3876f54c95e0df6039f549eaff06c28a9922f0b660aa53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560a6654dd1da05abbf52bb38ec0fd2ea0108fabdc14d4775dc3f8334cc9a1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:23Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.261648 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47d8c7e405f12dbf2de2c53f2dc4309d02278887a2666a9ec491d490c5ce3f89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:23Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.268432 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz9hg\" (UniqueName: \"kubernetes.io/projected/49e22ad0-2903-4ed0-94ad-40d713f99c9f-kube-api-access-cz9hg\") pod \"network-metrics-daemon-lsrxh\" (UID: \"49e22ad0-2903-4ed0-94ad-40d713f99c9f\") " pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.268463 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/49e22ad0-2903-4ed0-94ad-40d713f99c9f-metrics-certs\") pod \"network-metrics-daemon-lsrxh\" (UID: \"49e22ad0-2903-4ed0-94ad-40d713f99c9f\") " pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:53:23 crc kubenswrapper[4806]: E1125 14:53:23.268582 4806 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 14:53:23 crc kubenswrapper[4806]: E1125 14:53:23.268632 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49e22ad0-2903-4ed0-94ad-40d713f99c9f-metrics-certs podName:49e22ad0-2903-4ed0-94ad-40d713f99c9f nodeName:}" failed. No retries permitted until 2025-11-25 14:53:23.768618525 +0000 UTC m=+36.420760936 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/49e22ad0-2903-4ed0-94ad-40d713f99c9f-metrics-certs") pod "network-metrics-daemon-lsrxh" (UID: "49e22ad0-2903-4ed0-94ad-40d713f99c9f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.274820 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:23Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.287007 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz9hg\" (UniqueName: \"kubernetes.io/projected/49e22ad0-2903-4ed0-94ad-40d713f99c9f-kube-api-access-cz9hg\") pod \"network-metrics-daemon-lsrxh\" (UID: \"49e22ad0-2903-4ed0-94ad-40d713f99c9f\") " pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.287786 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5lhpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"57550f59-b31f-43c1-adca-565f246d4083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b926db706959d0dcd4f9d9366913105ae3b70b4cb2da018ed9e778ceb84cc5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfx48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5lhpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:23Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.299234 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:52:51.509929 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:52:51.511569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2779674049/tls.crt::/tmp/serving-cert-2779674049/tls.key\\\\\\\"\\\\nI1125 14:53:06.763374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:53:06.765539 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:53:06.765558 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:53:06.765579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:53:06.765584 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:53:06.773062 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:53:06.773084 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773089 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:53:06.773097 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:53:06.773100 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:53:06.773103 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:53:06.773329 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:53:06.776065 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:23Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.310532 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:23Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.314512 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2mmdk" event={"ID":"5a29a188-9022-41a4-8f1f-4a3274ffe3f9","Type":"ContainerStarted","Data":"487b9c50d79d601b6adaee55b482fefd821b38fad3152e8e9b61ee1075ca516d"} Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.321543 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:23Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.337102 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fff40d8-fd9f-49da-953f-89894b4ef3a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e82f349fde59423ab15775184687ea285fb55bd
ecd6aa2ad7d6ce44289511dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://425ada3e26983f58e99e9ab94e81d3a7e7701026652ece94e82e0e2119128bdd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"message\\\":\\\"y event handler 4\\\\nI1125 14:53:19.335254 6029 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 14:53:19.335281 6029 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 14:53:19.337244 6029 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 14:53:19.337636 6029 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 14:53:19.339136 6029 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1125 14:53:19.339158 6029 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1125 14:53:19.339183 6029 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1125 14:53:19.339193 6029 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1125 14:53:19.339404 6029 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1125 14:53:19.339420 6029 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1125 14:53:19.339434 6029 factory.go:656] Stopping watch factory\\\\nI1125 14:53:19.339438 6029 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1125 14:53:19.339449 6029 ovnkube.go:599] Stopped ovnkube\\\\nI1125 14\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e82f349fde59423ab15775184687ea285fb55bdecd6aa2ad7d6ce44289511dd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"message\\\":\\\"r 6 for removal\\\\nI1125 14:53:21.008919 6238 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1125 14:53:21.008796 6238 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:53:21.008993 6238 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:53:21.009024 6238 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 14:53:21.009041 6238 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1125 14:53:21.009012 6238 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 14:53:21.009194 6238 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 14:53:21.009261 6238 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:53:21.009397 6238 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:53:21.009742 6238 factory.go:656] Stopping watch factory\\\\nI1125 14:53:21.009760 6238 ovnkube.go:599] 
Stopped ovnkube\\\\nI1125 14:53:21.009783 6238 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 14:53:21.009837 6238 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\
\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69wls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:23Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.341482 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.341531 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.341544 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.341562 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.341573 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:23Z","lastTransitionTime":"2025-11-25T14:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.443899 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.443951 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.443966 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.443984 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.443996 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:23Z","lastTransitionTime":"2025-11-25T14:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.546329 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.546372 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.546383 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.546400 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.546410 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:23Z","lastTransitionTime":"2025-11-25T14:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.575802 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.575834 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.575844 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.575857 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.575865 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:23Z","lastTransitionTime":"2025-11-25T14:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:23 crc kubenswrapper[4806]: E1125 14:53:23.587805 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e0f8c346-5c5c-4b9e-86cb-75f930f1dadc\\\",\\\"systemUUID\\\":\\\"c9d50cce-a734-4456-ad77-ec687d096f9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:23Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.591119 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.591159 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.591167 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.591181 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.591189 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:23Z","lastTransitionTime":"2025-11-25T14:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:23 crc kubenswrapper[4806]: E1125 14:53:23.602605 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}], [... images and nodeInfo payload omitted: byte-identical to the 14:53:23.587805 attempt above ...]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:23Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.605936 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.605984 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
event="NodeHasNoDiskPressure" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.605992 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.606011 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.606022 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:23Z","lastTransitionTime":"2025-11-25T14:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:23 crc kubenswrapper[4806]: E1125 14:53:23.616977 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e0f8c346-5c5c-4b9e-86cb-75f930f1dadc\\\",\\\"systemUUID\\\":\\\"c9d50cce-a734-4456-ad77-ec687d096f9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:23Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.620284 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.620355 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.620367 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.620386 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.620395 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:23Z","lastTransitionTime":"2025-11-25T14:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:23 crc kubenswrapper[4806]: E1125 14:53:23.631619 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e0f8c346-5c5c-4b9e-86cb-75f930f1dadc\\\",\\\"systemUUID\\\":\\\"c9d50cce-a734-4456-ad77-ec687d096f9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:23Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.635429 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.635472 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.635481 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.635497 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.635507 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:23Z","lastTransitionTime":"2025-11-25T14:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:23 crc kubenswrapper[4806]: E1125 14:53:23.651012 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e0f8c346-5c5c-4b9e-86cb-75f930f1dadc\\\",\\\"systemUUID\\\":\\\"c9d50cce-a734-4456-ad77-ec687d096f9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:23Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:23 crc kubenswrapper[4806]: E1125 14:53:23.651135 4806 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.652526 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.652562 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.652573 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.652589 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.652603 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:23Z","lastTransitionTime":"2025-11-25T14:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.754932 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.754984 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.754998 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.755020 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.755036 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:23Z","lastTransitionTime":"2025-11-25T14:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.773911 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/49e22ad0-2903-4ed0-94ad-40d713f99c9f-metrics-certs\") pod \"network-metrics-daemon-lsrxh\" (UID: \"49e22ad0-2903-4ed0-94ad-40d713f99c9f\") " pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:53:23 crc kubenswrapper[4806]: E1125 14:53:23.774060 4806 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 14:53:23 crc kubenswrapper[4806]: E1125 14:53:23.774143 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49e22ad0-2903-4ed0-94ad-40d713f99c9f-metrics-certs podName:49e22ad0-2903-4ed0-94ad-40d713f99c9f nodeName:}" failed. No retries permitted until 2025-11-25 14:53:24.774124926 +0000 UTC m=+37.426267327 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/49e22ad0-2903-4ed0-94ad-40d713f99c9f-metrics-certs") pod "network-metrics-daemon-lsrxh" (UID: "49e22ad0-2903-4ed0-94ad-40d713f99c9f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.857267 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.857351 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.857362 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.857377 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.857386 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:23Z","lastTransitionTime":"2025-11-25T14:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.959412 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.959452 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.959461 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.959476 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:23 crc kubenswrapper[4806]: I1125 14:53:23.959485 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:23Z","lastTransitionTime":"2025-11-25T14:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.061893 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.061925 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.061935 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.061952 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.061962 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:24Z","lastTransitionTime":"2025-11-25T14:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.088205 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:53:24 crc kubenswrapper[4806]: E1125 14:53:24.088369 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.165066 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.165127 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.165140 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.165160 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.165172 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:24Z","lastTransitionTime":"2025-11-25T14:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.268453 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.268489 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.268499 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.268527 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.268539 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:24Z","lastTransitionTime":"2025-11-25T14:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.328680 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2mmdk" event={"ID":"5a29a188-9022-41a4-8f1f-4a3274ffe3f9","Type":"ContainerStarted","Data":"8df53d52334de68ebecc9283d36720b9734a8a410af99e1ae3566979e52cb6f4"} Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.328775 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2mmdk" event={"ID":"5a29a188-9022-41a4-8f1f-4a3274ffe3f9","Type":"ContainerStarted","Data":"82dc124b078217075b4e38f7b144af41d258e32283392fe2909cf227a9902012"} Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.347009 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08d31864859b0077dd2e586c093eb3865537fd5567ad2b55ff61ba39cd3ee56e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b60a5efe869e1d377161b70ad123fbf1bc06d0089c96eaffb7ae8129a796c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:24Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.362027 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"228a80dc-3be5-4125-9d07-c8eb262a0eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fb517d9c8fca06d95f26ed65bbc78b53f6c555870af6ebd15afe2d5177f2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zt8m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:24Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.371677 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.371723 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:24 crc 
kubenswrapper[4806]: I1125 14:53:24.371732 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.371747 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.371758 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:24Z","lastTransitionTime":"2025-11-25T14:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.374539 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jcq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9137647d-1ca0-49be-b482-8d04428e5325\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae80eda0f08f87065d8a30cf17d80f2be06e3b4a812798747aaed9cf6be1e82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dttnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jcq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:24Z is after 2025-08-24T17:21:41Z" Nov 
25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.386439 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lsrxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e22ad0-2903-4ed0-94ad-40d713f99c9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cz9hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cz9hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lsrxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:24Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.400092 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5caad4136ada520798bb17c547f59cb7ec6e5310c1e4737bf82dc97e1b992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:24Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.409749 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5lhpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"57550f59-b31f-43c1-adca-565f246d4083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b926db706959d0dcd4f9d9366913105ae3b70b4cb2da018ed9e778ceb84cc5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfx48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5lhpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:24Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.423391 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a68684-5473-4e76-bdb6-bc38a8640fed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae860c1b5905b9a0634efdd777741a3837e3e3cae53fd559afcc23ceeeabbfe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb9640bb56460bb7d1ed40effc6fa3cada4ee5369407b11a14c4a364b5b5c44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47127ed86ce2e9c91b3876f54c95e0df6039f549eaff06c28a9922f0b660aa53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560a6654dd1da05abbf52bb38ec0fd2ea0108fabdc14d4775dc3f8334cc9a1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:24Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.439144 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47d8c7e405f12dbf2de2c53f2dc4309d02278887a2666a9ec491d490c5ce3f89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-25T14:53:24Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.450839 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:24Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.468274 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fff40d8-fd9f-49da-953f-89894b4ef3a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e82f349fde59423ab15775184687ea285fb55bdecd6aa2ad7d6ce44289511dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://425ada3e26983f58e99e9ab94e81d3a7e7701026652ece94e82e0e2119128bdd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"message\\\":\\\"y event handler 4\\\\nI1125 14:53:19.335254 6029 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 14:53:19.335281 6029 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 14:53:19.337244 6029 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 14:53:19.337636 6029 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 14:53:19.339136 6029 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1125 14:53:19.339158 6029 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1125 14:53:19.339183 6029 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1125 14:53:19.339193 6029 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1125 14:53:19.339404 6029 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1125 14:53:19.339420 6029 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1125 14:53:19.339434 6029 factory.go:656] Stopping watch factory\\\\nI1125 14:53:19.339438 6029 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1125 14:53:19.339449 6029 ovnkube.go:599] Stopped ovnkube\\\\nI1125 14\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e82f349fde59423ab15775184687ea285fb55bdecd6aa2ad7d6ce44289511dd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"message\\\":\\\"r 6 for removal\\\\nI1125 14:53:21.008919 6238 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1125 14:53:21.008796 6238 reflector.go:311] Stopping reflector *v1.Namespace 
(0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:53:21.008993 6238 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:53:21.009024 6238 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 14:53:21.009041 6238 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1125 14:53:21.009012 6238 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 14:53:21.009194 6238 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 14:53:21.009261 6238 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:53:21.009397 6238 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:53:21.009742 6238 factory.go:656] Stopping watch factory\\\\nI1125 14:53:21.009760 6238 ovnkube.go:599] Stopped ovnkube\\\\nI1125 14:53:21.009783 6238 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 14:53:21.009837 6238 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69wls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:24Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.474552 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.474604 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.474616 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.474636 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.474652 4806 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:24Z","lastTransitionTime":"2025-11-25T14:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.483765 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:52:51.509929 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:52:51.511569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2779674049/tls.crt::/tmp/serving-cert-2779674049/tls.key\\\\\\\"\\\\nI1125 14:53:06.763374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:53:06.765539 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:53:06.765558 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:53:06.765579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:53:06.765584 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:53:06.773062 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:53:06.773084 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773089 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:53:06.773097 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:53:06.773100 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:53:06.773103 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:53:06.773329 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:53:06.776065 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:24Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.495513 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:24Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.507092 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:24Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.519980 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2mmdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a29a188-9022-41a4-8f1f-4a3274ffe3f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dc124b078217075b4e38f7b144af41d258e32283392fe2909cf227a9902012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhrx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df53d52334de68ebecc9283d36720b9734a8a410af99e1ae3566979e52cb6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhrx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2mmdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:24Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.535232 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mwdqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cb67ea6e13645bb6513fcd0d90197317fd39f29f25deddf11e1110572e1986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\
\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbntn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mwdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:24Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.548762 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39baff20-1e9a-48b1-8872-155c5ad5931d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a777eee4ec1617baa0196c39314ff92a8111ee97fecdabcff51f8d23a732aed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://657c6813fefe5a305c46e51a3be90de3dda23db56498959efe41f94d05e8f54d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kclf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:24Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.576709 4806 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.576765 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.576782 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.576803 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.576820 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:24Z","lastTransitionTime":"2025-11-25T14:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.679285 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.679383 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.679403 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.679429 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.679447 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:24Z","lastTransitionTime":"2025-11-25T14:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.782300 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/49e22ad0-2903-4ed0-94ad-40d713f99c9f-metrics-certs\") pod \"network-metrics-daemon-lsrxh\" (UID: \"49e22ad0-2903-4ed0-94ad-40d713f99c9f\") " pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:53:24 crc kubenswrapper[4806]: E1125 14:53:24.782511 4806 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 14:53:24 crc kubenswrapper[4806]: E1125 14:53:24.782587 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49e22ad0-2903-4ed0-94ad-40d713f99c9f-metrics-certs podName:49e22ad0-2903-4ed0-94ad-40d713f99c9f nodeName:}" failed. No retries permitted until 2025-11-25 14:53:26.782564567 +0000 UTC m=+39.434707038 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/49e22ad0-2903-4ed0-94ad-40d713f99c9f-metrics-certs") pod "network-metrics-daemon-lsrxh" (UID: "49e22ad0-2903-4ed0-94ad-40d713f99c9f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.782586 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.782617 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.782631 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.782648 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.782660 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:24Z","lastTransitionTime":"2025-11-25T14:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.885347 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.885392 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.885407 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.885423 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.885432 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:24Z","lastTransitionTime":"2025-11-25T14:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.986959 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.987010 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.987021 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.987036 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:24 crc kubenswrapper[4806]: I1125 14:53:24.987075 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:24Z","lastTransitionTime":"2025-11-25T14:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:25 crc kubenswrapper[4806]: I1125 14:53:25.088208 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:53:25 crc kubenswrapper[4806]: I1125 14:53:25.088311 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:53:25 crc kubenswrapper[4806]: I1125 14:53:25.088350 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:53:25 crc kubenswrapper[4806]: E1125 14:53:25.088425 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:53:25 crc kubenswrapper[4806]: E1125 14:53:25.088514 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:53:25 crc kubenswrapper[4806]: E1125 14:53:25.088624 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
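By this point the kubelet is looping through the same five node events (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, then the setters.go:603 condition) roughly every 100 ms. The condition={...} trailer is plain, unescaped JSON, so it can be tallied directly; a small sketch, assuming the journal has been dumped to a local file (kubelet.log is a hypothetical path):

    import json
    import re
    from collections import Counter

    reasons = Counter()
    with open("kubelet.log") as f:  # hypothetical dump of this journal
        for line in f:
            m = re.search(r'"Node became not ready" node="crc" condition=({.*})', line)
            if m:
                cond = json.loads(m.group(1))
                reasons[(cond["reason"], cond["message"][:60])] += 1

    for (reason, message), count in reasons.most_common():
        print(f"{count:4d}x {reason}: {message}...")

For the window shown here every occurrence is the same KubeletNotReady / NetworkPluginNotReady pair, which points at the missing CNI config in /etc/kubernetes/cni/net.d/ rather than at memory, disk, or PID pressure.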
pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f" Nov 25 14:53:25 crc kubenswrapper[4806]: I1125 14:53:25.089422 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:25 crc kubenswrapper[4806]: I1125 14:53:25.089448 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:25 crc kubenswrapper[4806]: I1125 14:53:25.089457 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:25 crc kubenswrapper[4806]: I1125 14:53:25.089469 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:25 crc kubenswrapper[4806]: I1125 14:53:25.089479 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:25Z","lastTransitionTime":"2025-11-25T14:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:25 crc kubenswrapper[4806]: I1125 14:53:25.192140 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:25 crc kubenswrapper[4806]: I1125 14:53:25.192177 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:25 crc kubenswrapper[4806]: I1125 14:53:25.192188 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:25 crc kubenswrapper[4806]: I1125 14:53:25.192202 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:25 crc kubenswrapper[4806]: I1125 14:53:25.192211 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:25Z","lastTransitionTime":"2025-11-25T14:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:25 crc kubenswrapper[4806]: I1125 14:53:25.294694 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:25 crc kubenswrapper[4806]: I1125 14:53:25.294736 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:25 crc kubenswrapper[4806]: I1125 14:53:25.294745 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:25 crc kubenswrapper[4806]: I1125 14:53:25.294761 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:25 crc kubenswrapper[4806]: I1125 14:53:25.294771 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:25Z","lastTransitionTime":"2025-11-25T14:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:25 crc kubenswrapper[4806]: I1125 14:53:25.397177 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:25 crc kubenswrapper[4806]: I1125 14:53:25.397216 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:25 crc kubenswrapper[4806]: I1125 14:53:25.397224 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:25 crc kubenswrapper[4806]: I1125 14:53:25.397237 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:25 crc kubenswrapper[4806]: I1125 14:53:25.397245 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:25Z","lastTransitionTime":"2025-11-25T14:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:25 crc kubenswrapper[4806]: I1125 14:53:25.499449 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:25 crc kubenswrapper[4806]: I1125 14:53:25.499494 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:25 crc kubenswrapper[4806]: I1125 14:53:25.499503 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:25 crc kubenswrapper[4806]: I1125 14:53:25.499519 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:25 crc kubenswrapper[4806]: I1125 14:53:25.499529 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:25Z","lastTransitionTime":"2025-11-25T14:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:25 crc kubenswrapper[4806]: I1125 14:53:25.601792 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:25 crc kubenswrapper[4806]: I1125 14:53:25.601836 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:25 crc kubenswrapper[4806]: I1125 14:53:25.601844 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:25 crc kubenswrapper[4806]: I1125 14:53:25.601859 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:25 crc kubenswrapper[4806]: I1125 14:53:25.601870 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:25Z","lastTransitionTime":"2025-11-25T14:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:25 crc kubenswrapper[4806]: I1125 14:53:25.705259 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:25 crc kubenswrapper[4806]: I1125 14:53:25.705335 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:25 crc kubenswrapper[4806]: I1125 14:53:25.705351 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:25 crc kubenswrapper[4806]: I1125 14:53:25.705375 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:25 crc kubenswrapper[4806]: I1125 14:53:25.705390 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:25Z","lastTransitionTime":"2025-11-25T14:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:25 crc kubenswrapper[4806]: I1125 14:53:25.807682 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:25 crc kubenswrapper[4806]: I1125 14:53:25.807735 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:25 crc kubenswrapper[4806]: I1125 14:53:25.807746 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:25 crc kubenswrapper[4806]: I1125 14:53:25.807766 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:25 crc kubenswrapper[4806]: I1125 14:53:25.807777 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:25Z","lastTransitionTime":"2025-11-25T14:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:25 crc kubenswrapper[4806]: I1125 14:53:25.910017 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:25 crc kubenswrapper[4806]: I1125 14:53:25.910062 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:25 crc kubenswrapper[4806]: I1125 14:53:25.910070 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:25 crc kubenswrapper[4806]: I1125 14:53:25.910086 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:25 crc kubenswrapper[4806]: I1125 14:53:25.910095 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:25Z","lastTransitionTime":"2025-11-25T14:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.011791 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.011836 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.011845 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.011860 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.011871 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:26Z","lastTransitionTime":"2025-11-25T14:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.088645 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:53:26 crc kubenswrapper[4806]: E1125 14:53:26.088877 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.114174 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.114250 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.114276 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.114307 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.114357 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:26Z","lastTransitionTime":"2025-11-25T14:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.216600 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.216647 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.216655 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.216672 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.216681 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:26Z","lastTransitionTime":"2025-11-25T14:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.319281 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.319351 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.319363 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.319380 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.319393 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:26Z","lastTransitionTime":"2025-11-25T14:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.421773 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.421819 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.421831 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.421849 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.421862 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:26Z","lastTransitionTime":"2025-11-25T14:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.523745 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.523793 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.523813 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.523830 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.523843 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:26Z","lastTransitionTime":"2025-11-25T14:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.625523 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.625561 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.625571 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.625584 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.625594 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:26Z","lastTransitionTime":"2025-11-25T14:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.727305 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.727373 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.727389 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.727405 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.727417 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:26Z","lastTransitionTime":"2025-11-25T14:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.799582 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/49e22ad0-2903-4ed0-94ad-40d713f99c9f-metrics-certs\") pod \"network-metrics-daemon-lsrxh\" (UID: \"49e22ad0-2903-4ed0-94ad-40d713f99c9f\") " pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:53:26 crc kubenswrapper[4806]: E1125 14:53:26.799825 4806 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 14:53:26 crc kubenswrapper[4806]: E1125 14:53:26.800101 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49e22ad0-2903-4ed0-94ad-40d713f99c9f-metrics-certs podName:49e22ad0-2903-4ed0-94ad-40d713f99c9f nodeName:}" failed. No retries permitted until 2025-11-25 14:53:30.800082697 +0000 UTC m=+43.452225108 (durationBeforeRetry 4s). 
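The nestedpendingoperations.go:348 entries show the kubelet's per-volume retry backoff doubling for the stuck metrics-certs mount: 2 s at m=+39.43 earlier, 4 s here at m=+43.45, and it should keep doubling up to the kubelet's internal cap until the openshift-multus/metrics-daemon-secret object is registered. A sketch, against the same hypothetical journal dump, that extracts each deferral so the growth is visible:

    import re

    # (volume, retry-at, backoff) triples from nestedpendingoperations entries
    pat = re.compile(
        r'Operation for "\{volumeName:(\S+) .*?'
        r'No retries permitted until (\S+ \S+) .*?'
        r'\(durationBeforeRetry (\S+)\)'
    )
    with open("kubelet.log") as f:  # hypothetical dump of this journal
        for line in f:
            m = pat.search(line)
            if m:
                volume, retry_at, backoff = m.groups()
                print(f"{retry_at}  {backoff:>4}  {volume}")

A mount that has been stuck for a while therefore shows up as a steadily growing durationBeforeRetry against the same volumeName.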
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/49e22ad0-2903-4ed0-94ad-40d713f99c9f-metrics-certs") pod "network-metrics-daemon-lsrxh" (UID: "49e22ad0-2903-4ed0-94ad-40d713f99c9f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.829951 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.830036 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.830054 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.830082 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.830102 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:26Z","lastTransitionTime":"2025-11-25T14:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.933508 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.933573 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.933587 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.933610 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:26 crc kubenswrapper[4806]: I1125 14:53:26.933626 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:26Z","lastTransitionTime":"2025-11-25T14:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.036101 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.036377 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.036461 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.036537 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.036626 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:27Z","lastTransitionTime":"2025-11-25T14:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.088606 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.088606 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:53:27 crc kubenswrapper[4806]: E1125 14:53:27.088765 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f" Nov 25 14:53:27 crc kubenswrapper[4806]: E1125 14:53:27.088823 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.088609 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:53:27 crc kubenswrapper[4806]: E1125 14:53:27.088912 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
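Three pods keep cycling through the same util.go:30 / pod_workers.go:1301 pair: no sandbox exists yet, and sandbox creation is skipped because pod networking is unavailable. A sketch (same hypothetical journal file) listing the distinct pods blocked this way, with the UID taken from the matching pod_workers entry:

    import re

    blocked = {}
    with open("kubelet.log") as f:  # hypothetical dump of this journal
        for line in f:
            m = re.search(
                r'"Error syncing pod, skipping" err="network is not ready.*?" '
                r'pod="([^"]+)" podUID="([^"]+)"',
                line,
            )
            if m:
                blocked[m.group(1)] = m.group(2)

    for pod, uid in sorted(blocked.items()):
        print(f"{pod}  uid={uid}")

Over this window that yields networking-console-plugin-85b44fc459-gdk6g, network-check-target-xd92c, network-metrics-daemon-lsrxh, and network-check-source-55646444c4-trplf: exactly the pods that need a pod network, whereas the hostNetwork pods (multus, the machine-config daemon, the ovnkube control plane) were already reported Running with hostIP equal to podIP, 192.168.126.11, in the patches above.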
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.139072 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.139114 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.139122 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.139136 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.139145 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:27Z","lastTransitionTime":"2025-11-25T14:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.240711 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.240757 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.240768 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.240783 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.240794 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:27Z","lastTransitionTime":"2025-11-25T14:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.342825 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.342864 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.342875 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.342893 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.342905 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:27Z","lastTransitionTime":"2025-11-25T14:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.444958 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.445033 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.445049 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.445066 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.445076 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:27Z","lastTransitionTime":"2025-11-25T14:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.547232 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.547275 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.547286 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.547302 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.547333 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:27Z","lastTransitionTime":"2025-11-25T14:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.649057 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.649119 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.649133 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.649150 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.649161 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:27Z","lastTransitionTime":"2025-11-25T14:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.750998 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.751047 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.751061 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.751078 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.751092 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:27Z","lastTransitionTime":"2025-11-25T14:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.853193 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.853258 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.853282 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.853303 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.853345 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:27Z","lastTransitionTime":"2025-11-25T14:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.954999 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.955050 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.955065 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.955088 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:27 crc kubenswrapper[4806]: I1125 14:53:27.955106 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:27Z","lastTransitionTime":"2025-11-25T14:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.057216 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.057263 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.057273 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.057289 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.057299 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:28Z","lastTransitionTime":"2025-11-25T14:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.088623 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:53:28 crc kubenswrapper[4806]: E1125 14:53:28.088756 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.104544 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2mmdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a29a188-9022-41a4-8f1f-4a3274ffe3f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dc124b078217075b4e38f7b144af41d258e32283392fe2909cf227a9902012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhrx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df53d52334de68ebecc9283d36720b9734a8a410af99e1ae3566979e52cb6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhrx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:22Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2mmdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.117517 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mwdqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cb67ea6e13645bb6513fcd0d90197317fd39f29f25deddf11e1110572e1986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-
kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbntn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mwdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.128210 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39baff20-1e9a-48b1-8872-155c5ad5931d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a777eee4ec1617baa0196c39314ff92a8111ee97fecdabcff51f8d23a732aed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://657c6813fefe5a305c46e51a3be90de3dda23db56498959efe41f94d05e8f54d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14
:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kclf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.139878 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08d31864859b0077dd2e586c093eb3865537fd5567ad2b55ff61ba39cd3ee56e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b60a5efe869e1d377161b70ad123fbf1bc06d0089c96eaffb7ae8129a796c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\
\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.153133 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"228a80dc-3be5-4125-9d07-c8eb262a0eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fb517d9c8fca06d95f26ed65bbc78b53f6c555870af6ebd15afe2d5177f2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-25T14:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\
\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"19
2.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zt8m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.159741 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.159781 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.159790 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.159803 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.159813 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:28Z","lastTransitionTime":"2025-11-25T14:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.162119 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jcq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9137647d-1ca0-49be-b482-8d04428e5325\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae80eda0f08f87065d8a30cf17d80f2be06e3b4a812798747aaed9cf6be1e82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dttnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jcq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.170444 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lsrxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e22ad0-2903-4ed0-94ad-40d713f99c9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cz9hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cz9hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lsrxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.182931 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5caad4136ada520798bb17c547f59cb7ec6e5310c1e4737bf82dc97e1b992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.192181 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5lhpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"57550f59-b31f-43c1-adca-565f246d4083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b926db706959d0dcd4f9d9366913105ae3b70b4cb2da018ed9e778ceb84cc5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfx48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5lhpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.206055 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a68684-5473-4e76-bdb6-bc38a8640fed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae860c1b5905b9a0634efdd777741a3837e3e3cae53fd559afcc23ceeeabbfe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb9640bb56460bb7d1ed40effc6fa3cada4ee5369407b11a14c4a364b5b5c44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47127ed86ce2e9c91b3876f54c95e0df6039f549eaff06c28a9922f0b660aa53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560a6654dd1da05abbf52bb38ec0fd2ea0108fabdc14d4775dc3f8334cc9a1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.217303 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47d8c7e405f12dbf2de2c53f2dc4309d02278887a2666a9ec491d490c5ce3f89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-25T14:53:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.228133 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.247836 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fff40d8-fd9f-49da-953f-89894b4ef3a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e82f349fde59423ab15775184687ea285fb55bdecd6aa2ad7d6ce44289511dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://425ada3e26983f58e99e9ab94e81d3a7e7701026652ece94e82e0e2119128bdd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"message\\\":\\\"y event handler 4\\\\nI1125 14:53:19.335254 6029 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 14:53:19.335281 6029 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 14:53:19.337244 6029 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 14:53:19.337636 6029 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 14:53:19.339136 6029 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1125 14:53:19.339158 6029 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1125 14:53:19.339183 6029 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1125 14:53:19.339193 6029 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1125 14:53:19.339404 6029 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1125 14:53:19.339420 6029 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1125 14:53:19.339434 6029 factory.go:656] Stopping watch factory\\\\nI1125 14:53:19.339438 6029 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1125 14:53:19.339449 6029 ovnkube.go:599] Stopped ovnkube\\\\nI1125 14\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e82f349fde59423ab15775184687ea285fb55bdecd6aa2ad7d6ce44289511dd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"message\\\":\\\"r 6 for removal\\\\nI1125 14:53:21.008919 6238 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1125 14:53:21.008796 6238 reflector.go:311] Stopping reflector *v1.Namespace 
(0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:53:21.008993 6238 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:53:21.009024 6238 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 14:53:21.009041 6238 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1125 14:53:21.009012 6238 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 14:53:21.009194 6238 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 14:53:21.009261 6238 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:53:21.009397 6238 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:53:21.009742 6238 factory.go:656] Stopping watch factory\\\\nI1125 14:53:21.009760 6238 ovnkube.go:599] Stopped ovnkube\\\\nI1125 14:53:21.009783 6238 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 14:53:21.009837 6238 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69wls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.260210 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:52:51.509929 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:52:51.511569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2779674049/tls.crt::/tmp/serving-cert-2779674049/tls.key\\\\\\\"\\\\nI1125 14:53:06.763374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:53:06.765539 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:53:06.765558 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:53:06.765579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:53:06.765584 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:53:06.773062 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:53:06.773084 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773089 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:53:06.773097 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:53:06.773100 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:53:06.773103 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:53:06.773329 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:53:06.776065 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.261693 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.261718 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.261727 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.261742 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.261751 4806 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:28Z","lastTransitionTime":"2025-11-25T14:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.272082 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.283429 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.363825 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.363867 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.363880 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.363896 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.363906 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:28Z","lastTransitionTime":"2025-11-25T14:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.466677 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.466733 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.466743 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.466761 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.466773 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:28Z","lastTransitionTime":"2025-11-25T14:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.570026 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.570105 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.570127 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.570152 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.570169 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:28Z","lastTransitionTime":"2025-11-25T14:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.673041 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.673115 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.673126 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.673148 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.673167 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:28Z","lastTransitionTime":"2025-11-25T14:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.776175 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.776230 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.776243 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.776265 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.776277 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:28Z","lastTransitionTime":"2025-11-25T14:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.878931 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.878972 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.878985 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.879001 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.879010 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:28Z","lastTransitionTime":"2025-11-25T14:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.984791 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.984826 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.984835 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.984847 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:28 crc kubenswrapper[4806]: I1125 14:53:28.984856 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:28Z","lastTransitionTime":"2025-11-25T14:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:29 crc kubenswrapper[4806]: I1125 14:53:29.087647 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:29 crc kubenswrapper[4806]: I1125 14:53:29.087692 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:29 crc kubenswrapper[4806]: I1125 14:53:29.087701 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:29 crc kubenswrapper[4806]: I1125 14:53:29.087717 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:29 crc kubenswrapper[4806]: I1125 14:53:29.087727 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:29Z","lastTransitionTime":"2025-11-25T14:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:29 crc kubenswrapper[4806]: I1125 14:53:29.088966 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:53:29 crc kubenswrapper[4806]: I1125 14:53:29.089036 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:53:29 crc kubenswrapper[4806]: I1125 14:53:29.089064 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:53:29 crc kubenswrapper[4806]: E1125 14:53:29.089218 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:53:29 crc kubenswrapper[4806]: E1125 14:53:29.089339 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f" Nov 25 14:53:29 crc kubenswrapper[4806]: E1125 14:53:29.089455 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:53:29 crc kubenswrapper[4806]: I1125 14:53:29.190968 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:29 crc kubenswrapper[4806]: I1125 14:53:29.191002 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:29 crc kubenswrapper[4806]: I1125 14:53:29.191010 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:29 crc kubenswrapper[4806]: I1125 14:53:29.191026 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:29 crc kubenswrapper[4806]: I1125 14:53:29.191036 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:29Z","lastTransitionTime":"2025-11-25T14:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:29 crc kubenswrapper[4806]: I1125 14:53:29.294045 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:29 crc kubenswrapper[4806]: I1125 14:53:29.294096 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:29 crc kubenswrapper[4806]: I1125 14:53:29.294108 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:29 crc kubenswrapper[4806]: I1125 14:53:29.294127 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:29 crc kubenswrapper[4806]: I1125 14:53:29.294139 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:29Z","lastTransitionTime":"2025-11-25T14:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:29 crc kubenswrapper[4806]: I1125 14:53:29.396892 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:29 crc kubenswrapper[4806]: I1125 14:53:29.396942 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:29 crc kubenswrapper[4806]: I1125 14:53:29.396954 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:29 crc kubenswrapper[4806]: I1125 14:53:29.396970 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:29 crc kubenswrapper[4806]: I1125 14:53:29.396981 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:29Z","lastTransitionTime":"2025-11-25T14:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:29 crc kubenswrapper[4806]: I1125 14:53:29.500654 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:29 crc kubenswrapper[4806]: I1125 14:53:29.500708 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:29 crc kubenswrapper[4806]: I1125 14:53:29.500722 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:29 crc kubenswrapper[4806]: I1125 14:53:29.500744 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:29 crc kubenswrapper[4806]: I1125 14:53:29.500760 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:29Z","lastTransitionTime":"2025-11-25T14:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:29 crc kubenswrapper[4806]: I1125 14:53:29.603660 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:29 crc kubenswrapper[4806]: I1125 14:53:29.603699 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:29 crc kubenswrapper[4806]: I1125 14:53:29.603708 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:29 crc kubenswrapper[4806]: I1125 14:53:29.603723 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:29 crc kubenswrapper[4806]: I1125 14:53:29.603735 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:29Z","lastTransitionTime":"2025-11-25T14:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:29 crc kubenswrapper[4806]: I1125 14:53:29.706870 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:29 crc kubenswrapper[4806]: I1125 14:53:29.706906 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:29 crc kubenswrapper[4806]: I1125 14:53:29.706915 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:29 crc kubenswrapper[4806]: I1125 14:53:29.706929 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:29 crc kubenswrapper[4806]: I1125 14:53:29.706938 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:29Z","lastTransitionTime":"2025-11-25T14:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:29 crc kubenswrapper[4806]: I1125 14:53:29.809242 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:29 crc kubenswrapper[4806]: I1125 14:53:29.809297 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:29 crc kubenswrapper[4806]: I1125 14:53:29.809331 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:29 crc kubenswrapper[4806]: I1125 14:53:29.809350 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:29 crc kubenswrapper[4806]: I1125 14:53:29.809365 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:29Z","lastTransitionTime":"2025-11-25T14:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:29 crc kubenswrapper[4806]: I1125 14:53:29.911506 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:29 crc kubenswrapper[4806]: I1125 14:53:29.911560 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:29 crc kubenswrapper[4806]: I1125 14:53:29.911571 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:29 crc kubenswrapper[4806]: I1125 14:53:29.911592 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:29 crc kubenswrapper[4806]: I1125 14:53:29.911610 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:29Z","lastTransitionTime":"2025-11-25T14:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.014685 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.014729 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.014740 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.014758 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.014771 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:30Z","lastTransitionTime":"2025-11-25T14:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.089392 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:53:30 crc kubenswrapper[4806]: E1125 14:53:30.089596 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.118161 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.118209 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.118221 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.118239 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.118301 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:30Z","lastTransitionTime":"2025-11-25T14:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.221116 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.221170 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.221182 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.221205 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.221218 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:30Z","lastTransitionTime":"2025-11-25T14:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.323593 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.323661 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.323672 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.323694 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.323713 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:30Z","lastTransitionTime":"2025-11-25T14:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.426336 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.426594 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.426712 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.426791 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.426871 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:30Z","lastTransitionTime":"2025-11-25T14:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.529193 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.529245 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.529257 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.529275 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.529288 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:30Z","lastTransitionTime":"2025-11-25T14:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.632373 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.632427 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.632439 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.632464 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.632479 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:30Z","lastTransitionTime":"2025-11-25T14:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.736055 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.736116 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.736127 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.736149 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.736169 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:30Z","lastTransitionTime":"2025-11-25T14:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.838514 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/49e22ad0-2903-4ed0-94ad-40d713f99c9f-metrics-certs\") pod \"network-metrics-daemon-lsrxh\" (UID: \"49e22ad0-2903-4ed0-94ad-40d713f99c9f\") " pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:53:30 crc kubenswrapper[4806]: E1125 14:53:30.838712 4806 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 14:53:30 crc kubenswrapper[4806]: E1125 14:53:30.838796 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49e22ad0-2903-4ed0-94ad-40d713f99c9f-metrics-certs podName:49e22ad0-2903-4ed0-94ad-40d713f99c9f nodeName:}" failed. No retries permitted until 2025-11-25 14:53:38.83877832 +0000 UTC m=+51.490920731 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/49e22ad0-2903-4ed0-94ad-40d713f99c9f-metrics-certs") pod "network-metrics-daemon-lsrxh" (UID: "49e22ad0-2903-4ed0-94ad-40d713f99c9f") : object "openshift-multus"/"metrics-daemon-secret" not registered
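
The MountVolume failure above is distinct from the CNI problem: "object \"openshift-multus\"/\"metrics-daemon-secret\" not registered" means the kubelet's object manager is not (yet) watching that secret, so volume setup cannot be satisfied locally, and nestedpendingoperations re-queues the operation with backoff (no retry before 14:53:38, durationBeforeRetry 8s). A sketch for checking the server side of that question follows, assuming client-go and a node kubeconfig; the /var/lib/kubelet/kubeconfig path is an assumption for illustration:

```go
// secretcheck.go: a minimal sketch (not the kubelet's volume manager)
// that asks the API server directly for the secret the failing mount
// needs. If this succeeds while the kubelet still logs "not registered",
// the gap is in the kubelet's watch/registration state, not the object.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location on the node; adjust for your setup.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Namespace and name taken verbatim from the log entry above.
	s, err := client.CoreV1().Secrets("openshift-multus").
		Get(context.TODO(), "metrics-daemon-secret", metav1.GetOptions{})
	if err != nil {
		fmt.Printf("secret lookup failed: %v\n", err)
		return
	}
	fmt.Printf("secret exists with %d key(s)\n", len(s.Data))
}
```
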
Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.839259 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.839286 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.839296 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.839329 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.839338 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:30Z","lastTransitionTime":"2025-11-25T14:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.941035 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.941093 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.941102 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.941121 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:30 crc kubenswrapper[4806]: I1125 14:53:30.941130 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:30Z","lastTransitionTime":"2025-11-25T14:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.043514 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.043577 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.043589 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.043609 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.043622 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:31Z","lastTransitionTime":"2025-11-25T14:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.089053 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.089053 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:53:31 crc kubenswrapper[4806]: E1125 14:53:31.089208 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f" Nov 25 14:53:31 crc kubenswrapper[4806]: E1125 14:53:31.089249 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.089060 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:53:31 crc kubenswrapper[4806]: E1125 14:53:31.089391 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.145682 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.145909 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.146038 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.146146 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.146248 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:31Z","lastTransitionTime":"2025-11-25T14:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.249092 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.249145 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.249158 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.249177 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.249190 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:31Z","lastTransitionTime":"2025-11-25T14:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.350829 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.350872 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.350884 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.350900 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.350912 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:31Z","lastTransitionTime":"2025-11-25T14:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.453219 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.453263 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.453273 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.453289 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.453302 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:31Z","lastTransitionTime":"2025-11-25T14:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.555742 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.556015 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.556025 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.556040 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.556050 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:31Z","lastTransitionTime":"2025-11-25T14:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.657828 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.657873 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.657889 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.657905 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.657915 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:31Z","lastTransitionTime":"2025-11-25T14:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.761420 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.761470 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.761482 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.761499 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.761509 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:31Z","lastTransitionTime":"2025-11-25T14:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.864463 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.864505 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.864515 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.864530 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.864540 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:31Z","lastTransitionTime":"2025-11-25T14:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.967130 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.967207 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.967224 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.967249 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:31 crc kubenswrapper[4806]: I1125 14:53:31.967291 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:31Z","lastTransitionTime":"2025-11-25T14:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.069343 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.069408 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.069420 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.069443 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.069454 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:32Z","lastTransitionTime":"2025-11-25T14:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.088964 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:53:32 crc kubenswrapper[4806]: E1125 14:53:32.089166 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.172154 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.172207 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.172218 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.172235 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.172246 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:32Z","lastTransitionTime":"2025-11-25T14:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.275036 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.275095 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.275106 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.275123 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.275138 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:32Z","lastTransitionTime":"2025-11-25T14:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.377601 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.377653 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.377667 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.377686 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.377699 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:32Z","lastTransitionTime":"2025-11-25T14:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.480386 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.480756 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.480892 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.480986 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.481076 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:32Z","lastTransitionTime":"2025-11-25T14:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.583827 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.583876 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.583888 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.583905 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.583916 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:32Z","lastTransitionTime":"2025-11-25T14:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.685996 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.686031 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.686039 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.686055 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.686063 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:32Z","lastTransitionTime":"2025-11-25T14:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.788500 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.788541 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.788552 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.788570 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.788580 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:32Z","lastTransitionTime":"2025-11-25T14:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.891122 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.891177 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.891189 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.891210 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.891222 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:32Z","lastTransitionTime":"2025-11-25T14:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.993871 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.993933 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.993944 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.993963 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:32 crc kubenswrapper[4806]: I1125 14:53:32.993974 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:32Z","lastTransitionTime":"2025-11-25T14:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.089067 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.089103 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.089064 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:53:33 crc kubenswrapper[4806]: E1125 14:53:33.089213 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:53:33 crc kubenswrapper[4806]: E1125 14:53:33.089266 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:53:33 crc kubenswrapper[4806]: E1125 14:53:33.089363 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.096140 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.096184 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.096195 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.096213 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.096222 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:33Z","lastTransitionTime":"2025-11-25T14:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.197800 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.197849 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.197862 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.197881 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.197893 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:33Z","lastTransitionTime":"2025-11-25T14:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.300133 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.300192 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.300205 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.300222 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.300234 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:33Z","lastTransitionTime":"2025-11-25T14:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.402265 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.402371 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.402388 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.402413 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.402429 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:33Z","lastTransitionTime":"2025-11-25T14:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.504519 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.504567 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.504576 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.504591 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.504600 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:33Z","lastTransitionTime":"2025-11-25T14:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.606930 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.606978 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.606987 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.607000 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.607009 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:33Z","lastTransitionTime":"2025-11-25T14:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.709713 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.709762 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.709774 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.709794 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.709805 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:33Z","lastTransitionTime":"2025-11-25T14:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.812083 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.812128 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.812138 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.812157 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.812167 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:33Z","lastTransitionTime":"2025-11-25T14:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.911646 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.911685 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.911694 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.911708 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.911717 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:33Z","lastTransitionTime":"2025-11-25T14:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:33 crc kubenswrapper[4806]: E1125 14:53:33.923596 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e0f8c346-5c5c-4b9e-86cb-75f930f1dadc\\\",\\\"systemUUID\\\":\\\"c9d50cce-a734-4456-ad77-ec687d096f9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:33Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.926983 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.927014 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.927031 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.927079 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.927089 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:33Z","lastTransitionTime":"2025-11-25T14:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:33 crc kubenswrapper[4806]: E1125 14:53:33.938362 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e0f8c346-5c5c-4b9e-86cb-75f930f1dadc\\\",\\\"systemUUID\\\":\\\"c9d50cce-a734-4456-ad77-ec687d096f9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:33Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.941596 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.941735 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.941828 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.941926 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.942010 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:33Z","lastTransitionTime":"2025-11-25T14:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:33 crc kubenswrapper[4806]: E1125 14:53:33.955772 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e0f8c346-5c5c-4b9e-86cb-75f930f1dadc\\\",\\\"systemUUID\\\":\\\"c9d50cce-a734-4456-ad77-ec687d096f9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:33Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.959207 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.959344 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.959439 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.959539 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.959632 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:33Z","lastTransitionTime":"2025-11-25T14:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:33 crc kubenswrapper[4806]: E1125 14:53:33.971696 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e0f8c346-5c5c-4b9e-86cb-75f930f1dadc\\\",\\\"systemUUID\\\":\\\"c9d50cce-a734-4456-ad77-ec687d096f9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:33Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.975345 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.975378 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.975390 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.975404 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.975414 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:33Z","lastTransitionTime":"2025-11-25T14:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:33 crc kubenswrapper[4806]: E1125 14:53:33.987589 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e0f8c346-5c5c-4b9e-86cb-75f930f1dadc\\\",\\\"systemUUID\\\":\\\"c9d50cce-a734-4456-ad77-ec687d096f9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:33Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:33 crc kubenswrapper[4806]: E1125 14:53:33.987871 4806 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.989224 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.989277 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.989291 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.989336 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:33 crc kubenswrapper[4806]: I1125 14:53:33.989347 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:33Z","lastTransitionTime":"2025-11-25T14:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:34 crc kubenswrapper[4806]: I1125 14:53:34.089238 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:53:34 crc kubenswrapper[4806]: E1125 14:53:34.089401 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:53:34 crc kubenswrapper[4806]: I1125 14:53:34.090917 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:34 crc kubenswrapper[4806]: I1125 14:53:34.090970 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:34 crc kubenswrapper[4806]: I1125 14:53:34.090981 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:34 crc kubenswrapper[4806]: I1125 14:53:34.090995 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:34 crc kubenswrapper[4806]: I1125 14:53:34.091004 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:34Z","lastTransitionTime":"2025-11-25T14:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:34 crc kubenswrapper[4806]: I1125 14:53:34.193546 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:34 crc kubenswrapper[4806]: I1125 14:53:34.193847 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:34 crc kubenswrapper[4806]: I1125 14:53:34.193935 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:34 crc kubenswrapper[4806]: I1125 14:53:34.194015 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:34 crc kubenswrapper[4806]: I1125 14:53:34.194084 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:34Z","lastTransitionTime":"2025-11-25T14:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:34 crc kubenswrapper[4806]: I1125 14:53:34.296993 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:34 crc kubenswrapper[4806]: I1125 14:53:34.297045 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:34 crc kubenswrapper[4806]: I1125 14:53:34.297060 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:34 crc kubenswrapper[4806]: I1125 14:53:34.297079 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:34 crc kubenswrapper[4806]: I1125 14:53:34.297091 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:34Z","lastTransitionTime":"2025-11-25T14:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:34 crc kubenswrapper[4806]: I1125 14:53:34.399043 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:34 crc kubenswrapper[4806]: I1125 14:53:34.399079 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:34 crc kubenswrapper[4806]: I1125 14:53:34.399087 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:34 crc kubenswrapper[4806]: I1125 14:53:34.399100 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:34 crc kubenswrapper[4806]: I1125 14:53:34.399109 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:34Z","lastTransitionTime":"2025-11-25T14:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:34 crc kubenswrapper[4806]: I1125 14:53:34.501837 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:34 crc kubenswrapper[4806]: I1125 14:53:34.502099 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:34 crc kubenswrapper[4806]: I1125 14:53:34.502174 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:34 crc kubenswrapper[4806]: I1125 14:53:34.502247 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:34 crc kubenswrapper[4806]: I1125 14:53:34.502329 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:34Z","lastTransitionTime":"2025-11-25T14:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:34 crc kubenswrapper[4806]: I1125 14:53:34.604134 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:34 crc kubenswrapper[4806]: I1125 14:53:34.604451 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:34 crc kubenswrapper[4806]: I1125 14:53:34.604549 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:34 crc kubenswrapper[4806]: I1125 14:53:34.604624 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:34 crc kubenswrapper[4806]: I1125 14:53:34.604682 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:34Z","lastTransitionTime":"2025-11-25T14:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:34 crc kubenswrapper[4806]: I1125 14:53:34.707019 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:34 crc kubenswrapper[4806]: I1125 14:53:34.707066 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:34 crc kubenswrapper[4806]: I1125 14:53:34.707077 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:34 crc kubenswrapper[4806]: I1125 14:53:34.707092 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:34 crc kubenswrapper[4806]: I1125 14:53:34.707104 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:34Z","lastTransitionTime":"2025-11-25T14:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:34 crc kubenswrapper[4806]: I1125 14:53:34.808930 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:34 crc kubenswrapper[4806]: I1125 14:53:34.808978 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:34 crc kubenswrapper[4806]: I1125 14:53:34.808986 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:34 crc kubenswrapper[4806]: I1125 14:53:34.809002 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:34 crc kubenswrapper[4806]: I1125 14:53:34.809012 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:34Z","lastTransitionTime":"2025-11-25T14:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:34 crc kubenswrapper[4806]: I1125 14:53:34.911054 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:34 crc kubenswrapper[4806]: I1125 14:53:34.911102 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:34 crc kubenswrapper[4806]: I1125 14:53:34.911113 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:34 crc kubenswrapper[4806]: I1125 14:53:34.911129 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:34 crc kubenswrapper[4806]: I1125 14:53:34.911137 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:34Z","lastTransitionTime":"2025-11-25T14:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.012998 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.013037 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.013045 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.013061 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.013070 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:35Z","lastTransitionTime":"2025-11-25T14:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.088743 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.088770 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:53:35 crc kubenswrapper[4806]: E1125 14:53:35.088905 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.088980 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:53:35 crc kubenswrapper[4806]: E1125 14:53:35.089051 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:53:35 crc kubenswrapper[4806]: E1125 14:53:35.089143 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
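
Two failures are interleaved in the records above. Node-status patches are rejected because the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a TLS certificate that expired on 2025-08-24T17:21:41Z, three months before the node's clock of 2025-11-25; and new pod sandboxes cannot start because no CNI configuration exists yet. The certificate problem can be confirmed independently of the kubelet. A minimal Go sketch (not cluster tooling; only the address is taken from the log) that fetches the served certificate and prints its validity window:

    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
        "time"
    )

    func main() {
        // Webhook endpoint taken from the kubelet errors in this log.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
            // Deliberately skip verification: the point is to read the
            // expired certificate, not to fail the handshake the way
            // the kubelet (correctly) does.
            InsecureSkipVerify: true,
        })
        if err != nil {
            log.Fatalf("dial: %v", err)
        }
        defer conn.Close()

        leaf := conn.ConnectionState().PeerCertificates[0]
        fmt.Printf("subject:   %s\n", leaf.Subject)
        fmt.Printf("notBefore: %s\n", leaf.NotBefore.Format(time.RFC3339))
        fmt.Printf("notAfter:  %s\n", leaf.NotAfter.Format(time.RFC3339))
        if time.Now().After(leaf.NotAfter) {
            fmt.Println("certificate is expired, matching the x509 error above")
        }
    }

With InsecureSkipVerify set the handshake succeeds despite the expiry, so the dates can be read out; the kubelet, which verifies properly, fails exactly as logged.
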
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.089662 4806 scope.go:117] "RemoveContainer" containerID="3e82f349fde59423ab15775184687ea285fb55bdecd6aa2ad7d6ce44289511dd" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.106026 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mwdqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cb67ea6e13645bb6513fcd0d90197317fd39f29f25deddf11e1110572e1986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-dbntn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mwdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.115236 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.115463 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.115538 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.115610 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.115902 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:35Z","lastTransitionTime":"2025-11-25T14:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.118750 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39baff20-1e9a-48b1-8872-155c5ad5931d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a777eee4ec1617baa0196c39314ff92a8111ee97fecdabcff51f8d23a732aed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://657c6813fefe5a305c46e51a3be90de3dda23db56498959efe41f94d05e8f54d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kclf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.129886 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2mmdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a29a188-9022-41a4-8f1f-4a3274ffe3f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dc124b078217075b4e38f7b144af41d258e32283392fe2909cf227a9902012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhrx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df53d52334de68ebecc9283d36720b9734a8a410af99e1ae3566979e52cb6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhrx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:
22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2mmdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.146392 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"228a80dc-3be5-4125-9d07-c8eb262a0eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fb517d9c8fca06d95f26ed65bbc78b53f6c555870af6ebd15afe2d5177f2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name
\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\
\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zt8m9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.157485 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jcq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9137647d-1ca0-49be-b482-8d04428e5325\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae80eda0f08f87065d8a30cf17d80f2be06e3b4a812798747aaed9cf6be1e82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dttnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jcq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.170166 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lsrxh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e22ad0-2903-4ed0-94ad-40d713f99c9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cz9hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cz9hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lsrxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.183101 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5caad4136ada520798bb17c547f59cb7ec6e5310c1e4737bf82dc97e1b992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.195372 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08d31864859b0077dd2e586c093eb3865537fd5567ad2b55ff61ba39cd3ee56e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b60a5efe869e1d377161b70ad123fbf1bc06d0089c96eaffb7ae8129a796c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.207747 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a68684-5473-4e76-bdb6-bc38a8640fed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae860c1b5905b9a0634efdd777741a3837e3e3cae53fd559afcc23ceeeabbfe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb9640bb56460bb7d1ed40effc6fa3cada4ee5369407b11a14c4a364b5b5c44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47127ed86ce2e9c91b3876f54c95e0df6039f549eaff06c28a9922f0b660aa53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560a6654dd1da05abbf52bb38ec0fd2ea0108fabdc14d4775dc3f8334cc9a1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.220232 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.220273 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.220286 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.220303 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.220545 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:35Z","lastTransitionTime":"2025-11-25T14:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
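
Note that network-node-identity-vrzqb, the pod backing the failing webhook, reports both its approver and webhook containers Running since 14:53:07: the process is healthy, it is only presenting a stale certificate. Its volumeMounts above show the webhook container reading certificates from /etc/webhook-cert/. A short Go sketch that parses a PEM certificate file and reports whether it has expired (the file name tls.crt inside that mount is an assumption, not something the log confirms):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        // Mount path from the webhook container's volumeMounts above;
        // "tls.crt" is a guessed file name for illustration.
        data, err := os.ReadFile("/etc/webhook-cert/tls.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("notAfter: %s expired: %v\n",
            cert.NotAfter.Format(time.RFC3339),
            time.Now().After(cert.NotAfter))
    }
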
Has your network provider started?"} Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.221513 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47d8c7e405f12dbf2de2c53f2dc4309d02278887a2666a9ec491d490c5ce3f89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.235066 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.247265 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5lhpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57550f59-b31f-43c1-adca-565f246d4083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b926db706959d0dcd4f9d9366913105ae3b70b4cb2da018ed9e778ceb84cc5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfx48\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5lhpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.262753 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-c
rc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:52:51.509929 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:52:51.511569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2779674049/tls.crt::/tmp/serving-cert-2779674049/tls.key\\\\\\\"\\\\nI1125 14:53:06.763374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:53:06.765539 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:53:06.765558 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:53:06.765579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:53:06.765584 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:53:06.773062 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:53:06.773084 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773089 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:53:06.773097 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:53:06.773100 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:53:06.773103 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:53:06.773329 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:53:06.776065 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.278126 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.292036 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.316103 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fff40d8-fd9f-49da-953f-89894b4ef3a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e82f349fde59423ab15775184687ea285fb55bd
ecd6aa2ad7d6ce44289511dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e82f349fde59423ab15775184687ea285fb55bdecd6aa2ad7d6ce44289511dd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"message\\\":\\\"r 6 for removal\\\\nI1125 14:53:21.008919 6238 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1125 14:53:21.008796 6238 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:53:21.008993 6238 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:53:21.009024 6238 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 14:53:21.009041 6238 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1125 14:53:21.009012 6238 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 14:53:21.009194 6238 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 14:53:21.009261 6238 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:53:21.009397 6238 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:53:21.009742 6238 factory.go:656] Stopping watch factory\\\\nI1125 14:53:21.009760 6238 ovnkube.go:599] Stopped ovnkube\\\\nI1125 14:53:21.009783 6238 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 14:53:21.009837 6238 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-69wls_openshift-ovn-kubernetes(0fff40d8-fd9f-49da-953f-89894b4ef3a1)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69wls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.323720 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.323775 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.323787 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.323805 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.323818 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:35Z","lastTransitionTime":"2025-11-25T14:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.387762 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69wls_0fff40d8-fd9f-49da-953f-89894b4ef3a1/ovnkube-controller/1.log" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.390266 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" event={"ID":"0fff40d8-fd9f-49da-953f-89894b4ef3a1","Type":"ContainerStarted","Data":"62b70f432c6dd0b1c618daa1a0ab62a0cb297db059eca02b5eea3ba2ab687166"} Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.390416 4806 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.407792 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mwdqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cb67ea6e13645bb6513fcd0d90197317fd39f29f25deddf11e1110572e1986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\
\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbntn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mwdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.423487 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39baff20-1e9a-48b1-8872-155c5ad5931d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a777eee4ec1617baa0196c39314ff92a8111ee97fecdabcff51f8d23a732aed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://657c6813fefe5a305c46e51a3be90de3dda23db56498959efe41f94d05e8f54d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc
0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kclf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.426224 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.426269 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.426281 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.426299 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.426343 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:35Z","lastTransitionTime":"2025-11-25T14:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.440268 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2mmdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a29a188-9022-41a4-8f1f-4a3274ffe3f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dc124b078217075b4e38f7b144af41d258e32283392fe2909cf227a9902012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhrx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df53d52334de68ebecc9283d36720b9734a8a410af99e1ae3566979e52cb6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhrx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2mmdk\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.452451 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jcq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9137647d-1ca0-49be-b482-8d04428e5325\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae80eda0f08f87065d8a30cf17d80f2be06e3b4a812798747aaed9cf6be1e82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dttnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jcq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.463842 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lsrxh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e22ad0-2903-4ed0-94ad-40d713f99c9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cz9hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cz9hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lsrxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.477072 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5caad4136ada520798bb17c547f59cb7ec6e5310c1e4737bf82dc97e1b992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.489877 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08d31864859b0077dd2e586c093eb3865537fd5567ad2b55ff61ba39cd3ee56e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b60a5efe869e1d377161b70ad123fbf1bc06d0089c96eaffb7ae8129a796c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.508833 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"228a80dc-3be5-4125-9d07-c8eb262a0eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fb517d9c8fca06d95f26ed65bbc78b53f6c555870af6ebd15afe2d5177f2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zt8m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.525282 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a68684-5473-4e76-bdb6-bc38a8640fed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae860c1b5905b9a0634efdd777741a3837e3e3cae53fd559afcc23ceeeabbfe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb9640bb56460bb7d1ed40effc6fa3cada4ee5369407b11a14c4a364b5b5c44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47127ed86ce2e9c91b3876f54c95e0df6039f549eaff06c28a9922f0b660aa53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560a6654dd1da05abbf52bb38ec0fd2ea0108fabdc14d4775dc3f8334cc9a1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.528842 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.529028 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.529122 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.529236 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.529330 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:35Z","lastTransitionTime":"2025-11-25T14:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.542544 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47d8c7e405f12dbf2de2c53f2dc4309d02278887a2666a9ec491d490c5ce3f89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.558172 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.570489 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5lhpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57550f59-b31f-43c1-adca-565f246d4083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b926db706959d0dcd4f9d9366913105ae3b70b4cb2da018ed9e778ceb84cc5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfx48\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5lhpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.590273 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-c
rc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:52:51.509929 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:52:51.511569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2779674049/tls.crt::/tmp/serving-cert-2779674049/tls.key\\\\\\\"\\\\nI1125 14:53:06.763374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:53:06.765539 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:53:06.765558 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:53:06.765579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:53:06.765584 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:53:06.773062 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:53:06.773084 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773089 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:53:06.773097 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:53:06.773100 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:53:06.773103 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:53:06.773329 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:53:06.776065 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.608006 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.621098 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.632298 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.632412 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.632421 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.632437 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.632446 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:35Z","lastTransitionTime":"2025-11-25T14:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.639188 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fff40d8-fd9f-49da-953f-89894b4ef3a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b70f432c6dd0b1c618daa1a0ab62a0cb297db059eca02b5eea3ba2ab687166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e82f349fde59423ab15775184687ea285fb55bdecd6aa2ad7d6ce44289511dd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"message\\\":\\\"r 6 for removal\\\\nI1125 14:53:21.008919 6238 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1125 14:53:21.008796 6238 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:53:21.008993 6238 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:53:21.009024 6238 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 14:53:21.009041 6238 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1125 14:53:21.009012 6238 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 14:53:21.009194 6238 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 14:53:21.009261 6238 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:53:21.009397 6238 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:53:21.009742 6238 factory.go:656] Stopping watch factory\\\\nI1125 14:53:21.009760 6238 ovnkube.go:599] Stopped ovnkube\\\\nI1125 14:53:21.009783 6238 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 14:53:21.009837 6238 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69wls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.734269 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.734951 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.735039 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.735122 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.735201 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:35Z","lastTransitionTime":"2025-11-25T14:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.837771 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.838018 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.838160 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.838280 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.838423 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:35Z","lastTransitionTime":"2025-11-25T14:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.941280 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.941340 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.941352 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.941386 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:35 crc kubenswrapper[4806]: I1125 14:53:35.941400 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:35Z","lastTransitionTime":"2025-11-25T14:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.044479 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.044524 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.044551 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.044569 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.044579 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:36Z","lastTransitionTime":"2025-11-25T14:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.088856 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:53:36 crc kubenswrapper[4806]: E1125 14:53:36.089018 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.146539 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.146584 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.146593 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.146606 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.146615 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:36Z","lastTransitionTime":"2025-11-25T14:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.248587 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.248649 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.248659 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.248676 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.248686 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:36Z","lastTransitionTime":"2025-11-25T14:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.351100 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.351435 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.351502 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.351693 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.351784 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:36Z","lastTransitionTime":"2025-11-25T14:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.394692 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69wls_0fff40d8-fd9f-49da-953f-89894b4ef3a1/ovnkube-controller/2.log" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.395193 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69wls_0fff40d8-fd9f-49da-953f-89894b4ef3a1/ovnkube-controller/1.log" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.397752 4806 generic.go:334] "Generic (PLEG): container finished" podID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerID="62b70f432c6dd0b1c618daa1a0ab62a0cb297db059eca02b5eea3ba2ab687166" exitCode=1 Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.397790 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" event={"ID":"0fff40d8-fd9f-49da-953f-89894b4ef3a1","Type":"ContainerDied","Data":"62b70f432c6dd0b1c618daa1a0ab62a0cb297db059eca02b5eea3ba2ab687166"} Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.397823 4806 scope.go:117] "RemoveContainer" containerID="3e82f349fde59423ab15775184687ea285fb55bdecd6aa2ad7d6ce44289511dd" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.398459 4806 scope.go:117] "RemoveContainer" containerID="62b70f432c6dd0b1c618daa1a0ab62a0cb297db059eca02b5eea3ba2ab687166" Nov 25 14:53:36 crc kubenswrapper[4806]: E1125 14:53:36.398599 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-69wls_openshift-ovn-kubernetes(0fff40d8-fd9f-49da-953f-89894b4ef3a1)\"" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.412699 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5lhpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"57550f59-b31f-43c1-adca-565f246d4083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b926db706959d0dcd4f9d9366913105ae3b70b4cb2da018ed9e778ceb84cc5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfx48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5lhpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:36Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.425949 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a68684-5473-4e76-bdb6-bc38a8640fed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae860c1b5905b9a0634efdd777741a3837e3e3cae53fd559afcc23ceeeabbfe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb9640bb56460bb7d1ed40effc6fa3cada4ee5369407b11a14c4a364b5b5c44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47127ed86ce2e9c91b3876f54c95e0df6039f549eaff06c28a9922f0b660aa53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560a6654dd1da05abbf52bb38ec0fd2ea0108fabdc14d4775dc3f8334cc9a1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:36Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.438642 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47d8c7e405f12dbf2de2c53f2dc4309d02278887a2666a9ec491d490c5ce3f89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-25T14:53:36Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.454889 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.454939 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.454952 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.454970 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.454982 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:36Z","lastTransitionTime":"2025-11-25T14:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.455382 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:36Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.474217 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fff40d8-fd9f-49da-953f-89894b4ef3a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b70f432c6dd0b1c618daa1a0ab62a0cb297db0
59eca02b5eea3ba2ab687166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e82f349fde59423ab15775184687ea285fb55bdecd6aa2ad7d6ce44289511dd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"message\\\":\\\"r 6 for removal\\\\nI1125 14:53:21.008919 6238 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1125 14:53:21.008796 6238 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:53:21.008993 6238 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:53:21.009024 6238 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 14:53:21.009041 6238 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1125 14:53:21.009012 6238 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 14:53:21.009194 6238 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 14:53:21.009261 6238 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:53:21.009397 6238 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:53:21.009742 6238 factory.go:656] Stopping watch factory\\\\nI1125 14:53:21.009760 6238 ovnkube.go:599] Stopped ovnkube\\\\nI1125 14:53:21.009783 6238 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 14:53:21.009837 6238 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62b70f432c6dd0b1c618daa1a0ab62a0cb297db059eca02b5eea3ba2ab687166\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:53:35Z\\\",\\\"message\\\":\\\"ler.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1125 14:53:35.941874 6451 services_controller.go:360] Finished syncing service oauth-openshift on namespace openshift-authentication for network=default : 1.676717ms\\\\nI1125 14:53:35.941890 6451 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1125 14:53:35.941286 6451 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI1125 14:53:35.941952 6451 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1125 14:53:35.941307 6451 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-zt8m9\\\\nF1125 14:53:35.941961 6451 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped 
already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling we\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\
\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69wls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:36Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.487661 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:52:51.509929 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:52:51.511569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2779674049/tls.crt::/tmp/serving-cert-2779674049/tls.key\\\\\\\"\\\\nI1125 14:53:06.763374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:53:06.765539 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:53:06.765558 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:53:06.765579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:53:06.765584 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:53:06.773062 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:53:06.773084 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773089 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:53:06.773097 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:53:06.773100 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:53:06.773103 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:53:06.773329 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:53:06.776065 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:36Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.500198 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:36Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.512370 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:36Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.521508 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2mmdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a29a188-9022-41a4-8f1f-4a3274ffe3f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dc124b078217075b4e38f7b144af41d258e32283392fe2909cf227a9902012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhrx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df53d52334de68ebecc9283d36720b9734a8a410af99e1ae3566979e52cb6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhrx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2mmdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:36Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.532424 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mwdqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cb67ea6e13645bb6513fcd0d90197317fd39f29f25deddf11e1110572e1986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\
\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbntn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mwdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:36Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.542221 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39baff20-1e9a-48b1-8872-155c5ad5931d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a777eee4ec1617baa0196c39314ff92a8111ee97fecdabcff51f8d23a732aed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://657c6813fefe5a305c46e51a3be90de3dda23db56498959efe41f94d05e8f54d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kclf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:36Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.554028 4806 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08d31864859b0077dd2e586c093eb3865537fd5567ad2b55ff61ba39cd3ee56e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b60a5efe869e1d377161b70ad123fbf1bc06d0089c96eaffb7ae8129a796c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:36Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.562669 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.562714 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:36 
crc kubenswrapper[4806]: I1125 14:53:36.562725 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.562741 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.562752 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:36Z","lastTransitionTime":"2025-11-25T14:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.568444 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"228a80dc-3be5-4125-9d07-c8eb262a0eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fb517d9c8fca06d95f26ed65bbc78b53f6c555870af6ebd15afe2d5177f2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22
590743b9c1e795803ba329b13c8911\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"image\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zt8m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:36Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.579369 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jcq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9137647d-1ca0-49be-b482-8d04428e5325\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae80eda0f08f87065d8a30cf17d80f2be06e3b4a812798747aaed9cf6be1e82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dttnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jcq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:36Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.589637 4806 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-lsrxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e22ad0-2903-4ed0-94ad-40d713f99c9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cz9hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cz9hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lsrxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:36Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.600784 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5caad4136ada520798bb17c547f59cb7ec6e5310c1e4737bf82dc97e1b992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:36Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.664855 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.665143 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.665239 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.665342 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.665421 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:36Z","lastTransitionTime":"2025-11-25T14:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.767581 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.767614 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.767623 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.767635 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.767644 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:36Z","lastTransitionTime":"2025-11-25T14:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.869511 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.869551 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.869560 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.869577 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.869586 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:36Z","lastTransitionTime":"2025-11-25T14:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.971980 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.972017 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.972028 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.972043 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:36 crc kubenswrapper[4806]: I1125 14:53:36.972052 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:36Z","lastTransitionTime":"2025-11-25T14:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.074938 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.074984 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.074995 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.075013 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.075025 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:37Z","lastTransitionTime":"2025-11-25T14:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.088401 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.088449 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.088449 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:53:37 crc kubenswrapper[4806]: E1125 14:53:37.088519 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:53:37 crc kubenswrapper[4806]: E1125 14:53:37.088602 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:53:37 crc kubenswrapper[4806]: E1125 14:53:37.088715 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f" Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.177266 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.177329 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.177342 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.177358 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.177369 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:37Z","lastTransitionTime":"2025-11-25T14:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.279051 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.279089 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.279101 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.279159 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.279172 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:37Z","lastTransitionTime":"2025-11-25T14:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.381609 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.381651 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.381661 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.381676 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.381686 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:37Z","lastTransitionTime":"2025-11-25T14:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.402849 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69wls_0fff40d8-fd9f-49da-953f-89894b4ef3a1/ovnkube-controller/2.log" Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.485251 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.485290 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.485299 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.485327 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.485343 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:37Z","lastTransitionTime":"2025-11-25T14:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.587076 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.587115 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.587125 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.587138 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.587146 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:37Z","lastTransitionTime":"2025-11-25T14:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.689784 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.690003 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.690098 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.690168 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.690239 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:37Z","lastTransitionTime":"2025-11-25T14:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.793008 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.793060 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.793076 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.793099 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.793116 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:37Z","lastTransitionTime":"2025-11-25T14:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.895879 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.895958 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.895970 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.896017 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.896027 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:37Z","lastTransitionTime":"2025-11-25T14:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.998748 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.998807 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.998816 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.998831 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:37 crc kubenswrapper[4806]: I1125 14:53:37.998841 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:37Z","lastTransitionTime":"2025-11-25T14:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.088831 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:53:38 crc kubenswrapper[4806]: E1125 14:53:38.089004 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.101334 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a68684-5473-4e76-bdb6-bc38a8640fed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae860c1b5905b9a0634efdd777741a3837e3e3cae53fd559afcc23ceeeabbfe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb9640bb56460bb7d1ed40effc6fa3cada4ee5369407b11a14c4a364b5b5c44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47127ed86ce2e9c91b3876f54c95e0df6039f549eaff06c28a9922f0b660aa53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560a6654dd1da05abbf52bb38ec0fd2ea0108fabdc14d4775dc3f8334cc9a1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:38Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.101445 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.101474 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.101485 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.101501 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.101512 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:38Z","lastTransitionTime":"2025-11-25T14:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.112688 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47d8c7e405f12dbf2de2c53f2dc4309d02278887a2666a9ec491d490c5ce3f89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:38Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.124632 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:38Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.134893 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5lhpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57550f59-b31f-43c1-adca-565f246d4083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b926db706959d0dcd4f9d9366913105ae3b70b4cb2da018ed9e778ceb84cc5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfx48\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5lhpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:38Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.148136 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-c
rc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:52:51.509929 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:52:51.511569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2779674049/tls.crt::/tmp/serving-cert-2779674049/tls.key\\\\\\\"\\\\nI1125 14:53:06.763374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:53:06.765539 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:53:06.765558 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:53:06.765579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:53:06.765584 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:53:06.773062 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:53:06.773084 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773089 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:53:06.773097 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:53:06.773100 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:53:06.773103 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:53:06.773329 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:53:06.776065 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:38Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.159470 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:38Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.169441 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:38Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.184583 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fff40d8-fd9f-49da-953f-89894b4ef3a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b70f432c6dd0b1c618daa1a0ab62a0cb297db0
59eca02b5eea3ba2ab687166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3e82f349fde59423ab15775184687ea285fb55bdecd6aa2ad7d6ce44289511dd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"message\\\":\\\"r 6 for removal\\\\nI1125 14:53:21.008919 6238 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1125 14:53:21.008796 6238 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:53:21.008993 6238 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:53:21.009024 6238 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 14:53:21.009041 6238 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1125 14:53:21.009012 6238 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1125 14:53:21.009194 6238 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1125 14:53:21.009261 6238 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:53:21.009397 6238 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:53:21.009742 6238 factory.go:656] Stopping watch factory\\\\nI1125 14:53:21.009760 6238 ovnkube.go:599] Stopped ovnkube\\\\nI1125 14:53:21.009783 6238 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1125 14:53:21.009837 6238 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62b70f432c6dd0b1c618daa1a0ab62a0cb297db059eca02b5eea3ba2ab687166\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:53:35Z\\\",\\\"message\\\":\\\"ler.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1125 14:53:35.941874 6451 services_controller.go:360] Finished syncing service oauth-openshift on namespace openshift-authentication for network=default : 1.676717ms\\\\nI1125 14:53:35.941890 6451 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1125 14:53:35.941286 6451 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI1125 14:53:35.941952 6451 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1125 14:53:35.941307 6451 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-zt8m9\\\\nF1125 14:53:35.941961 6451 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped 
already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling we\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\
\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69wls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:38Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.202957 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.203261 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.203283 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.203302 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.203374 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:38Z","lastTransitionTime":"2025-11-25T14:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.206207 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mwdqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cb67ea6e13645bb6513fcd0d90197317fd39f29f25deddf11e1110572e1986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbntn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mwdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:38Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.216720 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39baff20-1e9a-48b1-8872-155c5ad5931d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a777eee4ec1617baa0196c39314ff92a8111ee97fecdabcff51f8d23a732aed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://657c6813fefe5a305c46e51a3be90de3dda23db56498959efe41f94d05e8f54d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kclf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:38Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.229163 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2mmdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a29a188-9022-41a4-8f1f-4a3274ffe3f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dc124b078217075b4e38f7b144af41d258e32283392fe2909cf227a9902012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhrx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df53d52334de68ebecc9283d36720b9734a8a410af99e1ae3566979e52cb6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhrx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2mmdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:38Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.241294 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"228a80dc-3be5-4125-9d07-c8eb262a0eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fb517d9c8fca06d95f26ed65bbc78b53f6c555870af6ebd15afe2d5177f2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803
ba329b13c8911\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"image\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zt8m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:38Z is after 2025-08-24T17:21:41Z"
Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.250733 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jcq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9137647d-1ca0-49be-b482-8d04428e5325\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae80eda0f08f87065d8a30cf17d80f2be06e3b4a812798747aaed9cf6be1e82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dttnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jcq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:38Z is after 2025-08-24T17:21:41Z"
Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.259434 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lsrxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e22ad0-2903-4ed0-94ad-40d713f99c9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cz9hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cz9hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lsrxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:38Z is after 2025-08-24T17:21:41Z"
Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.270760 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5caad4136ada520798bb17c547f59cb7ec6e5310c1e4737bf82dc97e1b992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:38Z is after 2025-08-24T17:21:41Z"
Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.281217 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08d31864859b0077dd2e586c093eb3865537fd5567ad2b55ff61ba39cd3ee56e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b60a5efe869e1d377161b70ad123fbf1bc06d0089c96eaffb7ae8129a796c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:38Z is after 2025-08-24T17:21:41Z"
Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.305780 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.305819 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
node="crc" event="NodeHasSufficientPID" Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.305844 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.305854 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:38Z","lastTransitionTime":"2025-11-25T14:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.408358 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.408411 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.408421 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.408438 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.408447 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:38Z","lastTransitionTime":"2025-11-25T14:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.510817 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.510873 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.510884 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.510959 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.510974 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:38Z","lastTransitionTime":"2025-11-25T14:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.613660 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.613705 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.613716 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.613733 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.613743 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:38Z","lastTransitionTime":"2025-11-25T14:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.716712 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.717030 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.717135 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.717224 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.717342 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:38Z","lastTransitionTime":"2025-11-25T14:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.728606 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:53:38 crc kubenswrapper[4806]: E1125 14:53:38.728752 4806 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 14:53:38 crc kubenswrapper[4806]: E1125 14:53:38.728809 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 14:54:10.72878996 +0000 UTC m=+83.380932371 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.820215 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.820257 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.820268 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.820283 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.820294 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:38Z","lastTransitionTime":"2025-11-25T14:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.829625 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.829722 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.829754 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.829798 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:53:38 crc kubenswrapper[4806]: E1125 14:53:38.829910 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 14:53:38 crc kubenswrapper[4806]: E1125 14:53:38.829927 4806 projected.go:288] Couldn't get 
configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 14:53:38 crc kubenswrapper[4806]: E1125 14:53:38.829937 4806 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:53:38 crc kubenswrapper[4806]: E1125 14:53:38.829987 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 14:54:10.829972318 +0000 UTC m=+83.482114729 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:53:38 crc kubenswrapper[4806]: E1125 14:53:38.830137 4806 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 14:53:38 crc kubenswrapper[4806]: E1125 14:53:38.830230 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:54:10.830214125 +0000 UTC m=+83.482356546 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:53:38 crc kubenswrapper[4806]: E1125 14:53:38.830274 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 14:54:10.830264177 +0000 UTC m=+83.482406588 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 14:53:38 crc kubenswrapper[4806]: E1125 14:53:38.830368 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 14:53:38 crc kubenswrapper[4806]: E1125 14:53:38.830433 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 14:53:38 crc kubenswrapper[4806]: E1125 14:53:38.830495 4806 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:53:38 crc kubenswrapper[4806]: E1125 14:53:38.830579 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 14:54:10.830568165 +0000 UTC m=+83.482710576 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.922826 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.922870 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.922880 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.922896 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.922904 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:38Z","lastTransitionTime":"2025-11-25T14:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
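The mount failures above are not one-off errors; each "No retries permitted until ..." line schedules the next attempt with a delay that doubles per consecutive failure, which is why several volumes here are already waiting 32s. A toy model of that schedule, assuming the defaults used by kubelet's exponential backoff for volume operations (roughly 500ms initial delay, factor 2, capped at 2m2s; verify against your Kubernetes version's pkg/util/goroutinemap/exponentialbackoff for the authoritative values):

    # Reproduces the retry ladder implied by the log: ..., 8s, 16s, 32s, ...
    INITIAL = 0.5   # seconds; assumed kubelet default
    FACTOR = 2.0
    CAP = 122.0     # 2m2s; assumed kubelet default

    def duration_before_retry(consecutive_failures: int) -> float:
        """Delay scheduled after the Nth consecutive failure (1-based)."""
        return min(INITIAL * FACTOR ** (consecutive_failures - 1), CAP)

    for n in range(1, 10):
        print(n, duration_before_retry(n))

Under those assumptions the 16s delay seen just below (metrics-certs) and the 32s delays seen above correspond to the 6th and 7th consecutive failures of the same mount operation.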
Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.922826 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.922870 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.922880 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.922896 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.922904 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:38Z","lastTransitionTime":"2025-11-25T14:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:53:38 crc kubenswrapper[4806]: I1125 14:53:38.930335 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/49e22ad0-2903-4ed0-94ad-40d713f99c9f-metrics-certs\") pod \"network-metrics-daemon-lsrxh\" (UID: \"49e22ad0-2903-4ed0-94ad-40d713f99c9f\") " pod="openshift-multus/network-metrics-daemon-lsrxh"
Nov 25 14:53:38 crc kubenswrapper[4806]: E1125 14:53:38.930571 4806 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 25 14:53:38 crc kubenswrapper[4806]: E1125 14:53:38.930734 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49e22ad0-2903-4ed0-94ad-40d713f99c9f-metrics-certs podName:49e22ad0-2903-4ed0-94ad-40d713f99c9f nodeName:}" failed. No retries permitted until 2025-11-25 14:53:54.930709774 +0000 UTC m=+67.582852255 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/49e22ad0-2903-4ed0-94ad-40d713f99c9f-metrics-certs") pod "network-metrics-daemon-lsrxh" (UID: "49e22ad0-2903-4ed0-94ad-40d713f99c9f") : object "openshift-multus"/"metrics-daemon-secret" not registered
Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.025411 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.025457 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.025469 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.025484 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.025497 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:39Z","lastTransitionTime":"2025-11-25T14:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.089140 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh"
Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.089184 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.089271 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 14:53:39 crc kubenswrapper[4806]: E1125 14:53:39.089272 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f"
Nov 25 14:53:39 crc kubenswrapper[4806]: E1125 14:53:39.089399 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 14:53:39 crc kubenswrapper[4806]: E1125 14:53:39.089622 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.128074 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.128149 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.128161 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.128177 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.128191 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:39Z","lastTransitionTime":"2025-11-25T14:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.180175 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-69wls"
Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.181194 4806 scope.go:117] "RemoveContainer" containerID="62b70f432c6dd0b1c618daa1a0ab62a0cb297db059eca02b5eea3ba2ab687166"
Nov 25 14:53:39 crc kubenswrapper[4806]: E1125 14:53:39.181395 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-69wls_openshift-ovn-kubernetes(0fff40d8-fd9f-49da-953f-89894b4ef3a1)\"" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1"
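Here the kubelet gives up on ovnkube-controller for 20 seconds: the container's own termination message (captured in the ovnkube-node status entry further below) shows it exiting because it cannot set node annotations through the same expired webhook, so every restart fails and CrashLoopBackOff lengthens the wait. A small sketch of that restart schedule, assuming the widely documented kubelet behavior (10s base delay, doubling per crash, capped at 5m, and reset after 10 minutes of stable running):

    # CrashLoopBackOff ladder: 10s, 20s, 40s, ..., capped at 300s.
    def crash_loop_delay(backoff_step: int) -> int:
        """Seconds to wait at the given back-off step (1-based)."""
        return min(10 * 2 ** (backoff_step - 1), 300)

    for step in range(1, 7):
        print(step, crash_loop_delay(step))

The "back-off 20s" in the entry above is the second step of this ladder, consistent with the restartCount of 2 reported for the container below.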
name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:52:51.509929 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:52:51.511569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2779674049/tls.crt::/tmp/serving-cert-2779674049/tls.key\\\\\\\"\\\\nI1125 14:53:06.763374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:53:06.765539 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:53:06.765558 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:53:06.765579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:53:06.765584 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:53:06.773062 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:53:06.773084 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773089 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:53:06.773097 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:53:06.773100 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:53:06.773103 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:53:06.773329 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is 
complete\\\\nF1125 14:53:06.776065 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:39Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.205487 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:39Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.216822 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:39Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.230519 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.230578 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.230588 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.230603 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.230612 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:39Z","lastTransitionTime":"2025-11-25T14:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.234264 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fff40d8-fd9f-49da-953f-89894b4ef3a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b70f432c6dd0b1c618daa1a0ab62a0cb297db059eca02b5eea3ba2ab687166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62b70f432c6dd0b1c618daa1a0ab62a0cb297db059eca02b5eea3ba2ab687166\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:53:35Z\\\",\\\"message\\\":\\\"ler.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1125 14:53:35.941874 6451 services_controller.go:360] Finished syncing service oauth-openshift on namespace openshift-authentication for network=default : 1.676717ms\\\\nI1125 14:53:35.941890 6451 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1125 14:53:35.941286 6451 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI1125 14:53:35.941952 6451 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1125 14:53:35.941307 6451 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-zt8m9\\\\nF1125 14:53:35.941961 6451 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling we\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-69wls_openshift-ovn-kubernetes(0fff40d8-fd9f-49da-953f-89894b4ef3a1)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69wls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:39Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.247948 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mwdqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cb67ea6e13645bb6513fcd0d90197317fd39f29f25deddf11e1110572e1986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-
cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbntn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mwdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:39Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.259523 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39baff20-1e9a-48b1-8872-155c5ad5931d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a777eee4ec1617baa0196c39314ff92a8111ee97fecdabcff51f8d23a732aed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://657c6813fefe5a305c46e51a3be90de3dda23db56498959efe41f94d05e8f54d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kclf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:39Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.270650 4806 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2mmdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a29a188-9022-41a4-8f1f-4a3274ffe3f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dc124b078217075b4e38f7b144af41d258e32283392fe2909cf227a9902012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhrx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df53d52334de68ebecc9283d36720b9734a8a410af99e1ae3566979e52cb6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhrx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2mmdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:39Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.281528 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5caad4136ada520798bb17c547f59cb7ec6e5310c1e4737bf82dc97e1b992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:39Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.291278 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08d31864859b0077dd2e586c093eb3865537fd5567ad2b55ff61ba39cd3ee56e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b60a5efe869e1d377161b70ad123fbf1bc06d0089c96eaffb7ae8129a796c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:39Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.302916 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"228a80dc-3be5-4125-9d07-c8eb262a0eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fb517d9c8fca06d95f26ed65bbc78b53f6c555870af6ebd15afe2d5177f2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zt8m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:39Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.318717 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jcq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9137647d-1ca0-49be-b482-8d04428e5325\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae80eda0f08f87065d8a30cf17d80f2be06e3b4a812798747aaed9cf6be1e82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dttnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jcq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:39Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.328950 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lsrxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e22ad0-2903-4ed0-94ad-40d713f99c9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cz9hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cz9hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lsrxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:39Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.332374 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.332546 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.332753 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.332865 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.332948 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:39Z","lastTransitionTime":"2025-11-25T14:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.342632 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a68684-5473-4e76-bdb6-bc38a8640fed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae860c1b5905b9a0634efdd777741a3837e3e3cae53fd559afcc23ceeeabbfe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb9640bb56460bb7d1ed40effc6fa3cada4ee5369407b11a14c4a364b5b5c44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47127ed86ce2e9c91b3876f54c95e0df6039f549eaff06c28a9922f0b660aa53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560a6654dd1da05abbf52bb38ec0fd2ea0108fabdc14d4775dc3f8334cc9a1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:39Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.353699 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47d8c7e405f12dbf2de2c53f2dc4309d02278887a2666a9ec491d490c5ce3f89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:39Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.366360 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:39Z is after 2025-08-24T17:21:41Z"
Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.377723 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5lhpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57550f59-b31f-43c1-adca-565f246d4083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b926db706959d0dcd4f9d9366913105ae3b70b4cb2da018ed9e778ceb84cc5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfx48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5lhpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:39Z is after 2025-08-24T17:21:41Z"
Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.434839 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.434890 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.434898 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.434912 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.434921 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:39Z","lastTransitionTime":"2025-11-25T14:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.536987 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.537057 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.537080 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.537109 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.537135 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:39Z","lastTransitionTime":"2025-11-25T14:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.639219 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.639483 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.639563 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.639676 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.639756 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:39Z","lastTransitionTime":"2025-11-25T14:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.741922 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.741953 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.741963 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.741976 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.741984 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:39Z","lastTransitionTime":"2025-11-25T14:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.845063 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.845116 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.845126 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.845141 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.845152 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:39Z","lastTransitionTime":"2025-11-25T14:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.947098 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.947144 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.947155 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.947169 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:53:39 crc kubenswrapper[4806]: I1125 14:53:39.947179 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:39Z","lastTransitionTime":"2025-11-25T14:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.049804 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.050066 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.050144 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.050325 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.050437 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:40Z","lastTransitionTime":"2025-11-25T14:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.089370 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.152773 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.153038 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.153049 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.153064 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.153073 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:40Z","lastTransitionTime":"2025-11-25T14:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.254794 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.254997 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.255103 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.255345 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.255526 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:40Z","lastTransitionTime":"2025-11-25T14:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.358365 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.358573 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.358658 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.358729 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.358806 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:40Z","lastTransitionTime":"2025-11-25T14:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.461074 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.461126 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.461139 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.461157 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.461170 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:40Z","lastTransitionTime":"2025-11-25T14:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.564067 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.564413 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.564424 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.564441 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.564452 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:40Z","lastTransitionTime":"2025-11-25T14:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.667015 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.667058 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.667067 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.667083 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.667098 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:40Z","lastTransitionTime":"2025-11-25T14:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.769196 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.769468 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.769646 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.769813 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.769984 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:40Z","lastTransitionTime":"2025-11-25T14:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.871820 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.872360 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.872454 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.872520 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.872575 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:40Z","lastTransitionTime":"2025-11-25T14:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.974513 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.974765 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.974844 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.974921 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:40 crc kubenswrapper[4806]: I1125 14:53:40.974988 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:40Z","lastTransitionTime":"2025-11-25T14:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.076913 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.076966 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.076976 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.076993 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.077005 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:41Z","lastTransitionTime":"2025-11-25T14:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.089205 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.089254 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.089218 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:53:41 crc kubenswrapper[4806]: E1125 14:53:41.089373 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:53:41 crc kubenswrapper[4806]: E1125 14:53:41.089471 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f" Nov 25 14:53:41 crc kubenswrapper[4806]: E1125 14:53:41.089564 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.178976 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.179016 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.179028 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.179044 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.179055 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:41Z","lastTransitionTime":"2025-11-25T14:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.205692 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.216556 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.220491 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mwdqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cb67ea6e13645bb6513fcd0d90197317fd39f29f25deddf11e1110572e1986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",
\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbntn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mwdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:41Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.234348 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39baff20-1e9a-48b1-8872-155c5ad5931d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a777eee4ec1617baa0196c39314ff92a8111ee97fecdabcff51f8d23a732aed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://657c6813fefe5a305c46e51a3be90de3dda23db56498959efe41f94d05e8f54d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\
\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kclf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:41Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.247511 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2mmdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a29a188-9022-41a4-8f1f-4a3274ffe3f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dc124b078217075b4e38f7b144af41d258e32283392fe2909cf227a9902012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhrx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df53d52334de68ebecc9283d36720b9734a8a410af99e1ae3566979e52cb6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluste
r-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhrx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2mmdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:41Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.261772 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lsrxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e22ad0-2903-4ed0-94ad-40d713f99c9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cz9hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cz9hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lsrxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:41Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.277001 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5caad4136ada520798bb17c547f59cb7ec6e5310c1e4737bf82dc97e1b992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:41Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.287158 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.287222 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.287236 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.287261 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.287280 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:41Z","lastTransitionTime":"2025-11-25T14:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.296870 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08d31864859b0077dd2e586c093eb3865537fd5567ad2b55ff61ba39cd3ee56e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b60a5efe869e1d377161b70ad123fbf1bc06d0089c96eaffb7ae8129a796c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:41Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.336661 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"228a80dc-3be5-4125-9d07-c8eb262a0eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fb517d9c8fca06d95f26ed65bbc78b53f6c555870af6ebd15afe2d5177f2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zt8m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:41Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.354013 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jcq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9137647d-1ca0-49be-b482-8d04428e5325\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae80eda0f08f87065d8a30cf17d80f2be06e3b4a812798747aaed9cf6be1e82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dttnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jcq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:41Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.373969 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a68684-5473-4e76-bdb6-bc38a8640fed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae860c1b5905b9a0634efdd777741a3837e3e3cae53fd559afcc23ceeeabbfe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb9640bb56460bb7d1ed40effc6fa3cada4ee5369407b11a14c4a364b5b5c44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47127ed86ce2e9c91b3876f54c95e0df6039f549eaff06c28a9922f0b660aa53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560a6654dd1da05abbf52bb38ec0fd2ea0108fabdc14d4775dc3f8334cc9a1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:41Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.388144 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47d8c7e405f12dbf2de2c53f2dc4309d02278887a2666a9ec491d490c5ce3f89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-25T14:53:41Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.389045 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.389082 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.389092 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.389106 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.389118 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:41Z","lastTransitionTime":"2025-11-25T14:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.402091 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:41Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.414396 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5lhpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57550f59-b31f-43c1-adca-565f246d4083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b926db706959d0dcd4f9d9366913105ae3b70b4cb2da018ed9e778ceb84cc5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfx48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5lhpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:41Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.429510 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:52:51.509929 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:52:51.511569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2779674049/tls.crt::/tmp/serving-cert-2779674049/tls.key\\\\\\\"\\\\nI1125 14:53:06.763374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:53:06.765539 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:53:06.765558 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:53:06.765579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:53:06.765584 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:53:06.773062 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:53:06.773084 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773089 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:53:06.773097 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:53:06.773100 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:53:06.773103 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:53:06.773329 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:53:06.776065 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:41Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.442111 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:41Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.453759 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:41Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.471619 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fff40d8-fd9f-49da-953f-89894b4ef3a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b70f432c6dd0b1c618daa1a0ab62a0cb297db0
59eca02b5eea3ba2ab687166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62b70f432c6dd0b1c618daa1a0ab62a0cb297db059eca02b5eea3ba2ab687166\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:53:35Z\\\",\\\"message\\\":\\\"ler.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1125 14:53:35.941874 6451 services_controller.go:360] Finished syncing service oauth-openshift on namespace openshift-authentication for network=default : 1.676717ms\\\\nI1125 14:53:35.941890 6451 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1125 14:53:35.941286 6451 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI1125 14:53:35.941952 6451 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1125 14:53:35.941307 6451 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-zt8m9\\\\nF1125 14:53:35.941961 6451 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling we\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-69wls_openshift-ovn-kubernetes(0fff40d8-fd9f-49da-953f-89894b4ef3a1)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69wls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:41Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.491841 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.491874 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.491885 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.491903 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.491915 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:41Z","lastTransitionTime":"2025-11-25T14:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.594513 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.594548 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.594557 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.594572 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.594580 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:41Z","lastTransitionTime":"2025-11-25T14:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.696842 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.696878 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.696887 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.696899 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.696908 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:41Z","lastTransitionTime":"2025-11-25T14:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.800598 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.800646 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.800660 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.800680 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.800694 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:41Z","lastTransitionTime":"2025-11-25T14:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.903648 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.903708 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.903729 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.903748 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:41 crc kubenswrapper[4806]: I1125 14:53:41.903762 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:41Z","lastTransitionTime":"2025-11-25T14:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.006392 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.006437 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.006447 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.006464 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.006475 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:42Z","lastTransitionTime":"2025-11-25T14:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.089031 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:53:42 crc kubenswrapper[4806]: E1125 14:53:42.089258 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.109141 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.109200 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.109215 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.109232 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.109243 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:42Z","lastTransitionTime":"2025-11-25T14:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.211812 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.211864 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.211878 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.211898 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.211910 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:42Z","lastTransitionTime":"2025-11-25T14:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.314911 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.315396 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.315499 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.315576 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.315678 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:42Z","lastTransitionTime":"2025-11-25T14:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.418008 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.418090 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.418107 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.418134 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.418150 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:42Z","lastTransitionTime":"2025-11-25T14:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.520648 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.520703 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.520717 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.520732 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.520745 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:42Z","lastTransitionTime":"2025-11-25T14:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.623141 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.623195 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.623205 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.623222 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.623232 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:42Z","lastTransitionTime":"2025-11-25T14:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.726510 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.726585 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.726601 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.726633 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.726671 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:42Z","lastTransitionTime":"2025-11-25T14:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.829708 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.829762 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.829776 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.829797 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.829813 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:42Z","lastTransitionTime":"2025-11-25T14:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.932661 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.932713 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.932723 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.932738 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:42 crc kubenswrapper[4806]: I1125 14:53:42.932750 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:42Z","lastTransitionTime":"2025-11-25T14:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.035292 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.035347 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.035358 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.035373 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.035384 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:43Z","lastTransitionTime":"2025-11-25T14:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.089439 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.089449 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.089494 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:53:43 crc kubenswrapper[4806]: E1125 14:53:43.089742 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:53:43 crc kubenswrapper[4806]: E1125 14:53:43.089831 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f" Nov 25 14:53:43 crc kubenswrapper[4806]: E1125 14:53:43.089974 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.138678 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.138714 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.138723 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.138737 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.138748 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:43Z","lastTransitionTime":"2025-11-25T14:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.241203 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.241286 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.241299 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.241363 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.241374 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:43Z","lastTransitionTime":"2025-11-25T14:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.343529 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.343615 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.343632 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.343651 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.343663 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:43Z","lastTransitionTime":"2025-11-25T14:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.446305 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.446380 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.446392 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.446410 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.446421 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:43Z","lastTransitionTime":"2025-11-25T14:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.549331 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.549639 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.549736 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.549830 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.549919 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:43Z","lastTransitionTime":"2025-11-25T14:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.652649 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.653017 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.653260 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.653503 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.653705 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:43Z","lastTransitionTime":"2025-11-25T14:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.756551 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.756602 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.756652 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.756674 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.756699 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:43Z","lastTransitionTime":"2025-11-25T14:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.859831 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.859871 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.859880 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.859896 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.859906 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:43Z","lastTransitionTime":"2025-11-25T14:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.963002 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.963065 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.963076 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.963092 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:43 crc kubenswrapper[4806]: I1125 14:53:43.963102 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:43Z","lastTransitionTime":"2025-11-25T14:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.066152 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.066409 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.066474 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.066543 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.066605 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:44Z","lastTransitionTime":"2025-11-25T14:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.084483 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.084549 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.084562 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.084601 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.084615 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:44Z","lastTransitionTime":"2025-11-25T14:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.088755 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:53:44 crc kubenswrapper[4806]: E1125 14:53:44.088947 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:53:44 crc kubenswrapper[4806]: E1125 14:53:44.098920 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e0f8c346-5c5c-4b9e-86cb-75f930f1dadc\\\",\\\"systemUUID\\\":\\\"c9d50cce-a734-4456-ad77-ec687d096f9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:44Z is after 2025-08-24T17:21:41Z"
Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.103140 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.103205 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
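At this point the failure changes character: the status patch itself is rejected because the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 is serving a certificate that expired on 2025-08-24T17:21:41Z, months before the node's current clock of 2025-11-25. A quick way to confirm what the endpoint is actually presenting is to dial it and read the leaf certificate's validity window; a minimal stdlib sketch (verification is skipped on purpose so the handshake survives the expired certificate; the address is taken from the error above):

```go
// certprobe.go — report the validity window of the certificate served on
// 127.0.0.1:9743, the webhook endpoint named in the patch error above.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // inspection only; normal verification fails on an expired cert
	})
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	leaf := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject=%s notBefore=%s notAfter=%s expired=%v\n",
		leaf.Subject, leaf.NotBefore.Format(time.RFC3339),
		leaf.NotAfter.Format(time.RFC3339), time.Now().After(leaf.NotAfter))
}
```

On CRC this pattern is typically seen when a cluster image is started after its baked-in certificates have lapsed; the kubelet keeps retrying the patch, as the following attempt shows, until certificate rotation brings the webhook back.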
event="NodeHasNoDiskPressure" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.103286 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.103349 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.103373 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:44Z","lastTransitionTime":"2025-11-25T14:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:44 crc kubenswrapper[4806]: E1125 14:53:44.116617 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e0f8c346-5c5c-4b9e-86cb-75f930f1dadc\\\",\\\"systemUUID\\\":\\\"c9d50cce-a734-4456-ad77-ec687d096f9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:44Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.120716 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.120749 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.120760 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.120775 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.120790 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:44Z","lastTransitionTime":"2025-11-25T14:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:44 crc kubenswrapper[4806]: E1125 14:53:44.139522 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e0f8c346-5c5c-4b9e-86cb-75f930f1dadc\\\",\\\"systemUUID\\\":\\\"c9d50cce-a734-4456-ad77-ec687d096f9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:44Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.143237 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.143273 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.143284 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.143300 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.143326 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:44Z","lastTransitionTime":"2025-11-25T14:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:44 crc kubenswrapper[4806]: E1125 14:53:44.156522 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e0f8c346-5c5c-4b9e-86cb-75f930f1dadc\\\",\\\"systemUUID\\\":\\\"c9d50cce-a734-4456-ad77-ec687d096f9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:44Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.161570 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.161613 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
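Every one of these retries dies at the same step: the node.network-node-identity.openshift.io webhook at 127.0.0.1:9743 presents a serving certificate whose notAfter (2025-08-24T17:21:41Z) is three months behind the node clock (2025-11-25T14:53:44Z), so x509 verification fails before the status patch is ever admitted. A minimal Go sketch of that verification step against the endpoint named in the log (assumptions: it runs on the node itself, and everything beyond the address and the expiry comparison is illustrative, not kubelet code):

    package main

    // certprobe: print the validity window of the certificate served by the
    // node-identity webhook. InsecureSkipVerify lets us inspect a certificate
    // that normal verification - the step failing in the log - would reject.
    import (
        "crypto/tls"
        "fmt"
        "time"
    )

    func main() {
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            fmt.Println("dial:", err)
            return
        }
        defer conn.Close()

        cert := conn.ConnectionState().PeerCertificates[0]
        fmt.Println("subject:  ", cert.Subject)
        fmt.Println("notBefore:", cert.NotBefore.UTC().Format(time.RFC3339))
        fmt.Println("notAfter: ", cert.NotAfter.UTC().Format(time.RFC3339))
        // The same comparison the verifier reports above.
        if now := time.Now().UTC(); now.After(cert.NotAfter) {
            fmt.Printf("expired: current time %s is after %s\n",
                now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
        }
    }

A window ending in August on a node whose clock reads late November is consistent with a cluster image that sat powered off past its certificates' expiry, which would explain why the failure is deterministic on every attempt rather than intermittent.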
event="NodeHasNoDiskPressure" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.161623 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.161661 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.161673 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:44Z","lastTransitionTime":"2025-11-25T14:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:44 crc kubenswrapper[4806]: E1125 14:53:44.173097 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e0f8c346-5c5c-4b9e-86cb-75f930f1dadc\\\",\\\"systemUUID\\\":\\\"c9d50cce-a734-4456-ad77-ec687d096f9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:44Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:44 crc kubenswrapper[4806]: E1125 14:53:44.173215 4806 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.174733 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
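kubelet_node_status.go:572 gives up with "update node status exceeds retry count" once a fixed number of consecutive patch attempts fail inside a single sync; the .143, .156, and .173 attempts above are the tail of that burst. A sketch of the bounded-retry shape (the constant 5 matches the upstream kubelet's nodeStatusUpdateRetry, but patchNodeStatus here is a stand-in for the real API call, and the whole loop restarts at the next node-status sync interval):

    package main

    import (
        "errors"
        "fmt"
    )

    // nodeStatusUpdateRetry mirrors the upstream kubelet constant.
    const nodeStatusUpdateRetry = 5

    // patchNodeStatus stands in for the PATCH to the API server; in this log
    // it always fails because the admission webhook's certificate is expired.
    func patchNodeStatus() error {
        return errors.New(`failed calling webhook "node.network-node-identity.openshift.io"`)
    }

    func updateNodeStatus() error {
        for i := 0; i < nodeStatusUpdateRetry; i++ {
            if err := patchNodeStatus(); err != nil {
                fmt.Println("Error updating node status, will retry:", err)
                continue
            }
            return nil
        }
        return errors.New("update node status exceeds retry count")
    }

    func main() {
        if err := updateNodeStatus(); err != nil {
            fmt.Println("Unable to update node status:", err)
        }
    }

Because the webhook rejects every attempt identically, each sync period produces exactly the pattern seen here: a burst of "will retry" errors followed by one "exceeds retry count" line, after which the heartbeat events resume.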
event="NodeHasSufficientMemory" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.174760 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.174774 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.174794 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.174806 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:44Z","lastTransitionTime":"2025-11-25T14:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.277649 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.277689 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.277700 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.277716 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.277728 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:44Z","lastTransitionTime":"2025-11-25T14:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.380461 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.380504 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.380515 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.380529 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.380539 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:44Z","lastTransitionTime":"2025-11-25T14:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.482805 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.483091 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.483159 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.483230 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.483284 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:44Z","lastTransitionTime":"2025-11-25T14:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.585621 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.585667 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.585680 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.585697 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.585711 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:44Z","lastTransitionTime":"2025-11-25T14:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.687948 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.688203 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.688269 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.688369 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.688459 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:44Z","lastTransitionTime":"2025-11-25T14:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.790282 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.790542 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.790635 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.790697 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.790754 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:44Z","lastTransitionTime":"2025-11-25T14:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.893519 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.893565 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.893575 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.893587 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.893596 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:44Z","lastTransitionTime":"2025-11-25T14:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.996287 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.996937 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.997037 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.997165 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:44 crc kubenswrapper[4806]: I1125 14:53:44.997344 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:44Z","lastTransitionTime":"2025-11-25T14:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:45 crc kubenswrapper[4806]: I1125 14:53:45.089046 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:53:45 crc kubenswrapper[4806]: I1125 14:53:45.089065 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:53:45 crc kubenswrapper[4806]: E1125 14:53:45.089588 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:53:45 crc kubenswrapper[4806]: E1125 14:53:45.089460 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:53:45 crc kubenswrapper[4806]: I1125 14:53:45.089165 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:53:45 crc kubenswrapper[4806]: E1125 14:53:45.089730 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f" Nov 25 14:53:45 crc kubenswrapper[4806]: I1125 14:53:45.099928 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:45 crc kubenswrapper[4806]: I1125 14:53:45.099979 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:45 crc kubenswrapper[4806]: I1125 14:53:45.100040 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:45 crc kubenswrapper[4806]: I1125 14:53:45.100063 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:45 crc kubenswrapper[4806]: I1125 14:53:45.100078 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:45Z","lastTransitionTime":"2025-11-25T14:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
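Every pod sync skipped here is blocked by the same gate that keeps the Ready condition False: the runtime reports NetworkReady=false because /etc/kubernetes/cni/net.d/ contains no network configuration yet. A diagnostic sketch of that directory check, assuming the conventional CNI loader rule of accepting .conf, .conflist, or .json files (the real readiness gate lives in the container runtime's CNI plugin manager, not in this code):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        const confDir = "/etc/kubernetes/cni/net.d"
        entries, err := os.ReadDir(confDir)
        if err != nil {
            fmt.Println("cannot read", confDir+":", err)
            return
        }
        found := false
        for _, e := range entries {
            // Conventional CNI config extensions.
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                fmt.Println("network config:", filepath.Join(confDir, e.Name()))
                found = true
            }
        }
        if !found {
            // The condition the kubelet keeps reporting while the cluster's
            // network provider has not yet written its configuration.
            fmt.Println("no CNI configuration file in", confDir+"/.",
                "Has your network provider started?")
        }
    }

Once the network provider comes up and drops a config into that directory, NetworkReady flips to true and queued sandboxes like the three pods above get started.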
[The same status block repeats at roughly 100 ms intervals, ten times, from 14:53:45.099928 through 14:53:46.028004, identical except for the advancing lastHeartbeatTime/lastTransitionTime values (14:53:45Z, then 14:53:46Z).]
Nov 25 14:53:46 crc kubenswrapper[4806]: I1125 14:53:46.088668 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 14:53:46 crc kubenswrapper[4806]: E1125 14:53:46.088833 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
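The condition={...} literal that setters.go:603 keeps logging is the Ready condition the kubelet is trying, and failing, to patch onto the Node object. A self-contained sketch that decodes one of the logged literals; the NodeCondition struct mirrors the JSON shape of k8s.io/api/core/v1.NodeCondition but is redeclared locally (an assumption made so the example compiles without Kubernetes dependencies):

    package main

    import (
        "encoding/json"
        "fmt"
        "time"
    )

    // NodeCondition mirrors the JSON shape of the v1 API type.
    type NodeCondition struct {
        Type               string    `json:"type"`
        Status             string    `json:"status"`
        LastHeartbeatTime  time.Time `json:"lastHeartbeatTime"`
        LastTransitionTime time.Time `json:"lastTransitionTime"`
        Reason             string    `json:"reason"`
        Message            string    `json:"message"`
    }

    func main() {
        // One of the literals logged above, verbatim.
        raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:46Z","lastTransitionTime":"2025-11-25T14:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`

        var c NodeCondition
        if err := json.Unmarshal([]byte(raw), &c); err != nil {
            fmt.Println("decode:", err)
            return
        }
        fmt.Printf("%s=%s since %s (%s): %s\n", c.Type, c.Status,
            c.LastTransitionTime.Format(time.RFC3339), c.Reason, c.Message)
    }

Note that lastTransitionTime advances with every heartbeat in this log, which is consistent with the status patches never landing; on a healthy node it would stay fixed while lastHeartbeatTime advances.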
[The status block repeats again, seven times, from 14:53:46.130786 through 14:53:46.746325.]
Nov 25 14:53:46 crc kubenswrapper[4806]: I1125 14:53:46.848819 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:53:46 crc kubenswrapper[4806]: I1125 14:53:46.848848 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:53:46 crc kubenswrapper[4806]: I1125 14:53:46.848855 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:53:46 crc kubenswrapper[4806]: I1125 14:53:46.848868 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:53:46 crc kubenswrapper[4806]: I1125 14:53:46.848878 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:46Z","lastTransitionTime":"2025-11-25T14:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:46 crc kubenswrapper[4806]: I1125 14:53:46.951243 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:46 crc kubenswrapper[4806]: I1125 14:53:46.951542 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:46 crc kubenswrapper[4806]: I1125 14:53:46.951628 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:46 crc kubenswrapper[4806]: I1125 14:53:46.951708 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:46 crc kubenswrapper[4806]: I1125 14:53:46.951792 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:46Z","lastTransitionTime":"2025-11-25T14:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.054776 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.054832 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.054847 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.054866 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.054876 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:47Z","lastTransitionTime":"2025-11-25T14:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.088407 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.088523 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:53:47 crc kubenswrapper[4806]: E1125 14:53:47.088575 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.088653 4806 util.go:30] "No sandbox for pod can be found. 
Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.088653 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 14:53:47 crc kubenswrapper[4806]: E1125 14:53:47.088672 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f"
Nov 25 14:53:47 crc kubenswrapper[4806]: E1125 14:53:47.088839 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.158476 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.158547 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.158567 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.158595 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.158614 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:47Z","lastTransitionTime":"2025-11-25T14:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.261250 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.261284 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.261300 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.261331 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.261344 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:47Z","lastTransitionTime":"2025-11-25T14:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.363457 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.363735 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.363829 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.363916 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.364004 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:47Z","lastTransitionTime":"2025-11-25T14:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.466574 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.466843 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.466975 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.467125 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.467211 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:47Z","lastTransitionTime":"2025-11-25T14:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.569554 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.569864 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.569982 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.570080 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.570167 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:47Z","lastTransitionTime":"2025-11-25T14:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.673144 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.673189 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.673205 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.673229 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.673247 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:47Z","lastTransitionTime":"2025-11-25T14:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.776079 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.776125 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.776135 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.776149 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.776157 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:47Z","lastTransitionTime":"2025-11-25T14:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.878132 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.878177 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.878187 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.878203 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.878213 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:47Z","lastTransitionTime":"2025-11-25T14:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.981624 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.981972 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.982075 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.982200 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:53:47 crc kubenswrapper[4806]: I1125 14:53:47.982295 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:47Z","lastTransitionTime":"2025-11-25T14:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.084046 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.084083 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.084094 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.084110 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.084121 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:48Z","lastTransitionTime":"2025-11-25T14:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.088207 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.107004 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"418e7888-2ed7-4d42-9100-527cff656249\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41d39b5cec8b13a29be4b5cc55488b94bcb5a8882baebe3dd1b4783116e0d745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a13f5b656df38c8be5558398c2d7b88f04a8c892edbd2cb06516aa94b3d4c71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c67538dcc66b71639bef32e5a359d899aeffb45958b74fce7d7c09f0874f59cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir
\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9fb4fcadb881a8d1f35ac8df4c8b7654c07ea0c5ab061ef99c1396b9c1e76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db9fb4fcadb881a8d1f35ac8df4c8b7654c07ea0c5ab061ef99c1396b9c1e76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:48Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.118549 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5caad4136ada520798bb17c547f59cb7ec6e5310c1e4737bf82dc97e1b992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:48Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.131165 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08d31864859b0077dd2e586c093eb3865537fd5567ad2b55ff61ba39cd3ee56e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b60a5efe869e1d377161b70ad123fbf1bc06d0089c96eaffb7ae8129a796c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.145247 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"228a80dc-3be5-4125-9d07-c8eb262a0eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fb517d9c8fca06d95f26ed65bbc78b53f6c555870af6ebd15afe2d5177f2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zt8m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:48Z is after 2025-08-24T17:21:41Z"
Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.156277 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jcq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9137647d-1ca0-49be-b482-8d04428e5325\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae80eda0f08f87065d8a30cf17d80f2be06e3b4a812798747aaed9cf6be1e82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dttnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jcq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:48Z is after 2025-08-24T17:21:41Z"
Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.165815 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lsrxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e22ad0-2903-4ed0-94ad-40d713f99c9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cz9hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cz9hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lsrxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:48Z is after 2025-08-24T17:21:41Z"
Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.177882 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a68684-5473-4e76-bdb6-bc38a8640fed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae860c1b5905b9a0634efdd777741a3837e3e3cae53fd559afcc23ceeeabbfe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb9640bb56460bb7d1ed40effc6fa3cada4ee5369407b11a14c4a364b5b5c44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47127ed86ce2e9c91b3876f54c95e0df6039f549eaff06c28a9922f0b660aa53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560a6654dd1da05abbf52bb38ec0fd2ea0108fabdc14d4775dc3f8334cc9a1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:48Z is after 2025-08-24T17:21:41Z"
Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.187042 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.187099 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.187109 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.187126 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.187135 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:48Z","lastTransitionTime":"2025-11-25T14:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.194279 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47d8c7e405f12dbf2de2c53f2dc4309d02278887a2666a9ec491d490c5ce3f89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:48Z is after 2025-08-24T17:21:41Z"
Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.211856 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:48Z is after 2025-08-24T17:21:41Z"
Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.224174 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5lhpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57550f59-b31f-43c1-adca-565f246d4083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b926db706959d0dcd4f9d9366913105ae3b70b4cb2da018ed9e778ceb84cc5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfx48\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5lhpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:48Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.241114 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-c
rc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:52:51.509929 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:52:51.511569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2779674049/tls.crt::/tmp/serving-cert-2779674049/tls.key\\\\\\\"\\\\nI1125 14:53:06.763374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:53:06.765539 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:53:06.765558 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:53:06.765579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:53:06.765584 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:53:06.773062 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:53:06.773084 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773089 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:53:06.773097 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:53:06.773100 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:53:06.773103 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:53:06.773329 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:53:06.776065 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:48Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.257244 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:48Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.269346 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:48Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.287499 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fff40d8-fd9f-49da-953f-89894b4ef3a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b70f432c6dd0b1c618daa1a0ab62a0cb297db0
59eca02b5eea3ba2ab687166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62b70f432c6dd0b1c618daa1a0ab62a0cb297db059eca02b5eea3ba2ab687166\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:53:35Z\\\",\\\"message\\\":\\\"ler.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1125 14:53:35.941874 6451 services_controller.go:360] Finished syncing service oauth-openshift on namespace openshift-authentication for network=default : 1.676717ms\\\\nI1125 14:53:35.941890 6451 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1125 14:53:35.941286 6451 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI1125 14:53:35.941952 6451 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1125 14:53:35.941307 6451 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-zt8m9\\\\nF1125 14:53:35.941961 6451 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling we\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-69wls_openshift-ovn-kubernetes(0fff40d8-fd9f-49da-953f-89894b4ef3a1)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69wls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:48Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.290526 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.290573 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.290584 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.290603 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.290618 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:48Z","lastTransitionTime":"2025-11-25T14:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.304475 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mwdqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cb67ea6e13645bb6513fcd0d90197317fd39f29f25deddf11e1110572e1986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbntn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mwdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:48Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.316391 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39baff20-1e9a-48b1-8872-155c5ad5931d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a777eee4ec1617baa0196c39314ff92a8111ee97fecdabcff51f8d23a732aed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://657c6813fefe5a305c46e51a3be90de3dda23db56498959efe41f94d05e8f54d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kclf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:48Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.328437 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2mmdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a29a188-9022-41a4-8f1f-4a3274ffe3f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dc124b078217075b4e38f7b144af41d258e32283392fe2909cf227a9902012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhrx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df53d52334de68ebecc9283d36720b9734a8a410af99e1ae3566979e52cb6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhrx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2mmdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:48Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.393793 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.393888 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.393915 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.393952 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.393980 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:48Z","lastTransitionTime":"2025-11-25T14:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.497281 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.497336 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.497346 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.497363 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.497374 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:48Z","lastTransitionTime":"2025-11-25T14:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.600771 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.600831 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.600842 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.600860 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.600870 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:48Z","lastTransitionTime":"2025-11-25T14:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.702717 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.702770 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.702782 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.702798 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.702810 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:48Z","lastTransitionTime":"2025-11-25T14:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.805189 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.805240 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.805253 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.805271 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.805281 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:48Z","lastTransitionTime":"2025-11-25T14:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.907740 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.907793 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.907805 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.907823 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:48 crc kubenswrapper[4806]: I1125 14:53:48.907835 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:48Z","lastTransitionTime":"2025-11-25T14:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.013515 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.013577 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.013590 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.013611 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.013625 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:49Z","lastTransitionTime":"2025-11-25T14:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.089199 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.089275 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.089377 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:53:49 crc kubenswrapper[4806]: E1125 14:53:49.089371 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:53:49 crc kubenswrapper[4806]: E1125 14:53:49.089517 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f" Nov 25 14:53:49 crc kubenswrapper[4806]: E1125 14:53:49.089573 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.116017 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.116075 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.116089 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.116109 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.116121 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:49Z","lastTransitionTime":"2025-11-25T14:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.217763 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.217812 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.217821 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.217834 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.217844 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:49Z","lastTransitionTime":"2025-11-25T14:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.320482 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.320520 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.320536 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.320552 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.320561 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:49Z","lastTransitionTime":"2025-11-25T14:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.423972 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.424490 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.424510 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.424537 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.424556 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:49Z","lastTransitionTime":"2025-11-25T14:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.527376 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.527425 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.527434 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.527452 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.527462 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:49Z","lastTransitionTime":"2025-11-25T14:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.630730 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.630780 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.630789 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.630808 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.630820 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:49Z","lastTransitionTime":"2025-11-25T14:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.733989 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.734038 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.734047 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.734065 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.734077 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:49Z","lastTransitionTime":"2025-11-25T14:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.836872 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.836914 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.836925 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.836939 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.836949 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:49Z","lastTransitionTime":"2025-11-25T14:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.939785 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.939850 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.939860 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.939884 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:49 crc kubenswrapper[4806]: I1125 14:53:49.939899 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:49Z","lastTransitionTime":"2025-11-25T14:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.042340 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.042391 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.042408 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.042426 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.042437 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:50Z","lastTransitionTime":"2025-11-25T14:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.088386 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:53:50 crc kubenswrapper[4806]: E1125 14:53:50.088651 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.144149 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.144205 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.144214 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.144227 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.144237 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:50Z","lastTransitionTime":"2025-11-25T14:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.246499 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.246563 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.246576 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.246600 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.246617 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:50Z","lastTransitionTime":"2025-11-25T14:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.348865 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.348902 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.348912 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.348926 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.348935 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:50Z","lastTransitionTime":"2025-11-25T14:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.453193 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.453270 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.453280 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.453304 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.453334 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:50Z","lastTransitionTime":"2025-11-25T14:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.555865 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.555907 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.555918 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.555934 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.555947 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:50Z","lastTransitionTime":"2025-11-25T14:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.658456 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.658501 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.658515 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.658531 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.658542 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:50Z","lastTransitionTime":"2025-11-25T14:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.761025 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.761059 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.761069 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.761084 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.761096 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:50Z","lastTransitionTime":"2025-11-25T14:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.864295 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.864347 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.864357 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.864371 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.864381 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:50Z","lastTransitionTime":"2025-11-25T14:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.967040 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.967089 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.967102 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.967118 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:50 crc kubenswrapper[4806]: I1125 14:53:50.967127 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:50Z","lastTransitionTime":"2025-11-25T14:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.069655 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.069708 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.069720 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.069736 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.069747 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:51Z","lastTransitionTime":"2025-11-25T14:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.088766 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.088807 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:53:51 crc kubenswrapper[4806]: E1125 14:53:51.088929 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.089015 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:53:51 crc kubenswrapper[4806]: E1125 14:53:51.089208 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:53:51 crc kubenswrapper[4806]: E1125 14:53:51.089425 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f" Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.172549 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.172597 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.172610 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.172626 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.172636 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:51Z","lastTransitionTime":"2025-11-25T14:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.275000 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.275047 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.275065 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.275086 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.275098 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:51Z","lastTransitionTime":"2025-11-25T14:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.376870 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.376972 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.376987 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.377004 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.377018 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:51Z","lastTransitionTime":"2025-11-25T14:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.479807 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.479841 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.479850 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.479862 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.479870 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:51Z","lastTransitionTime":"2025-11-25T14:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.582260 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.582305 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.582338 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.582355 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.582366 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:51Z","lastTransitionTime":"2025-11-25T14:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.688127 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.688185 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.688198 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.688215 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.688233 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:51Z","lastTransitionTime":"2025-11-25T14:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.790912 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.790966 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.790983 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.791000 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.791010 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:51Z","lastTransitionTime":"2025-11-25T14:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.893527 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.893586 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.893595 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.893608 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.893617 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:51Z","lastTransitionTime":"2025-11-25T14:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.995972 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.996016 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.996027 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.996046 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:51 crc kubenswrapper[4806]: I1125 14:53:51.996057 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:51Z","lastTransitionTime":"2025-11-25T14:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:52 crc kubenswrapper[4806]: I1125 14:53:52.089403 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:53:52 crc kubenswrapper[4806]: E1125 14:53:52.089566 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:53:52 crc kubenswrapper[4806]: I1125 14:53:52.098902 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:52 crc kubenswrapper[4806]: I1125 14:53:52.098940 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:52 crc kubenswrapper[4806]: I1125 14:53:52.098951 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:52 crc kubenswrapper[4806]: I1125 14:53:52.098969 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:52 crc kubenswrapper[4806]: I1125 14:53:52.098981 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:52Z","lastTransitionTime":"2025-11-25T14:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:52 crc kubenswrapper[4806]: I1125 14:53:52.201638 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:52 crc kubenswrapper[4806]: I1125 14:53:52.201670 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:52 crc kubenswrapper[4806]: I1125 14:53:52.201677 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:52 crc kubenswrapper[4806]: I1125 14:53:52.201690 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:52 crc kubenswrapper[4806]: I1125 14:53:52.201699 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:52Z","lastTransitionTime":"2025-11-25T14:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:52 crc kubenswrapper[4806]: I1125 14:53:52.304594 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:52 crc kubenswrapper[4806]: I1125 14:53:52.304638 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:52 crc kubenswrapper[4806]: I1125 14:53:52.304647 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:52 crc kubenswrapper[4806]: I1125 14:53:52.304665 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:52 crc kubenswrapper[4806]: I1125 14:53:52.304678 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:52Z","lastTransitionTime":"2025-11-25T14:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:52 crc kubenswrapper[4806]: I1125 14:53:52.407891 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:52 crc kubenswrapper[4806]: I1125 14:53:52.407942 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:52 crc kubenswrapper[4806]: I1125 14:53:52.407955 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:52 crc kubenswrapper[4806]: I1125 14:53:52.407978 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:52 crc kubenswrapper[4806]: I1125 14:53:52.407995 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:52Z","lastTransitionTime":"2025-11-25T14:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:52 crc kubenswrapper[4806]: I1125 14:53:52.510032 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:52 crc kubenswrapper[4806]: I1125 14:53:52.510092 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:52 crc kubenswrapper[4806]: I1125 14:53:52.510105 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:52 crc kubenswrapper[4806]: I1125 14:53:52.510125 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:52 crc kubenswrapper[4806]: I1125 14:53:52.510139 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:52Z","lastTransitionTime":"2025-11-25T14:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:52 crc kubenswrapper[4806]: I1125 14:53:52.612172 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:52 crc kubenswrapper[4806]: I1125 14:53:52.612224 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:52 crc kubenswrapper[4806]: I1125 14:53:52.612236 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:52 crc kubenswrapper[4806]: I1125 14:53:52.612253 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:52 crc kubenswrapper[4806]: I1125 14:53:52.612263 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:52Z","lastTransitionTime":"2025-11-25T14:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:52 crc kubenswrapper[4806]: I1125 14:53:52.714657 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:52 crc kubenswrapper[4806]: I1125 14:53:52.714702 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:52 crc kubenswrapper[4806]: I1125 14:53:52.714712 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:52 crc kubenswrapper[4806]: I1125 14:53:52.714728 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:52 crc kubenswrapper[4806]: I1125 14:53:52.714742 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:52Z","lastTransitionTime":"2025-11-25T14:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:52 crc kubenswrapper[4806]: I1125 14:53:52.817679 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:52 crc kubenswrapper[4806]: I1125 14:53:52.817734 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:52 crc kubenswrapper[4806]: I1125 14:53:52.817748 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:52 crc kubenswrapper[4806]: I1125 14:53:52.817770 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:52 crc kubenswrapper[4806]: I1125 14:53:52.817785 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:52Z","lastTransitionTime":"2025-11-25T14:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:52 crc kubenswrapper[4806]: I1125 14:53:52.920740 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:52 crc kubenswrapper[4806]: I1125 14:53:52.920781 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:52 crc kubenswrapper[4806]: I1125 14:53:52.920792 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:52 crc kubenswrapper[4806]: I1125 14:53:52.920809 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:52 crc kubenswrapper[4806]: I1125 14:53:52.920820 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:52Z","lastTransitionTime":"2025-11-25T14:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.023810 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.023863 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.023872 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.023893 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.023904 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:53Z","lastTransitionTime":"2025-11-25T14:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.088987 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.089062 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.089018 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:53:53 crc kubenswrapper[4806]: E1125 14:53:53.089129 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:53:53 crc kubenswrapper[4806]: E1125 14:53:53.089239 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f" Nov 25 14:53:53 crc kubenswrapper[4806]: E1125 14:53:53.089506 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.126390 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.126429 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.126440 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.126458 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.126472 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:53Z","lastTransitionTime":"2025-11-25T14:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.229475 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.229551 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.229560 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.229575 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.229585 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:53Z","lastTransitionTime":"2025-11-25T14:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.331645 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.331686 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.331697 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.331731 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.331742 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:53Z","lastTransitionTime":"2025-11-25T14:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.433670 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.433711 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.433721 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.433735 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.433745 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:53Z","lastTransitionTime":"2025-11-25T14:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.536184 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.536231 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.536240 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.536254 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.536262 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:53Z","lastTransitionTime":"2025-11-25T14:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.638954 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.639004 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.639017 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.639035 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.639045 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:53Z","lastTransitionTime":"2025-11-25T14:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.741086 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.741129 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.741138 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.741152 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.741165 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:53Z","lastTransitionTime":"2025-11-25T14:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.842960 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.843000 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.843010 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.843027 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.843038 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:53Z","lastTransitionTime":"2025-11-25T14:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.945799 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.945860 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.945871 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.945887 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:53 crc kubenswrapper[4806]: I1125 14:53:53.945898 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:53Z","lastTransitionTime":"2025-11-25T14:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.048076 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.048136 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.048147 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.048164 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.048177 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:54Z","lastTransitionTime":"2025-11-25T14:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.089228 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:53:54 crc kubenswrapper[4806]: E1125 14:53:54.089722 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.090016 4806 scope.go:117] "RemoveContainer" containerID="62b70f432c6dd0b1c618daa1a0ab62a0cb297db059eca02b5eea3ba2ab687166" Nov 25 14:53:54 crc kubenswrapper[4806]: E1125 14:53:54.090266 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-69wls_openshift-ovn-kubernetes(0fff40d8-fd9f-49da-953f-89894b4ef3a1)\"" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.150357 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.150396 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.150405 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.150420 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.150435 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:54Z","lastTransitionTime":"2025-11-25T14:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.252392 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.252624 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.252633 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.252646 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.252654 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:54Z","lastTransitionTime":"2025-11-25T14:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.354995 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.355043 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.355055 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.355071 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.355080 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:54Z","lastTransitionTime":"2025-11-25T14:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.456730 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.456792 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.456800 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.456816 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.456826 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:54Z","lastTransitionTime":"2025-11-25T14:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.534626 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.534670 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.534678 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.534694 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.534704 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:54Z","lastTransitionTime":"2025-11-25T14:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:54 crc kubenswrapper[4806]: E1125 14:53:54.547801 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e0f8c346-5c5c-4b9e-86cb-75f930f1dadc\\\",\\\"systemUUID\\\":\\\"c9d50cce-a734-4456-ad77-ec687d096f9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:54Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.551532 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.551575 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.551586 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.551606 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.551618 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:54Z","lastTransitionTime":"2025-11-25T14:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:54 crc kubenswrapper[4806]: E1125 14:53:54.565189 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e0f8c346-5c5c-4b9e-86cb-75f930f1dadc\\\",\\\"systemUUID\\\":\\\"c9d50cce-a734-4456-ad77-ec687d096f9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:54Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.568492 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.568514 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.568521 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.568536 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.568546 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:54Z","lastTransitionTime":"2025-11-25T14:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:54 crc kubenswrapper[4806]: E1125 14:53:54.579900 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e0f8c346-5c5c-4b9e-86cb-75f930f1dadc\\\",\\\"systemUUID\\\":\\\"c9d50cce-a734-4456-ad77-ec687d096f9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:54Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.584346 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.584392 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.584403 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.584421 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.584432 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:54Z","lastTransitionTime":"2025-11-25T14:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:54 crc kubenswrapper[4806]: E1125 14:53:54.596713 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e0f8c346-5c5c-4b9e-86cb-75f930f1dadc\\\",\\\"systemUUID\\\":\\\"c9d50cce-a734-4456-ad77-ec687d096f9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:54Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.600900 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.600937 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.600963 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.600980 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.600991 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:54Z","lastTransitionTime":"2025-11-25T14:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:54 crc kubenswrapper[4806]: E1125 14:53:54.614094 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:53:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e0f8c346-5c5c-4b9e-86cb-75f930f1dadc\\\",\\\"systemUUID\\\":\\\"c9d50cce-a734-4456-ad77-ec687d096f9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:54Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:54 crc kubenswrapper[4806]: E1125 14:53:54.614276 4806 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.615620 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.615659 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.615670 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.615689 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.615701 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:54Z","lastTransitionTime":"2025-11-25T14:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.717925 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.717980 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.717991 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.718013 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.718028 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:54Z","lastTransitionTime":"2025-11-25T14:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.820639 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.820683 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.820695 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.820715 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.820727 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:54Z","lastTransitionTime":"2025-11-25T14:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.922765 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.922808 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.922818 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.922831 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.922841 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:54Z","lastTransitionTime":"2025-11-25T14:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:54 crc kubenswrapper[4806]: I1125 14:53:54.999891 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/49e22ad0-2903-4ed0-94ad-40d713f99c9f-metrics-certs\") pod \"network-metrics-daemon-lsrxh\" (UID: \"49e22ad0-2903-4ed0-94ad-40d713f99c9f\") " pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:53:55 crc kubenswrapper[4806]: E1125 14:53:55.000081 4806 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 14:53:55 crc kubenswrapper[4806]: E1125 14:53:55.000175 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49e22ad0-2903-4ed0-94ad-40d713f99c9f-metrics-certs podName:49e22ad0-2903-4ed0-94ad-40d713f99c9f nodeName:}" failed. No retries permitted until 2025-11-25 14:54:27.000153264 +0000 UTC m=+99.652295685 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/49e22ad0-2903-4ed0-94ad-40d713f99c9f-metrics-certs") pod "network-metrics-daemon-lsrxh" (UID: "49e22ad0-2903-4ed0-94ad-40d713f99c9f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.024836 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.024879 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.024889 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.024902 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.024912 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:55Z","lastTransitionTime":"2025-11-25T14:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.088559 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.088591 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.088559 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:53:55 crc kubenswrapper[4806]: E1125 14:53:55.088685 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f" Nov 25 14:53:55 crc kubenswrapper[4806]: E1125 14:53:55.088749 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:53:55 crc kubenswrapper[4806]: E1125 14:53:55.088799 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.127642 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.127686 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.127701 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.127720 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.127731 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:55Z","lastTransitionTime":"2025-11-25T14:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.229735 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.229774 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.229785 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.229800 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.229810 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:55Z","lastTransitionTime":"2025-11-25T14:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.332599 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.332643 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.332653 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.332668 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.332680 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:55Z","lastTransitionTime":"2025-11-25T14:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.435570 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.435670 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.435701 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.435739 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.435764 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:55Z","lastTransitionTime":"2025-11-25T14:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.538784 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.538856 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.538879 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.538911 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.538936 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:55Z","lastTransitionTime":"2025-11-25T14:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.642211 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.642271 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.642290 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.642346 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.642369 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:55Z","lastTransitionTime":"2025-11-25T14:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.745719 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.745789 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.745809 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.745839 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.745859 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:55Z","lastTransitionTime":"2025-11-25T14:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.849353 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.849398 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.849410 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.849437 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.849452 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:55Z","lastTransitionTime":"2025-11-25T14:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.952230 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.952299 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.952335 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.952365 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:55 crc kubenswrapper[4806]: I1125 14:53:55.952380 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:55Z","lastTransitionTime":"2025-11-25T14:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.055512 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.055582 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.055593 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.055613 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.055626 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:56Z","lastTransitionTime":"2025-11-25T14:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.088608 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:53:56 crc kubenswrapper[4806]: E1125 14:53:56.088808 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.159670 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.159724 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.159740 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.159760 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.159774 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:56Z","lastTransitionTime":"2025-11-25T14:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.261920 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.261961 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.261972 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.261986 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.262003 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:56Z","lastTransitionTime":"2025-11-25T14:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.363753 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.363798 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.363809 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.363830 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.363842 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:56Z","lastTransitionTime":"2025-11-25T14:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.465564 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.465954 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.466113 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.466383 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.466556 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:56Z","lastTransitionTime":"2025-11-25T14:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.569656 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.569699 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.569708 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.569725 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.569735 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:56Z","lastTransitionTime":"2025-11-25T14:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.672454 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.672532 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.672550 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.672585 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.672622 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:56Z","lastTransitionTime":"2025-11-25T14:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.776693 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.776755 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.776766 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.776786 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.776799 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:56Z","lastTransitionTime":"2025-11-25T14:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.879369 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.879413 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.879423 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.879447 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.879458 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:56Z","lastTransitionTime":"2025-11-25T14:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.981763 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.981833 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.981842 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.981859 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:56 crc kubenswrapper[4806]: I1125 14:53:56.981870 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:56Z","lastTransitionTime":"2025-11-25T14:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.084002 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.084328 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.084429 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.084535 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.084604 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:57Z","lastTransitionTime":"2025-11-25T14:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.088388 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.088412 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.088469 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:53:57 crc kubenswrapper[4806]: E1125 14:53:57.088536 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f" Nov 25 14:53:57 crc kubenswrapper[4806]: E1125 14:53:57.088614 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:53:57 crc kubenswrapper[4806]: E1125 14:53:57.088687 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.187438 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.187482 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.187492 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.187508 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.187520 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:57Z","lastTransitionTime":"2025-11-25T14:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.290031 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.290415 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.290513 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.290881 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.290968 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:57Z","lastTransitionTime":"2025-11-25T14:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.393711 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.393751 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.393761 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.393786 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.393794 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:57Z","lastTransitionTime":"2025-11-25T14:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.467420 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mwdqt_8b7ddd20-62b7-4687-9982-83cf1cbac3ab/kube-multus/0.log" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.467687 4806 generic.go:334] "Generic (PLEG): container finished" podID="8b7ddd20-62b7-4687-9982-83cf1cbac3ab" containerID="a0cb67ea6e13645bb6513fcd0d90197317fd39f29f25deddf11e1110572e1986" exitCode=1 Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.467803 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mwdqt" event={"ID":"8b7ddd20-62b7-4687-9982-83cf1cbac3ab","Type":"ContainerDied","Data":"a0cb67ea6e13645bb6513fcd0d90197317fd39f29f25deddf11e1110572e1986"} Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.468360 4806 scope.go:117] "RemoveContainer" containerID="a0cb67ea6e13645bb6513fcd0d90197317fd39f29f25deddf11e1110572e1986" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.490098 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fff40d8-fd9f-49da-953f-89894b4ef3a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b70f432c6dd0b1c618daa1a0ab62a0cb297db0
59eca02b5eea3ba2ab687166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62b70f432c6dd0b1c618daa1a0ab62a0cb297db059eca02b5eea3ba2ab687166\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:53:35Z\\\",\\\"message\\\":\\\"ler.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1125 14:53:35.941874 6451 services_controller.go:360] Finished syncing service oauth-openshift on namespace openshift-authentication for network=default : 1.676717ms\\\\nI1125 14:53:35.941890 6451 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1125 14:53:35.941286 6451 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI1125 14:53:35.941952 6451 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1125 14:53:35.941307 6451 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-zt8m9\\\\nF1125 14:53:35.941961 6451 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling we\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-69wls_openshift-ovn-kubernetes(0fff40d8-fd9f-49da-953f-89894b4ef3a1)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69wls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:57Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.496153 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.496354 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.496445 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.496509 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.496572 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:57Z","lastTransitionTime":"2025-11-25T14:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.505065 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:52:51.509929 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:52:51.511569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2779674049/tls.crt::/tmp/serving-cert-2779674049/tls.key\\\\\\\"\\\\nI1125 14:53:06.763374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:53:06.765539 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:53:06.765558 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:53:06.765579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:53:06.765584 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:53:06.773062 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:53:06.773084 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773089 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:53:06.773097 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:53:06.773100 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:53:06.773103 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:53:06.773329 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:53:06.776065 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:57Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.516879 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:57Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.527346 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:57Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.537578 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2mmdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a29a188-9022-41a4-8f1f-4a3274ffe3f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dc124b078217075b4e38f7b144af41d258e32283392fe2909cf227a9902012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhrx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df53d52334de68ebecc9283d36720b9734a8a410af99e1ae3566979e52cb6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhrx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2mmdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:57Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.549002 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mwdqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cb67ea6e13645bb6513fcd0d90197317fd39f29f25deddf11e1110572e1986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0cb67ea6e13645bb6513fcd0d90197317fd39f29f25deddf11e1110572e1986\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:53:56Z\\\",\\\"message\\\":\\\"2025-11-25T14:53:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eb5b5c65-6dbe-4817-a628-cca8eb1fda77\\\\n2025-11-25T14:53:11+00:00 [cnibincopy] Successfully moved files in 
/host/opt/cni/bin/upgrade_eb5b5c65-6dbe-4817-a628-cca8eb1fda77 to /host/opt/cni/bin/\\\\n2025-11-25T14:53:11Z [verbose] multus-daemon started\\\\n2025-11-25T14:53:11Z [verbose] Readiness Indicator file check\\\\n2025-11-25T14:53:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbntn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mwdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:57Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.559096 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39baff20-1e9a-48b1-8872-155c5ad5931d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a777eee4ec1617baa0196c39314ff92a8111ee97fecdabcff51f8d23a732aed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://657c6813fefe5a305c46e51a3be90de3dda23db56498959efe41f94d05e8f54d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kclf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:57Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.570744 4806 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08d31864859b0077dd2e586c093eb3865537fd5567ad2b55ff61ba39cd3ee56e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b60a5efe869e1d377161b70ad123fbf1bc06d0089c96eaffb7ae8129a796c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:57Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.584004 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"228a80dc-3be5-4125-9d07-c8eb262a0eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fb517d9c8fca06d95f26ed65bbc78b53f6c555870af6ebd15afe2d5177f2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zt8m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:57Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.595044 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jcq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9137647d-1ca0-49be-b482-8d04428e5325\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae80eda0f08f87065d8a30cf17d80f2be06e3b4a812798747aaed9cf6be1e82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dttnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jcq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:57Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.598703 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.598743 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.598752 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.598767 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.598775 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:57Z","lastTransitionTime":"2025-11-25T14:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.606701 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lsrxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e22ad0-2903-4ed0-94ad-40d713f99c9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cz9hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cz9hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lsrxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:57Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.619650 4806 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"418e7888-2ed7-4d42-9100-527cff656249\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41d39b5cec8b13a29be4b5cc55488b94bcb5a8882baebe3dd1b4783116e0d745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a13f5b656df38c8be5558398c2d7b88f04a8c892edbd2cb06516aa94b3d4c71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c67538dcc66b71639bef32e5a359d899aeffb45958b74fce7d7c09f0874f59cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"c
ontainerID\\\":\\\"cri-o://db9fb4fcadb881a8d1f35ac8df4c8b7654c07ea0c5ab061ef99c1396b9c1e76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db9fb4fcadb881a8d1f35ac8df4c8b7654c07ea0c5ab061ef99c1396b9c1e76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:57Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.632064 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5caad4136ada520798bb17c547f59cb7ec6e5310c1e4737bf82dc97e1b992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:57Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.642581 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5lhpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57550f59-b31f-43c1-adca-565f246d4083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b926db706959d0dcd4f9d9366913105ae3b70b4cb2da018ed9e778ceb84cc5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfx48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5lhpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:57Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.654933 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a68684-5473-4e76-bdb6-bc38a8640fed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae860c1b5905b9a0634efdd777741a3837e3e3cae53fd559afcc23ceeeabbfe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb9640bb56460bb7d1ed40effc6fa3cada4ee5369407b11a14c4a364b5b5c44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47127ed86ce2e9c91b3876f54c95e0df6039f549eaff06c28a9922f0b660aa53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560a6654dd1da05abbf52bb38ec0fd2ea0108fabdc14d4775dc3f8334cc9a1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:57Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.667033 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47d8c7e405f12dbf2de2c53f2dc4309d02278887a2666a9ec491d490c5ce3f89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-25T14:53:57Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.680759 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:57Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.700692 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.700740 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.700752 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.700776 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.700792 4806 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:57Z","lastTransitionTime":"2025-11-25T14:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.802752 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.802785 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.802793 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.802806 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.802814 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:57Z","lastTransitionTime":"2025-11-25T14:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.904793 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.904836 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.904845 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.904860 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:57 crc kubenswrapper[4806]: I1125 14:53:57.904869 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:57Z","lastTransitionTime":"2025-11-25T14:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.007625 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.007664 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.007675 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.007694 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.007705 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:58Z","lastTransitionTime":"2025-11-25T14:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.089229 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:53:58 crc kubenswrapper[4806]: E1125 14:53:58.089481 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.101210 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2mmdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a29a188-9022-41a4-8f1f-4a3274ffe3f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dc124b078217075b4e38f7b144af41d258e32283392fe2909cf227a9902012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhrx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df53d52334de68ebecc9283d36720b9734a8a410af99e1ae3566979e52cb6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhrx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:22Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2mmdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.109632 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.109666 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.109674 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.109688 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.109698 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:58Z","lastTransitionTime":"2025-11-25T14:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.113472 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mwdqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0cb67ea6e13645bb6513fcd0d90197317fd39f29f25deddf11e1110572e1986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0cb67ea6e13645bb6513fcd0d90197317fd39f29f25deddf11e1110572e1986\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:53:56Z\\\",\\\"message\\\":\\\"2025-11-25T14:53:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eb5b5c65-6dbe-4817-a628-cca8eb1fda77\\\\n2025-11-25T14:53:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eb5b5c65-6dbe-4817-a628-cca8eb1fda77 to /host/opt/cni/bin/\\\\n2025-11-25T14:53:11Z [verbose] multus-daemon started\\\\n2025-11-25T14:53:11Z [verbose] Readiness Indicator file check\\\\n2025-11-25T14:53:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbntn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-mwdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.126533 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39baff20-1e9a-48b1-8872-155c5ad5931d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a777eee4ec1617baa0196c39314ff92a8111ee97fecdabcff51f8d23a732aed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://657c6813fefe5a305c46e51a3be90de3dda23db56498959efe41f94d05e8f54d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kclf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.138502 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08d31864859b0077dd2e586c093eb3865537fd5567ad2b55ff61ba39cd3ee56e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b60a5efe869e1d377161b70ad123fbf1bc06d0089c96eaffb7ae8129a796c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.152327 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"228a80dc-3be5-4125-9d07-c8eb262a0eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fb517d9c8fca06d95f26ed65bbc78b53f6c555870af6ebd15afe2d5177f2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7
n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:14Z\\\",\\\"reason\\\"
:\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zt8m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-25T14:53:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.162679 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jcq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9137647d-1ca0-49be-b482-8d04428e5325\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae80eda0f08f87065d8a30cf17d80f2be06e3b4a812798747aaed9cf6be1e82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dttnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jcq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.172046 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lsrxh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e22ad0-2903-4ed0-94ad-40d713f99c9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cz9hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cz9hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lsrxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.182678 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"418e7888-2ed7-4d42-9100-527cff656249\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41d39b5cec8b13a29be4b5cc55488b94bcb5a8882baebe3dd1b4783116e0d745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a13f5b656df38c8be5558398c2d7b88f04a8c892edbd2cb06516aa94b3d4c71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c67538dcc66b71639bef32e5a359d899aeffb45958b74fce7d7c09f0874f59cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9fb4fcadb881a8d1f35ac8df4c8b7654c07ea0c5ab061ef99c1396b9c1e76b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db9fb4fcadb881a8d1f35ac8df4c8b7654c07ea0c5ab061ef99c1396b9c1e76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.195718 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5caad4136ada520798bb17c547f59cb7ec6e5310c1e4737bf82dc97e1b992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:58Z is after 
2025-08-24T17:21:41Z" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.206950 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5lhpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57550f59-b31f-43c1-adca-565f246d4083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b926db706959d0dcd4f9d9366913105ae3b70b4cb2da018ed9e778ceb84cc5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfx48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5lhpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.211858 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.212018 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.212089 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.212160 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.212260 4806 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:58Z","lastTransitionTime":"2025-11-25T14:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.219683 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a68684-5473-4e76-bdb6-bc38a8640fed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae860c1b5905b9a0634efdd777741a3837e3e3cae53fd559afcc23ceeeabbfe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb9640bb56460bb7d1ed40effc6fa3cada4ee5369407b11a14c4a364b5b5c44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47127ed86ce2e9c91b3876f54c95e0df6039f549eaff06c28a9922f0b660aa53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560a6654dd1da05abbf52bb38ec0fd2ea0108fabdc14d4775dc3f8334cc9a1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.231003 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47d8c7e405f12dbf2de2c53f2dc4309d02278887a2666a9ec491d490c5ce3f89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.243457 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.263004 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fff40d8-fd9f-49da-953f-89894b4ef3a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b70f432c6dd0b1c618daa1a0ab62a0cb297db0
59eca02b5eea3ba2ab687166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62b70f432c6dd0b1c618daa1a0ab62a0cb297db059eca02b5eea3ba2ab687166\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:53:35Z\\\",\\\"message\\\":\\\"ler.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1125 14:53:35.941874 6451 services_controller.go:360] Finished syncing service oauth-openshift on namespace openshift-authentication for network=default : 1.676717ms\\\\nI1125 14:53:35.941890 6451 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1125 14:53:35.941286 6451 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI1125 14:53:35.941952 6451 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1125 14:53:35.941307 6451 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-zt8m9\\\\nF1125 14:53:35.941961 6451 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling we\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-69wls_openshift-ovn-kubernetes(0fff40d8-fd9f-49da-953f-89894b4ef3a1)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69wls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.279287 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:52:51.509929 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:52:51.511569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2779674049/tls.crt::/tmp/serving-cert-2779674049/tls.key\\\\\\\"\\\\nI1125 14:53:06.763374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:53:06.765539 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:53:06.765558 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:53:06.765579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:53:06.765584 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:53:06.773062 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:53:06.773084 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773089 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:53:06.773097 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:53:06.773100 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:53:06.773103 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:53:06.773329 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:53:06.776065 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.293150 4806 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.306509 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.314423 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.314513 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.314527 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.314545 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.314558 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:58Z","lastTransitionTime":"2025-11-25T14:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.416573 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.416626 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.416637 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.416655 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.416664 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:58Z","lastTransitionTime":"2025-11-25T14:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.473192 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mwdqt_8b7ddd20-62b7-4687-9982-83cf1cbac3ab/kube-multus/0.log" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.473258 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mwdqt" event={"ID":"8b7ddd20-62b7-4687-9982-83cf1cbac3ab","Type":"ContainerStarted","Data":"6a4c6d7aeb19206fd79e28c558467bda58d58c4118d27bb9aeb9de68a55a67a8"} Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.487726 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.509666 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fff40d8-fd9f-49da-953f-89894b4ef3a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62b70f432c6dd0b1c618daa1a0ab62a0cb297db0
59eca02b5eea3ba2ab687166\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62b70f432c6dd0b1c618daa1a0ab62a0cb297db059eca02b5eea3ba2ab687166\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:53:35Z\\\",\\\"message\\\":\\\"ler.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1125 14:53:35.941874 6451 services_controller.go:360] Finished syncing service oauth-openshift on namespace openshift-authentication for network=default : 1.676717ms\\\\nI1125 14:53:35.941890 6451 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1125 14:53:35.941286 6451 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI1125 14:53:35.941952 6451 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1125 14:53:35.941307 6451 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-zt8m9\\\\nF1125 14:53:35.941961 6451 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling we\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-69wls_openshift-ovn-kubernetes(0fff40d8-fd9f-49da-953f-89894b4ef3a1)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69wls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.518585 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.518651 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.518671 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.518689 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.518702 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:58Z","lastTransitionTime":"2025-11-25T14:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.528486 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:52:51.509929 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:52:51.511569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2779674049/tls.crt::/tmp/serving-cert-2779674049/tls.key\\\\\\\"\\\\nI1125 14:53:06.763374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:53:06.765539 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:53:06.765558 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:53:06.765579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:53:06.765584 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:53:06.773062 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:53:06.773084 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773089 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:53:06.773097 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:53:06.773100 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:53:06.773103 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:53:06.773329 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:53:06.776065 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.539504 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.551641 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39baff20-1e9a-48b1-8872-155c5ad5931d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a777eee4ec1617baa0196c39314ff92a8111ee97fecdabcff51f8d23a732aed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://657c6813fefe5a305c46e51a3be90de3dda23db56498959efe41f94d05e8f54d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kclf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.562614 4806 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2mmdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a29a188-9022-41a4-8f1f-4a3274ffe3f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dc124b078217075b4e38f7b144af41d258e32283392fe2909cf227a9902012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhrx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df53d52334de68ebecc9283d36720b9734a8a410af99e1ae3566979e52cb6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhrx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2mmdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.573591 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mwdqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a4c6d7aeb19206fd79e28c558467bda58d58c4118d27bb9aeb9de68a55a67a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0cb67ea6e13645bb6513fcd0d90197317fd39f29f25deddf11e1110572e1986\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:53:56Z\\\",\\\"message\\\":\\\"2025-11-25T14:53:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eb5b5c65-6dbe-4817-a628-cca8eb1fda77\\\\n2025-11-25T14:53:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eb5b5c65-6dbe-4817-a628-cca8eb1fda77 to /host/opt/cni/bin/\\\\n2025-11-25T14:53:11Z [verbose] multus-daemon started\\\\n2025-11-25T14:53:11Z [verbose] Readiness Indicator file check\\\\n2025-11-25T14:53:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbntn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mwdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.584276 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5caad4136ada520798bb17c547f59cb7ec6e5310c1e4737bf82dc97e1b992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.595825 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08d31864859b0077dd2e586c093eb3865537fd5567ad2b55ff61ba39cd3ee56e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b60a5efe869e1d377161b70ad123fbf1bc06d0089c96eaffb7ae8129a796c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.609541 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"228a80dc-3be5-4125-9d07-c8eb262a0eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fb517d9c8fca06d95f26ed65bbc78b53f6c555870af6ebd15afe2d5177f2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zt8m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.619039 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jcq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9137647d-1ca0-49be-b482-8d04428e5325\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae80eda0f08f87065d8a30cf17d80f2be06e3b4a812798747aaed9cf6be1e82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dttnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jcq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.621496 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.621523 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.621533 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.621546 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.621556 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:58Z","lastTransitionTime":"2025-11-25T14:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.629451 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lsrxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e22ad0-2903-4ed0-94ad-40d713f99c9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cz9hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cz9hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lsrxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.640684 4806 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"418e7888-2ed7-4d42-9100-527cff656249\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41d39b5cec8b13a29be4b5cc55488b94bcb5a8882baebe3dd1b4783116e0d745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a13f5b656df38c8be5558398c2d7b88f04a8c892edbd2cb06516aa94b3d4c71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c67538dcc66b71639bef32e5a359d899aeffb45958b74fce7d7c09f0874f59cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"c
ontainerID\\\":\\\"cri-o://db9fb4fcadb881a8d1f35ac8df4c8b7654c07ea0c5ab061ef99c1396b9c1e76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db9fb4fcadb881a8d1f35ac8df4c8b7654c07ea0c5ab061ef99c1396b9c1e76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.651977 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.662335 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5lhpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57550f59-b31f-43c1-adca-565f246d4083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b926db706959d0dcd4f9d9366913105ae3b70b4cb2da018ed9e778ceb84cc5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfx48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5lhpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.673823 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a68684-5473-4e76-bdb6-bc38a8640fed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae860c1b5905b9a0634efdd777741a3837e3e3cae53fd559afcc23ceeeabbfe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb9640bb56460bb7d1ed40effc6fa3cada4ee5369407b11a14c4a364b5b5c44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47127ed86ce2e9c91b3876f54c95e0df6039f549eaff06c28a9922f0b660aa53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560a6654dd1da05abbf52bb38ec0fd2ea0108fabdc14d4775dc3f8334cc9a1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.685721 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47d8c7e405f12dbf2de2c53f2dc4309d02278887a2666a9ec491d490c5ce3f89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:53:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.723448 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.723489 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.723499 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.723514 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.723526 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:58Z","lastTransitionTime":"2025-11-25T14:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.826100 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.826137 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.826146 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.826160 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.826170 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:58Z","lastTransitionTime":"2025-11-25T14:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.929019 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.929152 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.929171 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.929192 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:58 crc kubenswrapper[4806]: I1125 14:53:58.929203 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:58Z","lastTransitionTime":"2025-11-25T14:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.031641 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.031706 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.031718 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.031733 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.031744 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:59Z","lastTransitionTime":"2025-11-25T14:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.088415 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.088501 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:53:59 crc kubenswrapper[4806]: E1125 14:53:59.088561 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.088500 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:53:59 crc kubenswrapper[4806]: E1125 14:53:59.088669 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:53:59 crc kubenswrapper[4806]: E1125 14:53:59.088735 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f" Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.134057 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.134119 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.134128 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.134145 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.134158 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:59Z","lastTransitionTime":"2025-11-25T14:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.237225 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.237276 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.237288 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.237306 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.237346 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:59Z","lastTransitionTime":"2025-11-25T14:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.340815 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.340853 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.340862 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.340885 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.340896 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:59Z","lastTransitionTime":"2025-11-25T14:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.444021 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.444066 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.444075 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.444092 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.444101 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:59Z","lastTransitionTime":"2025-11-25T14:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.547740 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.547823 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.547851 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.547890 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.547917 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:59Z","lastTransitionTime":"2025-11-25T14:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.651400 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.651451 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.651462 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.651481 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.651492 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:59Z","lastTransitionTime":"2025-11-25T14:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.754121 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.754175 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.754184 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.754197 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.754205 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:59Z","lastTransitionTime":"2025-11-25T14:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.856811 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.856856 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.856868 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.856893 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.856907 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:59Z","lastTransitionTime":"2025-11-25T14:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.959922 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.959953 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.959961 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.959977 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:53:59 crc kubenswrapper[4806]: I1125 14:53:59.959991 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:53:59Z","lastTransitionTime":"2025-11-25T14:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.061766 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.061811 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.061824 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.061841 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.061852 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:00Z","lastTransitionTime":"2025-11-25T14:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.089676 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:54:00 crc kubenswrapper[4806]: E1125 14:54:00.089774 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.101762 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.164057 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.164095 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.164106 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.164124 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.164136 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:00Z","lastTransitionTime":"2025-11-25T14:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.266010 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.266081 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.266091 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.266103 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.266112 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:00Z","lastTransitionTime":"2025-11-25T14:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.368732 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.368787 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.368800 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.368828 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.368841 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:00Z","lastTransitionTime":"2025-11-25T14:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.471304 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.471381 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.471414 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.471433 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.471445 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:00Z","lastTransitionTime":"2025-11-25T14:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.573832 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.573879 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.573891 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.573911 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.573923 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:00Z","lastTransitionTime":"2025-11-25T14:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.676374 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.676437 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.676450 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.676473 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.676486 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:00Z","lastTransitionTime":"2025-11-25T14:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.779119 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.779167 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.779180 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.779197 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.779208 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:00Z","lastTransitionTime":"2025-11-25T14:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.881531 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.881574 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.881587 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.881605 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.881615 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:00Z","lastTransitionTime":"2025-11-25T14:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.983568 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.983601 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.983610 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.983623 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:00 crc kubenswrapper[4806]: I1125 14:54:00.983633 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:00Z","lastTransitionTime":"2025-11-25T14:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:01 crc kubenswrapper[4806]: I1125 14:54:01.086028 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:01 crc kubenswrapper[4806]: I1125 14:54:01.086060 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:01 crc kubenswrapper[4806]: I1125 14:54:01.086069 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:01 crc kubenswrapper[4806]: I1125 14:54:01.086083 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:01 crc kubenswrapper[4806]: I1125 14:54:01.086092 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:01Z","lastTransitionTime":"2025-11-25T14:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:01 crc kubenswrapper[4806]: I1125 14:54:01.088538 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 14:54:01 crc kubenswrapper[4806]: I1125 14:54:01.088556 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 14:54:01 crc kubenswrapper[4806]: E1125 14:54:01.088621 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 14:54:01 crc kubenswrapper[4806]: E1125 14:54:01.088696 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 14:54:01 crc kubenswrapper[4806]: I1125 14:54:01.088538 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh"
Nov 25 14:54:01 crc kubenswrapper[4806]: E1125 14:54:01.088761 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f"
Nov 25 14:54:01 crc kubenswrapper[4806]: I1125 14:54:01.188650 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:01 crc kubenswrapper[4806]: I1125 14:54:01.188690 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:01 crc kubenswrapper[4806]: I1125 14:54:01.188702 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:01 crc kubenswrapper[4806]: I1125 14:54:01.188721 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:01 crc kubenswrapper[4806]: I1125 14:54:01.188732 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:01Z","lastTransitionTime":"2025-11-25T14:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:01 crc kubenswrapper[4806]: I1125 14:54:01.291460 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:01 crc kubenswrapper[4806]: I1125 14:54:01.291510 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:01 crc kubenswrapper[4806]: I1125 14:54:01.291521 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:01 crc kubenswrapper[4806]: I1125 14:54:01.291542 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:01 crc kubenswrapper[4806]: I1125 14:54:01.291557 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:01Z","lastTransitionTime":"2025-11-25T14:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:01 crc kubenswrapper[4806]: I1125 14:54:01.394597 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:01 crc kubenswrapper[4806]: I1125 14:54:01.394668 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:01 crc kubenswrapper[4806]: I1125 14:54:01.394679 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:01 crc kubenswrapper[4806]: I1125 14:54:01.394705 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:01 crc kubenswrapper[4806]: I1125 14:54:01.394719 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:01Z","lastTransitionTime":"2025-11-25T14:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:01 crc kubenswrapper[4806]: I1125 14:54:01.496694 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:01 crc kubenswrapper[4806]: I1125 14:54:01.497162 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:01 crc kubenswrapper[4806]: I1125 14:54:01.497240 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:01 crc kubenswrapper[4806]: I1125 14:54:01.497370 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:01 crc kubenswrapper[4806]: I1125 14:54:01.497466 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:01Z","lastTransitionTime":"2025-11-25T14:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:01 crc kubenswrapper[4806]: I1125 14:54:01.600020 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:01 crc kubenswrapper[4806]: I1125 14:54:01.600088 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:01 crc kubenswrapper[4806]: I1125 14:54:01.600100 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:01 crc kubenswrapper[4806]: I1125 14:54:01.600116 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:01 crc kubenswrapper[4806]: I1125 14:54:01.600390 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:01Z","lastTransitionTime":"2025-11-25T14:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:01 crc kubenswrapper[4806]: I1125 14:54:01.702282 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:01 crc kubenswrapper[4806]: I1125 14:54:01.702554 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:01 crc kubenswrapper[4806]: I1125 14:54:01.702625 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:01 crc kubenswrapper[4806]: I1125 14:54:01.702692 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:01 crc kubenswrapper[4806]: I1125 14:54:01.702748 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:01Z","lastTransitionTime":"2025-11-25T14:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:01 crc kubenswrapper[4806]: I1125 14:54:01.804737 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:01 crc kubenswrapper[4806]: I1125 14:54:01.804779 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:01 crc kubenswrapper[4806]: I1125 14:54:01.804792 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:01 crc kubenswrapper[4806]: I1125 14:54:01.804808 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:01 crc kubenswrapper[4806]: I1125 14:54:01.804818 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:01Z","lastTransitionTime":"2025-11-25T14:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:01 crc kubenswrapper[4806]: I1125 14:54:01.906517 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:01 crc kubenswrapper[4806]: I1125 14:54:01.906565 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:01 crc kubenswrapper[4806]: I1125 14:54:01.906579 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:01 crc kubenswrapper[4806]: I1125 14:54:01.906596 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:01 crc kubenswrapper[4806]: I1125 14:54:01.906607 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:01Z","lastTransitionTime":"2025-11-25T14:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.009086 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.009129 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.009138 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.009156 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.009168 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:02Z","lastTransitionTime":"2025-11-25T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.089326 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 14:54:02 crc kubenswrapper[4806]: E1125 14:54:02.089518 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.111360 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.111403 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.111414 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.111428 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.111438 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:02Z","lastTransitionTime":"2025-11-25T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.213617 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.213673 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.213686 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.213706 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.213718 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:02Z","lastTransitionTime":"2025-11-25T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.317031 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.317111 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.317197 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.317219 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.317245 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:02Z","lastTransitionTime":"2025-11-25T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.420598 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.420865 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.420966 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.421055 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.421143 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:02Z","lastTransitionTime":"2025-11-25T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.524062 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.524115 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.524131 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.524149 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.524162 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:02Z","lastTransitionTime":"2025-11-25T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.626161 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.626243 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.626255 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.626272 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.626281 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:02Z","lastTransitionTime":"2025-11-25T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.728949 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.729002 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.729011 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.729026 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.729035 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:02Z","lastTransitionTime":"2025-11-25T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.831769 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.831818 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.831827 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.831844 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.831856 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:02Z","lastTransitionTime":"2025-11-25T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.934599 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.934648 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.934658 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.934674 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:02 crc kubenswrapper[4806]: I1125 14:54:02.934684 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:02Z","lastTransitionTime":"2025-11-25T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.037672 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.038178 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.038392 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.038554 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.038665 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:03Z","lastTransitionTime":"2025-11-25T14:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.088562 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.088629 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh"
Nov 25 14:54:03 crc kubenswrapper[4806]: E1125 14:54:03.088770 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 14:54:03 crc kubenswrapper[4806]: E1125 14:54:03.088857 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f"
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.088602 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 14:54:03 crc kubenswrapper[4806]: E1125 14:54:03.089464 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.140279 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.140353 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.140364 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.140380 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.140390 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:03Z","lastTransitionTime":"2025-11-25T14:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.242893 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.242934 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.242946 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.242967 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.242979 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:03Z","lastTransitionTime":"2025-11-25T14:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.344814 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.344859 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.344883 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.344901 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.344911 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:03Z","lastTransitionTime":"2025-11-25T14:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.446785 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.446837 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.446848 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.446863 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.446873 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:03Z","lastTransitionTime":"2025-11-25T14:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.549142 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.549177 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.549189 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.549204 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.549214 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:03Z","lastTransitionTime":"2025-11-25T14:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.651372 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.651406 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.651416 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.651429 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.651437 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:03Z","lastTransitionTime":"2025-11-25T14:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.753790 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.753848 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.753863 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.753882 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.753895 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:03Z","lastTransitionTime":"2025-11-25T14:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.856206 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.856238 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.856249 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.856264 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.856274 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:03Z","lastTransitionTime":"2025-11-25T14:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.958004 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.958301 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.958426 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.958495 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:03 crc kubenswrapper[4806]: I1125 14:54:03.958567 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:03Z","lastTransitionTime":"2025-11-25T14:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.061489 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.061524 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.061533 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.061547 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.061556 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:04Z","lastTransitionTime":"2025-11-25T14:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.088896 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 14:54:04 crc kubenswrapper[4806]: E1125 14:54:04.089199 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.164025 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.164063 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.164073 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.164087 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.164098 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:04Z","lastTransitionTime":"2025-11-25T14:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.267367 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.267414 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.267428 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.267446 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.267457 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:04Z","lastTransitionTime":"2025-11-25T14:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.369777 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.369824 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.369836 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.369854 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.369868 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:04Z","lastTransitionTime":"2025-11-25T14:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.471762 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.471803 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.471811 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.471823 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.471831 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:04Z","lastTransitionTime":"2025-11-25T14:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.574478 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.574521 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.574532 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.574550 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.574560 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:04Z","lastTransitionTime":"2025-11-25T14:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.676720 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.676760 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.676771 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.676790 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.676802 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:04Z","lastTransitionTime":"2025-11-25T14:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.779487 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.779523 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.779533 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.779548 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.779559 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:04Z","lastTransitionTime":"2025-11-25T14:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.881913 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.881940 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.881948 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.881962 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.881970 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:04Z","lastTransitionTime":"2025-11-25T14:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.906124 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.906156 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.906165 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.906180 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.906190 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:04Z","lastTransitionTime":"2025-11-25T14:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:04 crc kubenswrapper[4806]: E1125 14:54:04.919285 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e0f8c346-5c5c-4b9e-86cb-75f930f1dadc\\\",\\\"systemUUID\\\":\\\"c9d50cce-a734-4456-ad77-ec687d096f9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:04Z is after 2025-08-24T17:21:41Z"
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.923501 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.923550 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.923569 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.923594 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.923610 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:04Z","lastTransitionTime":"2025-11-25T14:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:04 crc kubenswrapper[4806]: E1125 14:54:04.937370 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e0f8c346-5c5c-4b9e-86cb-75f930f1dadc\\\",\\\"systemUUID\\\":\\\"c9d50cce-a734-4456-ad77-ec687d096f9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:04Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.940641 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.940703 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.940724 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.940751 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.940773 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:04Z","lastTransitionTime":"2025-11-25T14:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:04 crc kubenswrapper[4806]: E1125 14:54:04.957218 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e0f8c346-5c5c-4b9e-86cb-75f930f1dadc\\\",\\\"systemUUID\\\":\\\"c9d50cce-a734-4456-ad77-ec687d096f9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:04Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.962930 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.962997 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.963021 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.963051 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.963073 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:04Z","lastTransitionTime":"2025-11-25T14:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:04 crc kubenswrapper[4806]: E1125 14:54:04.977871 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e0f8c346-5c5c-4b9e-86cb-75f930f1dadc\\\",\\\"systemUUID\\\":\\\"c9d50cce-a734-4456-ad77-ec687d096f9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:04Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.981617 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.981646 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.981655 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.981669 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.981677 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:04Z","lastTransitionTime":"2025-11-25T14:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:04 crc kubenswrapper[4806]: E1125 14:54:04.995546 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e0f8c346-5c5c-4b9e-86cb-75f930f1dadc\\\",\\\"systemUUID\\\":\\\"c9d50cce-a734-4456-ad77-ec687d096f9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:04Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:04 crc kubenswrapper[4806]: E1125 14:54:04.995773 4806 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.997638 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.997692 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.997704 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.997720 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:04 crc kubenswrapper[4806]: I1125 14:54:04.997731 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:04Z","lastTransitionTime":"2025-11-25T14:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:05 crc kubenswrapper[4806]: I1125 14:54:05.089082 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:54:05 crc kubenswrapper[4806]: I1125 14:54:05.089171 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:05 crc kubenswrapper[4806]: E1125 14:54:05.089215 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:54:05 crc kubenswrapper[4806]: I1125 14:54:05.089171 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:54:05 crc kubenswrapper[4806]: E1125 14:54:05.089309 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:54:05 crc kubenswrapper[4806]: E1125 14:54:05.089442 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f" Nov 25 14:54:05 crc kubenswrapper[4806]: I1125 14:54:05.101058 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:05 crc kubenswrapper[4806]: I1125 14:54:05.101105 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:05 crc kubenswrapper[4806]: I1125 14:54:05.101116 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:05 crc kubenswrapper[4806]: I1125 14:54:05.101133 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:05 crc kubenswrapper[4806]: I1125 14:54:05.101145 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:05Z","lastTransitionTime":"2025-11-25T14:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:05 crc kubenswrapper[4806]: I1125 14:54:05.203334 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:05 crc kubenswrapper[4806]: I1125 14:54:05.203377 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:05 crc kubenswrapper[4806]: I1125 14:54:05.203385 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:05 crc kubenswrapper[4806]: I1125 14:54:05.203404 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:05 crc kubenswrapper[4806]: I1125 14:54:05.203415 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:05Z","lastTransitionTime":"2025-11-25T14:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:05 crc kubenswrapper[4806]: I1125 14:54:05.305848 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:05 crc kubenswrapper[4806]: I1125 14:54:05.306105 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:05 crc kubenswrapper[4806]: I1125 14:54:05.306172 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:05 crc kubenswrapper[4806]: I1125 14:54:05.306253 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:05 crc kubenswrapper[4806]: I1125 14:54:05.306336 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:05Z","lastTransitionTime":"2025-11-25T14:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:05 crc kubenswrapper[4806]: I1125 14:54:05.408544 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:05 crc kubenswrapper[4806]: I1125 14:54:05.408772 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:05 crc kubenswrapper[4806]: I1125 14:54:05.408865 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:05 crc kubenswrapper[4806]: I1125 14:54:05.408926 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:05 crc kubenswrapper[4806]: I1125 14:54:05.408990 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:05Z","lastTransitionTime":"2025-11-25T14:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:05 crc kubenswrapper[4806]: I1125 14:54:05.510978 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:05 crc kubenswrapper[4806]: I1125 14:54:05.511021 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:05 crc kubenswrapper[4806]: I1125 14:54:05.511031 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:05 crc kubenswrapper[4806]: I1125 14:54:05.511045 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:05 crc kubenswrapper[4806]: I1125 14:54:05.511055 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:05Z","lastTransitionTime":"2025-11-25T14:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:05 crc kubenswrapper[4806]: I1125 14:54:05.613369 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:05 crc kubenswrapper[4806]: I1125 14:54:05.613401 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:05 crc kubenswrapper[4806]: I1125 14:54:05.613409 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:05 crc kubenswrapper[4806]: I1125 14:54:05.613422 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:05 crc kubenswrapper[4806]: I1125 14:54:05.613432 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:05Z","lastTransitionTime":"2025-11-25T14:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:05 crc kubenswrapper[4806]: I1125 14:54:05.715371 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:05 crc kubenswrapper[4806]: I1125 14:54:05.715422 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:05 crc kubenswrapper[4806]: I1125 14:54:05.715432 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:05 crc kubenswrapper[4806]: I1125 14:54:05.715448 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:05 crc kubenswrapper[4806]: I1125 14:54:05.715458 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:05Z","lastTransitionTime":"2025-11-25T14:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:05 crc kubenswrapper[4806]: I1125 14:54:05.817611 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:05 crc kubenswrapper[4806]: I1125 14:54:05.817649 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:05 crc kubenswrapper[4806]: I1125 14:54:05.817658 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:05 crc kubenswrapper[4806]: I1125 14:54:05.817672 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:05 crc kubenswrapper[4806]: I1125 14:54:05.817681 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:05Z","lastTransitionTime":"2025-11-25T14:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:05 crc kubenswrapper[4806]: I1125 14:54:05.920063 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:05 crc kubenswrapper[4806]: I1125 14:54:05.920109 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:05 crc kubenswrapper[4806]: I1125 14:54:05.920122 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:05 crc kubenswrapper[4806]: I1125 14:54:05.920140 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:05 crc kubenswrapper[4806]: I1125 14:54:05.920151 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:05Z","lastTransitionTime":"2025-11-25T14:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.022641 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.022685 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.022696 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.022712 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.022723 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:06Z","lastTransitionTime":"2025-11-25T14:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.088355 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:54:06 crc kubenswrapper[4806]: E1125 14:54:06.088500 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.125286 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.125357 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.125368 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.125386 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.125399 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:06Z","lastTransitionTime":"2025-11-25T14:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.227711 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.227753 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.227763 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.227776 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.227787 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:06Z","lastTransitionTime":"2025-11-25T14:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.329969 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.330011 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.330019 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.330035 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.330045 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:06Z","lastTransitionTime":"2025-11-25T14:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.433004 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.433040 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.433050 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.433062 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.433071 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:06Z","lastTransitionTime":"2025-11-25T14:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.534899 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.534940 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.534951 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.534966 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.534974 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:06Z","lastTransitionTime":"2025-11-25T14:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.637105 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.637143 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.637152 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.637165 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.637173 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:06Z","lastTransitionTime":"2025-11-25T14:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.739133 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.739161 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.739168 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.739181 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.739191 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:06Z","lastTransitionTime":"2025-11-25T14:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.842092 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.842127 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.842136 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.842149 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.842158 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:06Z","lastTransitionTime":"2025-11-25T14:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.945117 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.945189 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.945211 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.945242 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:06 crc kubenswrapper[4806]: I1125 14:54:06.945262 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:06Z","lastTransitionTime":"2025-11-25T14:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.047708 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.047763 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.047780 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.047803 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.047819 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:07Z","lastTransitionTime":"2025-11-25T14:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.089097 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.089178 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.089097 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:54:07 crc kubenswrapper[4806]: E1125 14:54:07.089277 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f" Nov 25 14:54:07 crc kubenswrapper[4806]: E1125 14:54:07.089395 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:54:07 crc kubenswrapper[4806]: E1125 14:54:07.089911 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.090355 4806 scope.go:117] "RemoveContainer" containerID="62b70f432c6dd0b1c618daa1a0ab62a0cb297db059eca02b5eea3ba2ab687166" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.151142 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.151302 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.151414 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.151488 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.151551 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:07Z","lastTransitionTime":"2025-11-25T14:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.254601 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.254648 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.254659 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.254675 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.254687 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:07Z","lastTransitionTime":"2025-11-25T14:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.357652 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.357697 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.357708 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.357721 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.357730 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:07Z","lastTransitionTime":"2025-11-25T14:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.460483 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.460527 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.460537 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.460553 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.460567 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:07Z","lastTransitionTime":"2025-11-25T14:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.501538 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69wls_0fff40d8-fd9f-49da-953f-89894b4ef3a1/ovnkube-controller/2.log" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.504372 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" event={"ID":"0fff40d8-fd9f-49da-953f-89894b4ef3a1","Type":"ContainerStarted","Data":"ae1b49fe171571509d8fd7d94ba703e20354f20445a4d493b22eb1d6a1649368"} Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.504779 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.519488 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a68684-5473-4e76-bdb6-bc38a8640fed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae860c1b5905b9a0634efdd777741a3837e3e3cae53fd559afcc23ceeeabbfe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb9640bb56460bb7d1ed40effc6fa3cada4ee5369407b11a14c4a364b5b5c44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47127ed86ce2e9c91b38
76f54c95e0df6039f549eaff06c28a9922f0b660aa53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560a6654dd1da05abbf52bb38ec0fd2ea0108fabdc14d4775dc3f8334cc9a1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:07Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.533409 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47d8c7e405f12dbf2de2c53f2dc4309d02278887a2666a9ec491d490c5ce3f89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:07Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.545837 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:07Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.563591 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.563633 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.563641 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.563656 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.563667 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:07Z","lastTransitionTime":"2025-11-25T14:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.564175 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5lhpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57550f59-b31f-43c1-adca-565f246d4083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b926db706959d0dcd4f9d9366913105ae3b70b4cb2da018ed9e778ceb84cc5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfx48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5lhpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:07Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.581736 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:52:51.509929 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:52:51.511569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2779674049/tls.crt::/tmp/serving-cert-2779674049/tls.key\\\\\\\"\\\\nI1125 14:53:06.763374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:53:06.765539 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:53:06.765558 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:53:06.765579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:53:06.765584 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:53:06.773062 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:53:06.773084 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773089 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:53:06.773097 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:53:06.773100 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:53:06.773103 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:53:06.773329 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:53:06.776065 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:07Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.595020 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:07Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.611193 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:07Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.633022 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fff40d8-fd9f-49da-953f-89894b4ef3a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae1b49fe171571509d8fd7d94ba703e20354f204
45a4d493b22eb1d6a1649368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62b70f432c6dd0b1c618daa1a0ab62a0cb297db059eca02b5eea3ba2ab687166\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:53:35Z\\\",\\\"message\\\":\\\"ler.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1125 14:53:35.941874 6451 services_controller.go:360] Finished syncing service oauth-openshift on namespace openshift-authentication for network=default : 1.676717ms\\\\nI1125 14:53:35.941890 6451 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1125 14:53:35.941286 6451 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI1125 14:53:35.941952 6451 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1125 14:53:35.941307 6451 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-zt8m9\\\\nF1125 14:53:35.941961 6451 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling 
we\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69wls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:07Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.645267 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2fd4bb5-248d-441b-b551-714801eed504\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1444bd5d571c97e876be8a7806aa59a9e6777f78f11089042f1961ca237be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600d8f6e0a57ecf028d67a5d43177b039da18658131bbe103857578e826661a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600d8f6e0a57ecf028d67a5d43177b039da18658131bbe103857578e826661a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:07Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.656159 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mwdqt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a4c6d7aeb19206fd79e28c558467bda58d58c4118d27bb9aeb9de68a55a67a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0cb67ea6e13645bb6513fcd0d90197317fd39f29f25deddf11e1110572e1986\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:53:56Z\\\",\\\"message\\\":\\\"2025-11-25T14:53:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eb5b5c65-6dbe-4817-a628-cca8eb1fda77\\\\n2025-11-25T14:53:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eb5b5c65-6dbe-4817-a628-cca8eb1fda77 to /host/opt/cni/bin/\\\\n2025-11-25T14:53:11Z [verbose] multus-daemon started\\\\n2025-11-25T14:53:11Z [verbose] Readiness Indicator file check\\\\n2025-11-25T14:53:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbntn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mwdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:07Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.665716 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.665750 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.665765 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.665792 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.665804 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:07Z","lastTransitionTime":"2025-11-25T14:54:07Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.669284 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39baff20-1e9a-48b1-8872-155c5ad5931d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a777eee4ec1617baa0196c39314ff92a8111ee97fecdabcff51f8d23a732aed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://657c6813fefe5a305c46e51a3be90de3dda23db56498959efe41f94d05e8f54d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-kclf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:07Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.681096 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2mmdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a29a188-9022-41a4-8f1f-4a3274ffe3f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dc124b078217075b4e38f7b144af41d258e32283392fe2909cf227a9902012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhrx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df53d52334de68ebecc9283d36720b9734a8a410af99e1ae3566979e52cb6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhrx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2mmdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:07Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.703981 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jcq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9137647d-1ca0-49be-b482-8d04428e5325\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae80eda0f08f87065d8a30cf17d80f2be06e3b4a812798747aaed9cf6be1e82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dttnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jcq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:07Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.722239 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lsrxh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e22ad0-2903-4ed0-94ad-40d713f99c9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cz9hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cz9hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lsrxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:07Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.756049 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"418e7888-2ed7-4d42-9100-527cff656249\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41d39b5cec8b13a29be4b5cc55488b94bcb5a8882baebe3dd1b4783116e0d745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a13f5b656df38c8be5558398c2d7b88f04a8c892edbd2cb06516aa94b3d4c71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c67538dcc66b71639bef32e5a359d899aeffb45958b74fce7d7c09f0874f59cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9fb4fcadb881a8d1f35ac8df4c8b7654c07ea0c5ab061ef99c1396b9c1e76b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db9fb4fcadb881a8d1f35ac8df4c8b7654c07ea0c5ab061ef99c1396b9c1e76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:07Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.767985 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.768028 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.768037 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.768089 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.768098 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:07Z","lastTransitionTime":"2025-11-25T14:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.775711 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5caad4136ada520798bb17c547f59cb7ec6e5310c1e4737bf82dc97e1b992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:07Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.788753 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08d31864859b0077dd2e586c093eb3865537fd5567ad2b55ff61ba39cd3ee56e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b60a5efe869e1d377161b70ad123fbf1bc06d0089c96eaffb7ae8129a796c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:07Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.802264 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"228a80dc-3be5-4125-9d07-c8eb262a0eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fb517d9c8fca06d95f26ed65bbc78b53f6c555870af6ebd15afe2d5177f2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zt8m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:07Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.870735 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.870772 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:07 crc 
kubenswrapper[4806]: I1125 14:54:07.870781 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.870794 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.870805 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:07Z","lastTransitionTime":"2025-11-25T14:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.973350 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.973400 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.973411 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.973428 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:07 crc kubenswrapper[4806]: I1125 14:54:07.973439 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:07Z","lastTransitionTime":"2025-11-25T14:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.075917 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.075960 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.075971 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.075985 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.075994 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:08Z","lastTransitionTime":"2025-11-25T14:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.088149 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:54:08 crc kubenswrapper[4806]: E1125 14:54:08.088271 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.100430 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.109210 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5lhpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57550f59-b31f-43c1-adca-565f246d4083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b926db706959d0dcd4f9d9366913105ae3b70b4cb2da018ed9e778ceb84cc5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfx48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5lhpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.119632 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a68684-5473-4e76-bdb6-bc38a8640fed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae860c1b5905b9a0634efdd777741a3837e3e3cae53fd559afcc23ceeeabbfe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb9640bb56460bb7d1ed40effc6fa3cada4ee5369407b11a14c4a364b5b5c44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47127ed86ce2e9c91b3876f54c95e0df6039f549eaff06c28a9922f0b660aa53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025
-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560a6654dd1da05abbf52bb38ec0fd2ea0108fabdc14d4775dc3f8334cc9a1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.130245 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47d8c7e405f12dbf2de2c53f2dc4309d02278887a2666a9ec491d490c5ce3f89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.142886 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.160630 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fff40d8-fd9f-49da-953f-89894b4ef3a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae1b49fe171571509d8fd7d94ba703e20354f204
45a4d493b22eb1d6a1649368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62b70f432c6dd0b1c618daa1a0ab62a0cb297db059eca02b5eea3ba2ab687166\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:53:35Z\\\",\\\"message\\\":\\\"ler.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1125 14:53:35.941874 6451 services_controller.go:360] Finished syncing service oauth-openshift on namespace openshift-authentication for network=default : 1.676717ms\\\\nI1125 14:53:35.941890 6451 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1125 14:53:35.941286 6451 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI1125 14:53:35.941952 6451 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1125 14:53:35.941307 6451 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-zt8m9\\\\nF1125 14:53:35.941961 6451 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling 
we\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69wls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.173746 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:52:51.509929 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:52:51.511569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2779674049/tls.crt::/tmp/serving-cert-2779674049/tls.key\\\\\\\"\\\\nI1125 14:53:06.763374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:53:06.765539 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:53:06.765558 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:53:06.765579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:53:06.765584 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:53:06.773062 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:53:06.773084 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773089 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:53:06.773097 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:53:06.773100 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:53:06.773103 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:53:06.773329 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:53:06.776065 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.183613 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.183656 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.183667 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.183687 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.183699 4806 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:08Z","lastTransitionTime":"2025-11-25T14:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.188917 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.200238 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39baff20-1e9a-48b1-8872-155c5ad5931d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a777eee4ec1617baa0196c39314ff92a8111ee97fecdabcff51f8d23a732aed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://657c6813fefe5a305c46e51a3be90de3dda23db56498959efe41f94d05e8f54d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kclf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.214252 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2mmdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a29a188-9022-41a4-8f1f-4a3274ffe3f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dc124b078217075b4e38f7b144af41d258e32283392fe2909cf227a9902012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhrx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df53d52334de68ebecc9283d36720b9734a8a410af99e1ae3566979e52cb6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77325
7453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhrx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2mmdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.223404 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2fd4bb5-248d-441b-b551-714801eed504\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1444bd5d571c97e876be8a7806aa59a9e6777f78f11089042f1961ca237be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600d8f6e0a57ecf028d67a5d43177b039da18658131bbe103857578e826661a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600d8f6e0a57ecf028d67a5d43177b039da18658131bbe103857578e826661a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.234776 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mwdqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a4c6d7aeb19206fd79e28c558467bda58d58c4118d27bb9aeb9de68a55a67a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0cb67ea6e13645bb6513fcd0d90197317fd39f29f25deddf11e1110572e1986\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:53:56Z\\\",\\\"message\\\":\\\"2025-11-25T14:53:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eb5b5c65-6dbe-4817-a628-cca8eb1fda77\\\\n2025-11-25T14:53:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eb5b5c65-6dbe-4817-a628-cca8eb1fda77 to /host/opt/cni/bin/\\\\n2025-11-25T14:53:11Z [verbose] multus-daemon started\\\\n2025-11-25T14:53:11Z [verbose] Readiness Indicator file check\\\\n2025-11-25T14:53:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbntn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mwdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.247557 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5caad4136ada520798bb17c547f59cb7ec6e5310c1e4737bf82dc97e1b992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.259197 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08d31864859b0077dd2e586c093eb3865537fd5567ad2b55ff61ba39cd3ee56e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b60a5efe869e1d377161b70ad123fbf1bc06d0089c96eaffb7ae8129a796c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.272736 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"228a80dc-3be5-4125-9d07-c8eb262a0eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fb517d9c8fca06d95f26ed65bbc78b53f6c555870af6ebd15afe2d5177f2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zt8m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.282792 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jcq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9137647d-1ca0-49be-b482-8d04428e5325\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae80eda0f08f87065d8a30cf17d80f2be06e3b4a812798747aaed9cf6be1e82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dttnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jcq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.286367 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.286403 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.286419 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.286441 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.286458 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:08Z","lastTransitionTime":"2025-11-25T14:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.296644 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lsrxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e22ad0-2903-4ed0-94ad-40d713f99c9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cz9hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cz9hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lsrxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.309396 4806 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"418e7888-2ed7-4d42-9100-527cff656249\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41d39b5cec8b13a29be4b5cc55488b94bcb5a8882baebe3dd1b4783116e0d745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a13f5b656df38c8be5558398c2d7b88f04a8c892edbd2cb06516aa94b3d4c71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c67538dcc66b71639bef32e5a359d899aeffb45958b74fce7d7c09f0874f59cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"c
ontainerID\\\":\\\"cri-o://db9fb4fcadb881a8d1f35ac8df4c8b7654c07ea0c5ab061ef99c1396b9c1e76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db9fb4fcadb881a8d1f35ac8df4c8b7654c07ea0c5ab061ef99c1396b9c1e76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.388567 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.388605 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.388616 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.388630 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.388640 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:08Z","lastTransitionTime":"2025-11-25T14:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.490130 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.490181 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.490194 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.490211 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.490222 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:08Z","lastTransitionTime":"2025-11-25T14:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.507851 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69wls_0fff40d8-fd9f-49da-953f-89894b4ef3a1/ovnkube-controller/3.log" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.508470 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69wls_0fff40d8-fd9f-49da-953f-89894b4ef3a1/ovnkube-controller/2.log" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.510883 4806 generic.go:334] "Generic (PLEG): container finished" podID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerID="ae1b49fe171571509d8fd7d94ba703e20354f20445a4d493b22eb1d6a1649368" exitCode=1 Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.510938 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" event={"ID":"0fff40d8-fd9f-49da-953f-89894b4ef3a1","Type":"ContainerDied","Data":"ae1b49fe171571509d8fd7d94ba703e20354f20445a4d493b22eb1d6a1649368"} Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.510974 4806 scope.go:117] "RemoveContainer" containerID="62b70f432c6dd0b1c618daa1a0ab62a0cb297db059eca02b5eea3ba2ab687166" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.511949 4806 scope.go:117] "RemoveContainer" containerID="ae1b49fe171571509d8fd7d94ba703e20354f20445a4d493b22eb1d6a1649368" Nov 25 14:54:08 crc kubenswrapper[4806]: E1125 14:54:08.514163 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-69wls_openshift-ovn-kubernetes(0fff40d8-fd9f-49da-953f-89894b4ef3a1)\"" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.525346 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08d31864859b0077dd2e586c093eb3865537fd5567ad2b55ff61ba39cd3ee56e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b60a5efe869e1d377161b70ad123fbf1bc06d0089c96eaffb7ae8129a796c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.541663 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"228a80dc-3be5-4125-9d07-c8eb262a0eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fb517d9c8fca06d95f26ed65bbc78b53f6c555870af6ebd15afe2d5177f2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zt8m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.552126 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jcq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9137647d-1ca0-49be-b482-8d04428e5325\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae80eda0f08f87065d8a30cf17d80f2be06e3b4a812798747aaed9cf6be1e82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dttnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jcq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.561626 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lsrxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e22ad0-2903-4ed0-94ad-40d713f99c9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cz9hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cz9hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lsrxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.572504 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"418e7888-2ed7-4d42-9100-527cff656249\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41d39b5cec8b13a29be4b5cc55488b94bcb5a8882baebe3dd1b4783116e0d745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a13f5b656df38c8be5558398c2d7b88f04a8c892edbd2cb06516aa94b3d4c71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c67538dcc66b71639bef32e5a359d899aeffb45958b74fce7d7c09f0874f59cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9fb4fcadb881a8d1f35ac8df4c8b7654c07ea0c5ab061ef99c1396b9c1e76b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db9fb4fcadb881a8d1f35ac8df4c8b7654c07ea0c5ab061ef99c1396b9c1e76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.584123 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5caad4136ada520798bb17c547f59cb7ec6e5310c1e4737bf82dc97e1b992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:08Z is after 
2025-08-24T17:21:41Z" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.592864 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.592900 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.592912 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.592928 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.592940 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:08Z","lastTransitionTime":"2025-11-25T14:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.594583 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5lhpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57550f59-b31f-43c1-adca-565f246d4083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b926db706959d0dcd4f9d9366913105ae3b70b4cb2da018ed9e778ceb84cc5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfx48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5lhpk\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.606064 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a68684-5473-4e76-bdb6-bc38a8640fed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae860c1b5905b9a0634efdd777741a3837e3e3cae53fd559afcc23ceeeabbfe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb9640bb56460bb7d1ed40effc6fa3cada4ee5369407b11a14c4a364b5b5c44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47127ed86ce2e9c91b3876f54c95e0df6039f549eaff06c28a9922f0b660aa53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560a6654dd1da05abbf52bb38ec0fd2ea0108fabdc14d4775dc3f8334cc9a1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.617051 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47d8c7e405f12dbf2de2c53f2dc4309d02278887a2666a9ec491d490c5ce3f89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.627590 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.644564 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fff40d8-fd9f-49da-953f-89894b4ef3a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae1b49fe171571509d8fd7d94ba703e20354f204
45a4d493b22eb1d6a1649368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62b70f432c6dd0b1c618daa1a0ab62a0cb297db059eca02b5eea3ba2ab687166\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:53:35Z\\\",\\\"message\\\":\\\"ler.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1125 14:53:35.941874 6451 services_controller.go:360] Finished syncing service oauth-openshift on namespace openshift-authentication for network=default : 1.676717ms\\\\nI1125 14:53:35.941890 6451 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1125 14:53:35.941286 6451 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI1125 14:53:35.941952 6451 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI1125 14:53:35.941307 6451 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-additional-cni-plugins-zt8m9\\\\nF1125 14:53:35.941961 6451 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling we\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae1b49fe171571509d8fd7d94ba703e20354f20445a4d493b22eb1d6a1649368\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:54:08Z\\\",\\\"message\\\":\\\"0 2025-02-23 05:23:11 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:oauth-openshift] map[operator.openshift.io/spec-hash:d9e6d53076d47ab2d123d8b1ba8ec6543488d973dcc4e02349493cd1c33bce83 service.alpha.openshift.io/serving-cert-secret-name:v4-0-config-system-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: oauth-openshift,},ClusterIP:10.217.4.222,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.222],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI1125 
14:54:08.009395 6843 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI1125 14:54:08.009407 6843 ovn.go:134] Ensuring zone local for\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69wls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.659425 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:52:51.509929 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:52:51.511569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2779674049/tls.crt::/tmp/serving-cert-2779674049/tls.key\\\\\\\"\\\\nI1125 14:53:06.763374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:53:06.765539 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:53:06.765558 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:53:06.765579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:53:06.765584 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:53:06.773062 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:53:06.773084 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773089 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:53:06.773097 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:53:06.773100 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:53:06.773103 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:53:06.773329 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:53:06.776065 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.671272 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.683661 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.693060 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2mmdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a29a188-9022-41a4-8f1f-4a3274ffe3f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dc124b078217075b4e38f7b144af41d258e32283392fe2909cf227a9902012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhrx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df53d52334de68ebecc9283d36720b9734a8a410af99e1ae3566979e52cb6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhrx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2mmdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.694916 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.695060 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.695141 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.695233 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.695309 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:08Z","lastTransitionTime":"2025-11-25T14:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.702048 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2fd4bb5-248d-441b-b551-714801eed504\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1444bd5d571c97e876be8a7806aa59a9e6777f78f11089042f1961ca237be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600d8f6e0a57ecf028d67a5d43177b039da18658131bbe103857578e826661a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600d8f6e0a57ecf028d67a5d43177b039da18658131bbe103857578e826661a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.714513 4806 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mwdqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a4c6d7aeb19206fd79e28c558467bda58d58c4118d27bb9aeb9de68a55a67a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0cb67ea6e13645bb6513fcd0d90197317fd39f29f25deddf11e1110572e1986\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:53:56Z\\\",\\\"message\\\":\\\"2025-11-25T14:53:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eb5b5c65-6dbe-4817-a628-cca8eb1fda77\\\\n2025-11-25T14:53:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eb5b5c65-6dbe-4817-a628-cca8eb1fda77 to /host/opt/cni/bin/\\\\n2025-11-25T14:53:11Z [verbose] multus-daemon started\\\\n2025-11-25T14:53:11Z [verbose] Readiness Indicator file check\\\\n2025-11-25T14:53:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbntn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mwdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.725990 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" err="failed to patch status 
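The terminated lastState above preserves multus's own exit message: the daemon came up at 14:53:11, then spent about 45 seconds polling for the default-network readiness indicator at /host/run/multus/cni/net.d/10-ovn-kubernetes.conf before giving up with "pollimmediate error: timed out waiting for the condition" (exit code 1, hence restartCount 1). Multus uses the apimachinery wait helpers for this; a plain-Go sketch of the same poll-until-deadline shape, with the path taken from the log and intervals chosen for illustration:

```go
// pollfile.go - a sketch of the poll-until-timeout pattern behind the
// "still waiting for readinessindicatorfile" / "timed out waiting for
// the condition" messages above. Intervals are illustrative.
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// waitForFile polls for path at the given interval until timeout elapses.
func waitForFile(path string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // readiness indicator file exists
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for the condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	path := "/host/run/multus/cni/net.d/10-ovn-kubernetes.conf"
	if err := waitForFile(path, time.Second, 45*time.Second); err != nil {
		fmt.Fprintf(os.Stderr, "still waiting for readiness indicator %s: %v\n", path, err)
		os.Exit(1)
	}
	fmt.Println("default network is ready:", path)
}
```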
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39baff20-1e9a-48b1-8872-155c5ad5931d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a777eee4ec1617baa0196c39314ff92a8111ee97fecdabcff51f8d23a732aed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://657c6813fefe5a305c46e51a3be90de3dda23db56498959efe41f94d05e8f54d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kclf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:08Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.798411 4806 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.798456 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.798466 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.798481 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.798494 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:08Z","lastTransitionTime":"2025-11-25T14:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.901249 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.901337 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.901350 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.901366 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:08 crc kubenswrapper[4806]: I1125 14:54:08.901378 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:08Z","lastTransitionTime":"2025-11-25T14:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.003247 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.003290 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.003301 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.003333 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.003343 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:09Z","lastTransitionTime":"2025-11-25T14:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.089087 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.089139 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:54:09 crc kubenswrapper[4806]: E1125 14:54:09.089212 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:54:09 crc kubenswrapper[4806]: E1125 14:54:09.089282 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.089110 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:09 crc kubenswrapper[4806]: E1125 14:54:09.089393 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.105397 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.105430 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.105439 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.105452 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.105461 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:09Z","lastTransitionTime":"2025-11-25T14:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.208375 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.208478 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.208487 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.208504 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.208514 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:09Z","lastTransitionTime":"2025-11-25T14:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.311251 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.311288 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.311304 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.311343 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.311355 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:09Z","lastTransitionTime":"2025-11-25T14:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.414651 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.414768 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.414793 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.414822 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.414845 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:09Z","lastTransitionTime":"2025-11-25T14:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
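The NotReady message names the directory the network-readiness check is watching: /etc/kubernetes/cni/net.d/. Until the network plugin (here OVN-Kubernetes, delivered via multus) writes a config there, the condition keeps re-asserting on every status tick. A sketch that mirrors just the "is any config file present" part of that check; the real kubelet/CRI-O readiness logic does more than glob a directory:

```go
// cnicheck.go - a sketch: look for CNI network configs in the directory
// named by the NetworkPluginNotReady message above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // from the log message
	var found []string
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(confDir, pat))
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		found = append(found, matches...)
	}
	if len(found) == 0 {
		fmt.Printf("no CNI configuration file in %s. Has your network provider started?\n", confDir)
		os.Exit(1)
	}
	for _, f := range found {
		fmt.Println("found:", f)
	}
}
```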
Has your network provider started?"} Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.516218 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69wls_0fff40d8-fd9f-49da-953f-89894b4ef3a1/ovnkube-controller/3.log" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.516590 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.516650 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.516700 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.516717 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.516735 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:09Z","lastTransitionTime":"2025-11-25T14:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.520286 4806 scope.go:117] "RemoveContainer" containerID="ae1b49fe171571509d8fd7d94ba703e20354f20445a4d493b22eb1d6a1649368" Nov 25 14:54:09 crc kubenswrapper[4806]: E1125 14:54:09.520529 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-69wls_openshift-ovn-kubernetes(0fff40d8-fd9f-49da-953f-89894b4ef3a1)\"" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.534180 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
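The CrashLoopBackOff entry above shows the kubelet refusing to restart ovnkube-controller for 40s (it has already parsed that container's 3.log). Assuming the commonly documented restart back-off schedule (start at 10s, double per crash, cap at 5m; the exact schedule is an assumption, not something this log states), 40s would correspond to the third consecutive crash:

```go
// backoff.go - a sketch of an assumed exponential restart back-off:
// 10s initial delay, doubling per crash, capped at 5 minutes.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 10 * time.Second        // assumed initial back-off
	const maxDelay = 5 * time.Minute // assumed cap
	for crash := 1; crash <= 6; crash++ {
		fmt.Printf("crash %d -> back-off %s\n", crash, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```

Under those assumptions the sequence runs 10s, 20s, 40s, 1m20s, 2m40s, 5m, matching the "back-off 40s" in the entry above.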
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"418e7888-2ed7-4d42-9100-527cff656249\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41d39b5cec8b13a29be4b5cc55488b94bcb5a8882baebe3dd1b4783116e0d745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a13f5b656df38c8be5558398c2d7b88f04a8c892edbd2cb06516aa94b3d4c71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c67538dcc66b71639bef32e5a359d899aeffb45958b74fce7d7c09f0874f59cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9fb4fcadb881a8d1f35ac8df4c8b7654c07ea0c5ab061ef99c1396b9c1e76b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db9fb4fcadb881a8d1f35ac8df4c8b7654c07ea0c5ab061ef99c1396b9c1e76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.547630 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5caad4136ada520798bb17c547f59cb7ec6e5310c1e4737bf82dc97e1b992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:09Z is after 
2025-08-24T17:21:41Z" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.561037 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08d31864859b0077dd2e586c093eb3865537fd5567ad2b55ff61ba39cd3ee56e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b60a5efe869e1d377161b70ad123fbf1bc06d0089c96eaffb7ae8129a796c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.575735 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"228a80dc-3be5-4125-9d07-c8eb262a0eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fb517d9c8fca06d95f26ed65bbc78b53f6c555870af6ebd15afe2d5177f2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zt8m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.586141 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jcq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9137647d-1ca0-49be-b482-8d04428e5325\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae80eda0f08f87065d8a30cf17d80f2be06e3b4a812798747aaed9cf6be1e82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dttnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jcq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.599544 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lsrxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e22ad0-2903-4ed0-94ad-40d713f99c9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cz9hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cz9hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lsrxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.612751 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47d8c7e405f12dbf2de2c53f2dc4309d02278887a2666a9ec491d490c5ce3f89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.618869 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.619071 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.619175 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.619347 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.619478 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:09Z","lastTransitionTime":"2025-11-25T14:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.625357 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.635753 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5lhpk" err="failed to patch status 
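Several of these patches (networking-console-plugin above, check-endpoints and network-check-target below) carry a synthesized lastState: exitCode 137 with reason ContainerStatusUnknown and the message "The container could not be located when the pod was deleted", meaning the kubelet filled in a placeholder for a container it can no longer find rather than a real exit status. By the usual 128+N shell convention, 137 reads as signal 9 (SIGKILL), which is why it serves as the stock placeholder; a trivial decoding sketch:

```go
// exitcode.go - a sketch: interpret the exitCode 137 seen in the
// ContainerStatusUnknown lastState entries above. By the 128+N
// convention, an exit code above 128 means death by signal N.
package main

import "fmt"

func main() {
	exitCode := 137 // from the lastState.terminated entries in this log
	if exitCode > 128 {
		fmt.Printf("exit code %d => killed by signal %d\n", exitCode, exitCode-128)
	} else {
		fmt.Printf("exit code %d => normal exit status\n", exitCode)
	}
}
```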
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"57550f59-b31f-43c1-adca-565f246d4083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b926db706959d0dcd4f9d9366913105ae3b70b4cb2da018ed9e778ceb84cc5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfx48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5lhpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.650095 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a68684-5473-4e76-bdb6-bc38a8640fed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae860c1b5905b9a0634efdd777741a3837e3e3cae53fd559afcc23ceeeabbfe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb9640bb56460bb7d1ed40effc6fa3cada4ee5369407b11a14c4a364b5b5c44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47127ed86ce2e9c91b3876f54c95e0df6039f549eaff06c28a9922f0b660aa53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560a6654dd1da05abbf52bb38ec0fd2ea0108fabdc14d4775dc3f8334cc9a1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.663715 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.678625 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.698885 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fff40d8-fd9f-49da-953f-89894b4ef3a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae1b49fe171571509d8fd7d94ba703e20354f204
45a4d493b22eb1d6a1649368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae1b49fe171571509d8fd7d94ba703e20354f20445a4d493b22eb1d6a1649368\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:54:08Z\\\",\\\"message\\\":\\\"0 2025-02-23 05:23:11 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:oauth-openshift] map[operator.openshift.io/spec-hash:d9e6d53076d47ab2d123d8b1ba8ec6543488d973dcc4e02349493cd1c33bce83 service.alpha.openshift.io/serving-cert-secret-name:v4-0-config-system-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: oauth-openshift,},ClusterIP:10.217.4.222,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.222],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI1125 14:54:08.009395 6843 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI1125 14:54:08.009407 6843 ovn.go:134] Ensuring zone local for\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-69wls_openshift-ovn-kubernetes(0fff40d8-fd9f-49da-953f-89894b4ef3a1)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69wls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.712888 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:52:51.509929 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:52:51.511569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2779674049/tls.crt::/tmp/serving-cert-2779674049/tls.key\\\\\\\"\\\\nI1125 14:53:06.763374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:53:06.765539 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:53:06.765558 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:53:06.765579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:53:06.765584 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:53:06.773062 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:53:06.773084 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773089 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:53:06.773097 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:53:06.773100 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:53:06.773103 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:53:06.773329 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:53:06.776065 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.721898 4806 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.721928 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.721954 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.721971 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.721980 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:09Z","lastTransitionTime":"2025-11-25T14:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.726902 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mwdqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a4c6d7aeb19206fd79e28c558467bda58d58c4118d27bb9aeb9de68a55a67a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0cb67ea6e13645bb6513fcd0d90197317fd39f29f25deddf11e1110572e1986\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:53:56Z\\\",\\\"message\\\":\\\"2025-11-25T14:53:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eb5b5c65-6dbe-4817-a628-cca8eb1fda77\\\\n2025-11-25T14:53:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eb5b5c65-6dbe-4817-a628-cca8eb1fda77 to /host/opt/cni/bin/\\\\n2025-11-25T14:53:11Z [verbose] multus-daemon started\\\\n2025-11-25T14:53:11Z [verbose] Readiness Indicator file check\\\\n2025-11-25T14:53:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbntn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mwdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.738538 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39baff20-1e9a-48b1-8872-155c5ad5931d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a777eee4ec1617baa0196c39314ff92a8111ee97fecdabcff51f8d23a732aed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://657c6813fefe5a305c46e51a3be90de3dda23db56498959efe41f94d05e8f54d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kclf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.749992 4806 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2mmdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a29a188-9022-41a4-8f1f-4a3274ffe3f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dc124b078217075b4e38f7b144af41d258e32283392fe2909cf227a9902012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhrx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df53d52334de68ebecc9283d36720b9734a8a410af99e1ae3566979e52cb6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhrx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2mmdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:09Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.760689 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2fd4bb5-248d-441b-b551-714801eed504\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1444bd5d571c97e876be8a7806aa59a9e6777f78f11089042f1961ca237be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600d8f6e0a57ecf028d67a5d43177b039da18658131bbe103857578e826661a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600d8f6e0a57ecf028d67a5d43177b039da18658131bbe103857578e826661a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:09Z is after 
2025-08-24T17:21:41Z" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.824848 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.824909 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.824920 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.824935 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.824943 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:09Z","lastTransitionTime":"2025-11-25T14:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.927744 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.927791 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.927803 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.927820 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:09 crc kubenswrapper[4806]: I1125 14:54:09.927832 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:09Z","lastTransitionTime":"2025-11-25T14:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.029891 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.029925 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.029934 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.029947 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.029956 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:10Z","lastTransitionTime":"2025-11-25T14:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.089011 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:54:10 crc kubenswrapper[4806]: E1125 14:54:10.089147 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.132541 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.132849 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.132861 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.132877 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.132887 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:10Z","lastTransitionTime":"2025-11-25T14:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.235306 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.235371 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.235388 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.235413 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.235423 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:10Z","lastTransitionTime":"2025-11-25T14:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.337724 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.337764 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.337775 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.337792 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.337802 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:10Z","lastTransitionTime":"2025-11-25T14:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.439912 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.439944 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.439951 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.439964 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.439973 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:10Z","lastTransitionTime":"2025-11-25T14:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.541651 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.541703 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.541725 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.541748 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.541767 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:10Z","lastTransitionTime":"2025-11-25T14:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.644483 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.644742 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.644813 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.644879 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.644940 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:10Z","lastTransitionTime":"2025-11-25T14:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.747691 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.748204 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.748305 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.748438 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.748510 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:10Z","lastTransitionTime":"2025-11-25T14:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.767414 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:10 crc kubenswrapper[4806]: E1125 14:54:10.767593 4806 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 14:54:10 crc kubenswrapper[4806]: E1125 14:54:10.767661 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 14:55:14.767643876 +0000 UTC m=+147.419786287 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.851679 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.851955 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.852044 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.852132 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.852222 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:10Z","lastTransitionTime":"2025-11-25T14:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.868064 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:54:10 crc kubenswrapper[4806]: E1125 14:54:10.868181 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:14.86815523 +0000 UTC m=+147.520297641 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.868238 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.868283 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.868358 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:54:10 crc kubenswrapper[4806]: E1125 14:54:10.868392 4806 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 14:54:10 crc kubenswrapper[4806]: E1125 14:54:10.868430 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 14:54:10 crc kubenswrapper[4806]: E1125 14:54:10.868445 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 14:54:10 crc kubenswrapper[4806]: E1125 14:54:10.868449 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 14:54:10 crc kubenswrapper[4806]: E1125 14:54:10.868458 4806 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 14:54:10 crc kubenswrapper[4806]: E1125 14:54:10.868467 4806 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:54:10 crc kubenswrapper[4806]: E1125 14:54:10.868468 4806 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:54:10 crc kubenswrapper[4806]: E1125 14:54:10.868448 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 14:55:14.868435798 +0000 UTC m=+147.520578209 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 14:54:10 crc kubenswrapper[4806]: E1125 14:54:10.868514 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 14:55:14.868503069 +0000 UTC m=+147.520645480 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:54:10 crc kubenswrapper[4806]: E1125 14:54:10.868526 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 14:55:14.86852127 +0000 UTC m=+147.520663681 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.955295 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.955373 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.955383 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.955397 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:10 crc kubenswrapper[4806]: I1125 14:54:10.955406 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:10Z","lastTransitionTime":"2025-11-25T14:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.057923 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.057993 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.058005 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.058044 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.058057 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:11Z","lastTransitionTime":"2025-11-25T14:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.088732 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.088783 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.088817 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:54:11 crc kubenswrapper[4806]: E1125 14:54:11.088871 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:54:11 crc kubenswrapper[4806]: E1125 14:54:11.088957 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f" Nov 25 14:54:11 crc kubenswrapper[4806]: E1125 14:54:11.089162 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.161226 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.161930 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.161962 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.161989 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.162003 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:11Z","lastTransitionTime":"2025-11-25T14:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.264476 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.264543 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.264556 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.264576 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.264591 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:11Z","lastTransitionTime":"2025-11-25T14:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.368121 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.368172 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.368185 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.368205 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.368218 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:11Z","lastTransitionTime":"2025-11-25T14:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.470554 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.470617 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.470636 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.470664 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.470686 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:11Z","lastTransitionTime":"2025-11-25T14:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.573508 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.573560 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.573573 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.573594 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.573606 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:11Z","lastTransitionTime":"2025-11-25T14:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.676201 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.676250 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.676259 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.676275 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.676285 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:11Z","lastTransitionTime":"2025-11-25T14:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.779673 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.779714 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.779725 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.779743 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.779753 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:11Z","lastTransitionTime":"2025-11-25T14:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.882243 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.882287 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.882297 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.882334 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.882354 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:11Z","lastTransitionTime":"2025-11-25T14:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.984288 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.984352 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.984363 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.984378 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:11 crc kubenswrapper[4806]: I1125 14:54:11.984387 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:11Z","lastTransitionTime":"2025-11-25T14:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:12 crc kubenswrapper[4806]: I1125 14:54:12.086818 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:12 crc kubenswrapper[4806]: I1125 14:54:12.086902 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:12 crc kubenswrapper[4806]: I1125 14:54:12.086917 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:12 crc kubenswrapper[4806]: I1125 14:54:12.086940 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:12 crc kubenswrapper[4806]: I1125 14:54:12.086953 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:12Z","lastTransitionTime":"2025-11-25T14:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:12 crc kubenswrapper[4806]: I1125 14:54:12.089122 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:54:12 crc kubenswrapper[4806]: E1125 14:54:12.089233 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:54:12 crc kubenswrapper[4806]: I1125 14:54:12.189769 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:12 crc kubenswrapper[4806]: I1125 14:54:12.189814 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:12 crc kubenswrapper[4806]: I1125 14:54:12.189823 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:12 crc kubenswrapper[4806]: I1125 14:54:12.189838 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:12 crc kubenswrapper[4806]: I1125 14:54:12.189848 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:12Z","lastTransitionTime":"2025-11-25T14:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:12 crc kubenswrapper[4806]: I1125 14:54:12.292349 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:12 crc kubenswrapper[4806]: I1125 14:54:12.292403 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:12 crc kubenswrapper[4806]: I1125 14:54:12.292416 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:12 crc kubenswrapper[4806]: I1125 14:54:12.292434 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:12 crc kubenswrapper[4806]: I1125 14:54:12.292443 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:12Z","lastTransitionTime":"2025-11-25T14:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:12 crc kubenswrapper[4806]: I1125 14:54:12.394732 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:12 crc kubenswrapper[4806]: I1125 14:54:12.394785 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:12 crc kubenswrapper[4806]: I1125 14:54:12.394797 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:12 crc kubenswrapper[4806]: I1125 14:54:12.394814 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:12 crc kubenswrapper[4806]: I1125 14:54:12.394829 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:12Z","lastTransitionTime":"2025-11-25T14:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:12 crc kubenswrapper[4806]: I1125 14:54:12.497187 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:12 crc kubenswrapper[4806]: I1125 14:54:12.497470 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:12 crc kubenswrapper[4806]: I1125 14:54:12.497549 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:12 crc kubenswrapper[4806]: I1125 14:54:12.497620 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:12 crc kubenswrapper[4806]: I1125 14:54:12.497681 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:12Z","lastTransitionTime":"2025-11-25T14:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:12 crc kubenswrapper[4806]: I1125 14:54:12.600482 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:12 crc kubenswrapper[4806]: I1125 14:54:12.600779 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:12 crc kubenswrapper[4806]: I1125 14:54:12.600881 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:12 crc kubenswrapper[4806]: I1125 14:54:12.600966 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:12 crc kubenswrapper[4806]: I1125 14:54:12.601030 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:12Z","lastTransitionTime":"2025-11-25T14:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:12 crc kubenswrapper[4806]: I1125 14:54:12.703889 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:12 crc kubenswrapper[4806]: I1125 14:54:12.704374 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:12 crc kubenswrapper[4806]: I1125 14:54:12.704476 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:12 crc kubenswrapper[4806]: I1125 14:54:12.704585 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:12 crc kubenswrapper[4806]: I1125 14:54:12.704649 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:12Z","lastTransitionTime":"2025-11-25T14:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:12 crc kubenswrapper[4806]: I1125 14:54:12.806870 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:12 crc kubenswrapper[4806]: I1125 14:54:12.806930 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:12 crc kubenswrapper[4806]: I1125 14:54:12.806938 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:12 crc kubenswrapper[4806]: I1125 14:54:12.806953 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:12 crc kubenswrapper[4806]: I1125 14:54:12.806979 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:12Z","lastTransitionTime":"2025-11-25T14:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:12 crc kubenswrapper[4806]: I1125 14:54:12.909166 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:12 crc kubenswrapper[4806]: I1125 14:54:12.909218 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:12 crc kubenswrapper[4806]: I1125 14:54:12.909233 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:12 crc kubenswrapper[4806]: I1125 14:54:12.909252 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:12 crc kubenswrapper[4806]: I1125 14:54:12.909263 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:12Z","lastTransitionTime":"2025-11-25T14:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.011730 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.011765 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.011775 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.011791 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.011801 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:13Z","lastTransitionTime":"2025-11-25T14:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.088937 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.088934 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:13 crc kubenswrapper[4806]: E1125 14:54:13.089091 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:54:13 crc kubenswrapper[4806]: E1125 14:54:13.089172 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.088958 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:54:13 crc kubenswrapper[4806]: E1125 14:54:13.089263 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f" Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.113663 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.113697 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.113709 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.113725 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.113734 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:13Z","lastTransitionTime":"2025-11-25T14:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.216150 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.216191 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.216202 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.216220 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.216231 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:13Z","lastTransitionTime":"2025-11-25T14:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.318196 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.318244 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.318254 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.318275 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.318287 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:13Z","lastTransitionTime":"2025-11-25T14:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.424385 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.424445 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.424458 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.424478 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.424491 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:13Z","lastTransitionTime":"2025-11-25T14:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.527304 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.527597 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.527694 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.527792 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.528065 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:13Z","lastTransitionTime":"2025-11-25T14:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.630552 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.630595 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.630605 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.630618 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.630629 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:13Z","lastTransitionTime":"2025-11-25T14:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.732343 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.732389 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.732401 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.732416 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.732428 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:13Z","lastTransitionTime":"2025-11-25T14:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.835049 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.836068 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.836155 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.836226 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.836299 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:13Z","lastTransitionTime":"2025-11-25T14:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.938641 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.938935 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.939083 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.939220 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:13 crc kubenswrapper[4806]: I1125 14:54:13.939302 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:13Z","lastTransitionTime":"2025-11-25T14:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:14 crc kubenswrapper[4806]: I1125 14:54:14.042148 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:14 crc kubenswrapper[4806]: I1125 14:54:14.042194 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:14 crc kubenswrapper[4806]: I1125 14:54:14.042202 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:14 crc kubenswrapper[4806]: I1125 14:54:14.042218 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:14 crc kubenswrapper[4806]: I1125 14:54:14.042227 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:14Z","lastTransitionTime":"2025-11-25T14:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:14 crc kubenswrapper[4806]: I1125 14:54:14.088729 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:54:14 crc kubenswrapper[4806]: E1125 14:54:14.089172 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:54:14 crc kubenswrapper[4806]: I1125 14:54:14.144221 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:14 crc kubenswrapper[4806]: I1125 14:54:14.144256 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:14 crc kubenswrapper[4806]: I1125 14:54:14.144264 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:14 crc kubenswrapper[4806]: I1125 14:54:14.144277 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:14 crc kubenswrapper[4806]: I1125 14:54:14.144286 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:14Z","lastTransitionTime":"2025-11-25T14:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:14 crc kubenswrapper[4806]: I1125 14:54:14.246530 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:14 crc kubenswrapper[4806]: I1125 14:54:14.246574 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:14 crc kubenswrapper[4806]: I1125 14:54:14.246584 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:14 crc kubenswrapper[4806]: I1125 14:54:14.246599 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:14 crc kubenswrapper[4806]: I1125 14:54:14.246608 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:14Z","lastTransitionTime":"2025-11-25T14:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:14 crc kubenswrapper[4806]: I1125 14:54:14.349253 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:14 crc kubenswrapper[4806]: I1125 14:54:14.349595 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:14 crc kubenswrapper[4806]: I1125 14:54:14.349667 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:14 crc kubenswrapper[4806]: I1125 14:54:14.349741 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:14 crc kubenswrapper[4806]: I1125 14:54:14.349801 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:14Z","lastTransitionTime":"2025-11-25T14:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:14 crc kubenswrapper[4806]: I1125 14:54:14.451907 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:14 crc kubenswrapper[4806]: I1125 14:54:14.451975 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:14 crc kubenswrapper[4806]: I1125 14:54:14.451992 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:14 crc kubenswrapper[4806]: I1125 14:54:14.452016 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:14 crc kubenswrapper[4806]: I1125 14:54:14.452033 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:14Z","lastTransitionTime":"2025-11-25T14:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:14 crc kubenswrapper[4806]: I1125 14:54:14.555112 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:14 crc kubenswrapper[4806]: I1125 14:54:14.555166 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:14 crc kubenswrapper[4806]: I1125 14:54:14.555176 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:14 crc kubenswrapper[4806]: I1125 14:54:14.555192 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:14 crc kubenswrapper[4806]: I1125 14:54:14.555203 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:14Z","lastTransitionTime":"2025-11-25T14:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:14 crc kubenswrapper[4806]: I1125 14:54:14.657235 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:14 crc kubenswrapper[4806]: I1125 14:54:14.657284 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:14 crc kubenswrapper[4806]: I1125 14:54:14.657295 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:14 crc kubenswrapper[4806]: I1125 14:54:14.657328 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:14 crc kubenswrapper[4806]: I1125 14:54:14.657337 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:14Z","lastTransitionTime":"2025-11-25T14:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:14 crc kubenswrapper[4806]: I1125 14:54:14.760200 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:14 crc kubenswrapper[4806]: I1125 14:54:14.760597 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:14 crc kubenswrapper[4806]: I1125 14:54:14.760729 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:14 crc kubenswrapper[4806]: I1125 14:54:14.760862 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:14 crc kubenswrapper[4806]: I1125 14:54:14.760995 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:14Z","lastTransitionTime":"2025-11-25T14:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:14 crc kubenswrapper[4806]: I1125 14:54:14.864068 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:14 crc kubenswrapper[4806]: I1125 14:54:14.864628 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:14 crc kubenswrapper[4806]: I1125 14:54:14.864696 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:14 crc kubenswrapper[4806]: I1125 14:54:14.864786 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:14 crc kubenswrapper[4806]: I1125 14:54:14.864853 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:14Z","lastTransitionTime":"2025-11-25T14:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
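Editor's note: the condition={...} object on the setters.go:603 lines is the node's Ready condition exactly as the kubelet publishes it. A minimal sketch of how that payload decodes; the local struct below is a stand-in whose fields mirror, but are not, the real k8s.io/api/core/v1 NodeCondition type:

```go
// Decode the condition={...} payload from the "Node became not ready" lines.
package main

import (
	"encoding/json"
	"fmt"
)

// nodeCondition is a hand-rolled stand-in for v1.NodeCondition (assumption:
// we only need the string fields that appear in the log payload).
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Payload copied verbatim from the 14:54:14.144286 setters.go:603 entry.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:14Z","lastTransitionTime":"2025-11-25T14:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`

	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	// Prints: Ready=False (KubeletNotReady): container runtime network not ready: ...
	fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
}
```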
Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.088760 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 14:54:15 crc kubenswrapper[4806]: E1125 14:54:15.088941 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.089100 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh"
Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.089159 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
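Editor's note: every failure so far traces to one root cause: the kubelet finds no CNI network configuration, so the node stays NotReady and no new pod sandboxes can be created. A minimal sketch for confirming that state from the node; the directory comes from the log message itself, while the accepted file extensions are an assumption based on common CNI conventions, approximating rather than reproducing the kubelet's own readiness check:

```go
// Check whether any CNI network configuration exists in the conf dir that
// the kubelet error message names.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // taken from the log lines above
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	var confs []string
	for _, e := range entries {
		// Assumption: the usual CNI config extensions.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			confs = append(confs, e.Name())
		}
	}
	if len(confs) == 0 {
		// The state this log shows: the network provider has not yet
		// written its configuration.
		fmt.Println("no CNI configuration file found; network provider not started")
		return
	}
	fmt.Println("CNI configurations:", confs)
}
```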
Nov 25 14:54:15 crc kubenswrapper[4806]: E1125 14:54:15.089336 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f"
Nov 25 14:54:15 crc kubenswrapper[4806]: E1125 14:54:15.089563 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
[... the five-entry heartbeat block repeats unchanged at 14:54:15.172, 14:54:15.275, and 14:54:15.377 ...]
Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.380587 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.380611 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.380622 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.380634 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.380644 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:15Z","lastTransitionTime":"2025-11-25T14:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:15 crc kubenswrapper[4806]: E1125 14:54:15.391204 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e0f8c346-5c5c-4b9e-86cb-75f930f1dadc\\\",\\\"systemUUID\\\":\\\"c9d50cce-a734-4456-ad77-ec687d096f9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:15Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.394255 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.394422 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.394513 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.394601 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.394680 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:15Z","lastTransitionTime":"2025-11-25T14:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:15 crc kubenswrapper[4806]: E1125 14:54:15.405973 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e0f8c346-5c5c-4b9e-86cb-75f930f1dadc\\\",\\\"systemUUID\\\":\\\"c9d50cce-a734-4456-ad77-ec687d096f9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:15Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.409696 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.409830 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.409913 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.410004 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.410086 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:15Z","lastTransitionTime":"2025-11-25T14:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:15 crc kubenswrapper[4806]: E1125 14:54:15.421817 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e0f8c346-5c5c-4b9e-86cb-75f930f1dadc\\\",\\\"systemUUID\\\":\\\"c9d50cce-a734-4456-ad77-ec687d096f9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:15Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.425045 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.425087 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.425107 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.425128 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.425139 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:15Z","lastTransitionTime":"2025-11-25T14:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:15 crc kubenswrapper[4806]: E1125 14:54:15.437819 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e0f8c346-5c5c-4b9e-86cb-75f930f1dadc\\\",\\\"systemUUID\\\":\\\"c9d50cce-a734-4456-ad77-ec687d096f9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:15Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.441192 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.441230 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
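Note on the patch rejection above: the failure does not originate in the kubelet. The API server forwards every node-status update to the validating webhook "node.network-node-identity.openshift.io" at https://127.0.0.1:9743/node, and that endpoint is serving a TLS certificate that expired on 2025-08-24T17:21:41Z, well before the node's current clock time of 2025-11-25T14:54:15Z. Until that certificate is rotated, every status patch for node "crc" fails with the same x509 error. A minimal Go probe along these lines (a hypothetical helper, not part of the cluster; verification is deliberately skipped so the expired certificate can still be read) confirms what the endpoint is serving:

package main

import (
    "crypto/tls"
    "fmt"
    "time"
)

func main() {
    // Endpoint taken from the webhook error above; everything else is illustrative.
    conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
        InsecureSkipVerify: true, // inspect the certificate without trusting it
    })
    if err != nil {
        fmt.Println("dial:", err)
        return
    }
    defer conn.Close()
    state := conn.ConnectionState()
    if len(state.PeerCertificates) == 0 {
        fmt.Println("no peer certificate presented")
        return
    }
    cert := state.PeerCertificates[0]
    fmt.Println("subject:  ", cert.Subject)
    fmt.Println("notBefore:", cert.NotBefore)
    fmt.Println("notAfter: ", cert.NotAfter)
    fmt.Println("expired:  ", time.Now().After(cert.NotAfter))
}

Run on the node during the window captured here, this should report notAfter 2025-08-24 17:21:41 +0000 UTC and expired true.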
event="NodeHasNoDiskPressure" Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.441274 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.441292 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.441304 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:15Z","lastTransitionTime":"2025-11-25T14:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:15 crc kubenswrapper[4806]: E1125 14:54:15.452582 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:15Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e0f8c346-5c5c-4b9e-86cb-75f930f1dadc\\\",\\\"systemUUID\\\":\\\"c9d50cce-a734-4456-ad77-ec687d096f9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:15Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:15 crc kubenswrapper[4806]: E1125 14:54:15.452701 4806 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.481246 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
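The pair of entries just above, "Error updating node status, will retry" (kubelet_node_status.go:585) ending in "Unable to update node status" err="update node status exceeds retry count" (kubelet_node_status.go:572), reflects the kubelet's bounded retry loop: each sync attempts the status patch a fixed number of times (nodeStatusUpdateRetry, 5 in the upstream kubelet) before giving up until the next sync tick, which is why the same multi-kilobyte payload appears back to back in this excerpt. A schematic sketch of that shape, a simplified stand-in rather than the actual kubelet source:

package main

import (
    "errors"
    "fmt"
)

// Matches the upstream kubelet constant of the same name.
const nodeStatusUpdateRetry = 5

// updateNodeStatus mirrors the loop behind the two messages above:
// bounded attempts, then a terminal "exceeds retry count" error.
func updateNodeStatus(try func() error) error {
    for i := 0; i < nodeStatusUpdateRetry; i++ {
        err := try()
        if err == nil {
            return nil
        }
        fmt.Println("Error updating node status, will retry:", err)
    }
    return errors.New("update node status exceeds retry count")
}

func main() {
    // Stand-in for the failing PATCH: every attempt fails, as in the log,
    // because the admission webhook rejects the request.
    err := updateNodeStatus(func() error {
        return errors.New(`failed calling webhook "node.network-node-identity.openshift.io"`)
    })
    fmt.Println(err)
}

Because the webhook rejects every attempt, the loop always ends in the "exceeds retry count" branch, and the whole cycle restarts on the next heartbeat.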
event="NodeHasSufficientMemory" Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.481296 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.481326 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.481346 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.481357 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:15Z","lastTransitionTime":"2025-11-25T14:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.583486 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.583856 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.583924 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.583990 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.584062 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:15Z","lastTransitionTime":"2025-11-25T14:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.686504 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.686545 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.686556 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.686572 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.686583 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:15Z","lastTransitionTime":"2025-11-25T14:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.788841 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.789060 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.789139 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.789208 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.789268 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:15Z","lastTransitionTime":"2025-11-25T14:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.892033 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.892082 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.892094 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.892112 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.892124 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:15Z","lastTransitionTime":"2025-11-25T14:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.994704 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.994778 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.994790 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.994807 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:15 crc kubenswrapper[4806]: I1125 14:54:15.994818 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:15Z","lastTransitionTime":"2025-11-25T14:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 25 14:54:16 crc kubenswrapper[4806]: I1125 14:54:16.088442 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 14:54:16 crc kubenswrapper[4806]: E1125 14:54:16.088660 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 14:54:16 crc kubenswrapper[4806]: I1125 14:54:16.096420 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:16 crc kubenswrapper[4806]: I1125 14:54:16.096461 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:16 crc kubenswrapper[4806]: I1125 14:54:16.096476 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:16 crc kubenswrapper[4806]: I1125 14:54:16.096525 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:16 crc kubenswrapper[4806]: I1125 14:54:16.096541 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:16Z","lastTransitionTime":"2025-11-25T14:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:16 crc kubenswrapper[4806]: I1125 14:54:16.199221 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:16 crc kubenswrapper[4806]: I1125 14:54:16.199266 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:16 crc kubenswrapper[4806]: I1125 14:54:16.199277 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:16 crc kubenswrapper[4806]: I1125 14:54:16.199293 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:16 crc kubenswrapper[4806]: I1125 14:54:16.199304 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:16Z","lastTransitionTime":"2025-11-25T14:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
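Alongside the heartbeat entries, pod_workers skips every pod that still needs a sandbox (network-check-source above; network-check-target, network-metrics-daemon, and the networking-console-plugin shortly after) because the runtime keeps reporting NetworkReady=false. That condition reduces to an empty network config directory: no network configuration file in /etc/kubernetes/cni/net.d/, which presumably gets populated once the cluster network's own pods come up (an assumption about this deployment, not something the log states). A minimal sketch of the existence check, mirroring only the spirit of the real CRI-O/libcni loader, which additionally parses and validates each file:

package main

import (
    "fmt"
    "path/filepath"
)

func main() {
    // Directory and symptom taken from the log; the extension list matches
    // what libcni-style loaders conventionally scan for.
    var files []string
    for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
        m, _ := filepath.Glob(filepath.Join("/etc/kubernetes/cni/net.d", pat))
        files = append(files, m...)
    }
    if len(files) == 0 {
        fmt.Println("no CNI configuration file found; network stays NotReady")
        return
    }
    fmt.Println("CNI config candidates:", files)
}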
Has your network provider started?"} Nov 25 14:54:16 crc kubenswrapper[4806]: I1125 14:54:16.301824 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:16 crc kubenswrapper[4806]: I1125 14:54:16.301875 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:16 crc kubenswrapper[4806]: I1125 14:54:16.301892 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:16 crc kubenswrapper[4806]: I1125 14:54:16.301910 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:16 crc kubenswrapper[4806]: I1125 14:54:16.301922 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:16Z","lastTransitionTime":"2025-11-25T14:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:16 crc kubenswrapper[4806]: I1125 14:54:16.404623 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:16 crc kubenswrapper[4806]: I1125 14:54:16.404682 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:16 crc kubenswrapper[4806]: I1125 14:54:16.404698 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:16 crc kubenswrapper[4806]: I1125 14:54:16.404721 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:16 crc kubenswrapper[4806]: I1125 14:54:16.404740 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:16Z","lastTransitionTime":"2025-11-25T14:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:16 crc kubenswrapper[4806]: I1125 14:54:16.507060 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:16 crc kubenswrapper[4806]: I1125 14:54:16.507114 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:16 crc kubenswrapper[4806]: I1125 14:54:16.507145 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:16 crc kubenswrapper[4806]: I1125 14:54:16.507164 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:16 crc kubenswrapper[4806]: I1125 14:54:16.507177 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:16Z","lastTransitionTime":"2025-11-25T14:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:16 crc kubenswrapper[4806]: I1125 14:54:16.610463 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:16 crc kubenswrapper[4806]: I1125 14:54:16.610527 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:16 crc kubenswrapper[4806]: I1125 14:54:16.610544 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:16 crc kubenswrapper[4806]: I1125 14:54:16.610569 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:16 crc kubenswrapper[4806]: I1125 14:54:16.610587 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:16Z","lastTransitionTime":"2025-11-25T14:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:16 crc kubenswrapper[4806]: I1125 14:54:16.712803 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:16 crc kubenswrapper[4806]: I1125 14:54:16.712839 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:16 crc kubenswrapper[4806]: I1125 14:54:16.712848 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:16 crc kubenswrapper[4806]: I1125 14:54:16.712861 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:16 crc kubenswrapper[4806]: I1125 14:54:16.712870 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:16Z","lastTransitionTime":"2025-11-25T14:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:16 crc kubenswrapper[4806]: I1125 14:54:16.815404 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:16 crc kubenswrapper[4806]: I1125 14:54:16.815477 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:16 crc kubenswrapper[4806]: I1125 14:54:16.815499 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:16 crc kubenswrapper[4806]: I1125 14:54:16.815527 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:16 crc kubenswrapper[4806]: I1125 14:54:16.815542 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:16Z","lastTransitionTime":"2025-11-25T14:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:16 crc kubenswrapper[4806]: I1125 14:54:16.917470 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:16 crc kubenswrapper[4806]: I1125 14:54:16.917516 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:16 crc kubenswrapper[4806]: I1125 14:54:16.917524 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:16 crc kubenswrapper[4806]: I1125 14:54:16.917539 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:16 crc kubenswrapper[4806]: I1125 14:54:16.917548 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:16Z","lastTransitionTime":"2025-11-25T14:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.020304 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.020390 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.020407 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.020432 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.020447 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:17Z","lastTransitionTime":"2025-11-25T14:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.088398 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.088438 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:54:17 crc kubenswrapper[4806]: E1125 14:54:17.088531 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:54:17 crc kubenswrapper[4806]: E1125 14:54:17.088706 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f" Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.088919 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:17 crc kubenswrapper[4806]: E1125 14:54:17.089144 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.123128 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.123210 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.123219 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.123234 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.123244 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:17Z","lastTransitionTime":"2025-11-25T14:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.225536 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.225614 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.225625 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.225669 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.225681 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:17Z","lastTransitionTime":"2025-11-25T14:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.328023 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.328063 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.328073 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.328088 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.328099 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:17Z","lastTransitionTime":"2025-11-25T14:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.430541 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.430581 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.430590 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.430604 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.430612 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:17Z","lastTransitionTime":"2025-11-25T14:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.532649 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.532685 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.532694 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.532707 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.532715 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:17Z","lastTransitionTime":"2025-11-25T14:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.635131 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.635179 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.635214 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.635231 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.635241 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:17Z","lastTransitionTime":"2025-11-25T14:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.738192 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.738485 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.738628 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.738734 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.738808 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:17Z","lastTransitionTime":"2025-11-25T14:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.841054 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.841115 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.841124 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.841140 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.841152 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:17Z","lastTransitionTime":"2025-11-25T14:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.943726 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.943789 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.943802 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.943820 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:17 crc kubenswrapper[4806]: I1125 14:54:17.943829 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:17Z","lastTransitionTime":"2025-11-25T14:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.046270 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.046341 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.046358 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.046380 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.046391 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:18Z","lastTransitionTime":"2025-11-25T14:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.089445 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:54:18 crc kubenswrapper[4806]: E1125 14:54:18.089685 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.111913 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c
744b8ff0333513\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:52:51.509929 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:52:51.511569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2779674049/tls.crt::/tmp/serving-cert-2779674049/tls.key\\\\\\\"\\\\nI1125 14:53:06.763374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:53:06.765539 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:53:06.765558 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:53:06.765579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:53:06.765584 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:53:06.773062 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:53:06.773084 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773089 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:53:06.773097 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:53:06.773100 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:53:06.773103 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:53:06.773329 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:53:06.776065 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:18Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.127948 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:18Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.143201 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:18Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.149058 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.149412 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.149540 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.149645 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.149754 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:18Z","lastTransitionTime":"2025-11-25T14:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.167332 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fff40d8-fd9f-49da-953f-89894b4ef3a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae1b49fe171571509d8fd7d94ba703e20354f20445a4d493b22eb1d6a1649368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae1b49fe171571509d8fd7d94ba703e20354f20445a4d493b22eb1d6a1649368\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:54:08Z\\\",\\\"message\\\":\\\"0 2025-02-23 05:23:11 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:oauth-openshift] map[operator.openshift.io/spec-hash:d9e6d53076d47ab2d123d8b1ba8ec6543488d973dcc4e02349493cd1c33bce83 service.alpha.openshift.io/serving-cert-secret-name:v4-0-config-system-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: oauth-openshift,},ClusterIP:10.217.4.222,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.222],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI1125 14:54:08.009395 6843 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI1125 14:54:08.009407 6843 ovn.go:134] Ensuring zone local for\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-69wls_openshift-ovn-kubernetes(0fff40d8-fd9f-49da-953f-89894b4ef3a1)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69wls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:18Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.181334 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2fd4bb5-248d-441b-b551-714801eed504\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1444bd5d571c97e876be8a7806aa59a9e6777f78f11089042f1961ca237be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600d8f6e0a57ecf028d67a5d43177b039da18658131bbe
103857578e826661a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600d8f6e0a57ecf028d67a5d43177b039da18658131bbe103857578e826661a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:18Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.194956 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mwdqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a4c6d7aeb19206fd79e28c558467bda58d58c4118d27bb9aeb9de68a55a67a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0cb67ea6e13645bb6513fcd0d90197317fd39f29f25deddf11e1110572e1986\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:53:56Z\\\",\\\"message\\\":\\\"2025-11-25T14:53:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eb5b5c65-6dbe-4817-a628-cca8eb1fda77\\\\n2025-11-25T14:53:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eb5b5c65-6dbe-4817-a628-cca8eb1fda77 to /host/opt/cni/bin/\\\\n2025-11-25T14:53:11Z [verbose] multus-daemon started\\\\n2025-11-25T14:53:11Z [verbose] Readiness 
Indicator file check\\\\n2025-11-25T14:53:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbntn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mwdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:18Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.211024 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39baff20-1e9a-48b1-8872-155c5ad5931d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a777eee4ec1617baa0196c39314ff92a8111ee97fecdabcff51f8d23a732aed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://657c6813fefe5a305c46e51a3be90de3dda23db56498959efe41f94d05e8f54d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kclf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:18Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.224577 4806 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2mmdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a29a188-9022-41a4-8f1f-4a3274ffe3f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dc124b078217075b4e38f7b144af41d258e32283392fe2909cf227a9902012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhrx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df53d52334de68ebecc9283d36720b9734a8a410af99e1ae3566979e52cb6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhrx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2mmdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:18Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.236660 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lsrxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e22ad0-2903-4ed0-94ad-40d713f99c9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cz9hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cz9hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lsrxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:18Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.252097 4806 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"418e7888-2ed7-4d42-9100-527cff656249\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41d39b5cec8b13a29be4b5cc55488b94bcb5a8882baebe3dd1b4783116e0d745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a13f5b656df38c8be5558398c2d7b88f04a8c892edbd2cb06516aa94b3d4c71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c67538dcc66b71639bef32e5a359d899aeffb45958b74fce7d7c09f0874f59cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://db9fb4fcadb881a8d1f35ac8df4c8b7654c07ea0c5ab061ef99c1396b9c1e76b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db9fb4fcadb881a8d1f35ac8df4c8b7654c07ea0c5ab061ef99c1396b9c1e76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:18Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.254004 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.254051 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.254064 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.254085 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.254095 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:18Z","lastTransitionTime":"2025-11-25T14:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.266881 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5caad4136ada520798bb17c547f59cb7ec6e5310c1e4737bf82dc97e1b992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:18Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.281641 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08d31864859b0077dd2e586c093eb3865537fd5567ad2b55ff61ba39cd3ee56e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b60a5efe869e1d377161b70ad123fbf1bc06d0089c96eaffb7ae8129a796c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:18Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.297710 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"228a80dc-3be5-4125-9d07-c8eb262a0eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fb517d9c8fca06d95f26ed65bbc78b53f6c555870af6ebd15afe2d5177f2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zt8m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:18Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.310596 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jcq2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9137647d-1ca0-49be-b482-8d04428e5325\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae80eda0f08f87065d8a30cf17d80f2be06e3b4a812798747aaed9cf6be1e82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dttnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jcq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:18Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.323873 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a68684-5473-4e76-bdb6-bc38a8640fed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae860c1b5905b9a0634efdd777741a3837e3e3cae53fd559afcc23ceeeabbfe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb9640bb56460bb7d1ed40effc6fa3cada4ee5369407b11a14c4a364b5b5c44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47127ed86ce2e9c91b3876f54c95e0df6039f549eaff06c28a9922f0b660aa53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560a6654dd1da05abbf52bb38ec0fd2ea0108fabdc14d4775dc3f8334cc9a1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:18Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.336995 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47d8c7e405f12dbf2de2c53f2dc4309d02278887a2666a9ec491d490c5ce3f89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-25T14:54:18Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.351440 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:18Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.356530 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.356592 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.356605 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.356625 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.356639 4806 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:18Z","lastTransitionTime":"2025-11-25T14:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.362345 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5lhpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57550f59-b31f-43c1-adca-565f246d4083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b926db706959d0dcd4f9d9366913105ae3b70b4cb2da018ed9e778ceb84cc5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfx48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5lhpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:18Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.459690 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.459998 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.460122 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.460248 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.460384 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:18Z","lastTransitionTime":"2025-11-25T14:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.562228 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.562289 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.562300 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.562329 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.562339 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:18Z","lastTransitionTime":"2025-11-25T14:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.664632 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.664860 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.664958 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.665025 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.665086 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:18Z","lastTransitionTime":"2025-11-25T14:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.766985 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.767068 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.767080 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.767106 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.767118 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:18Z","lastTransitionTime":"2025-11-25T14:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.869871 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.869900 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.869907 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.869920 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.869928 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:18Z","lastTransitionTime":"2025-11-25T14:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.971945 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.971981 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.971989 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.972002 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:18 crc kubenswrapper[4806]: I1125 14:54:18.972012 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:18Z","lastTransitionTime":"2025-11-25T14:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.074484 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.074527 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.074537 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.074553 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.074568 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:19Z","lastTransitionTime":"2025-11-25T14:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.088763 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:19 crc kubenswrapper[4806]: E1125 14:54:19.088945 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.088786 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:54:19 crc kubenswrapper[4806]: E1125 14:54:19.089058 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f" Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.088782 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:54:19 crc kubenswrapper[4806]: E1125 14:54:19.089128 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.177406 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.177715 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.177826 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.177925 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.178026 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:19Z","lastTransitionTime":"2025-11-25T14:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.280372 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.280415 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.280423 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.280437 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.280447 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:19Z","lastTransitionTime":"2025-11-25T14:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.382575 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.382616 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.382625 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.382638 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.382647 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:19Z","lastTransitionTime":"2025-11-25T14:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.485033 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.485072 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.485081 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.485093 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.485102 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:19Z","lastTransitionTime":"2025-11-25T14:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.587062 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.587103 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.587115 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.587131 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.587142 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:19Z","lastTransitionTime":"2025-11-25T14:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.689640 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.689908 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.689971 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.690053 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.690130 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:19Z","lastTransitionTime":"2025-11-25T14:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.791967 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.792013 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.792022 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.792035 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.792045 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:19Z","lastTransitionTime":"2025-11-25T14:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.894579 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.894613 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.894629 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.894644 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.894654 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:19Z","lastTransitionTime":"2025-11-25T14:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.996855 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.996941 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.996967 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.996985 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:19 crc kubenswrapper[4806]: I1125 14:54:19.996995 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:19Z","lastTransitionTime":"2025-11-25T14:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:20 crc kubenswrapper[4806]: I1125 14:54:20.089164 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:54:20 crc kubenswrapper[4806]: E1125 14:54:20.089331 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:54:20 crc kubenswrapper[4806]: I1125 14:54:20.099131 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:20 crc kubenswrapper[4806]: I1125 14:54:20.099164 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:20 crc kubenswrapper[4806]: I1125 14:54:20.099173 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:20 crc kubenswrapper[4806]: I1125 14:54:20.099206 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:20 crc kubenswrapper[4806]: I1125 14:54:20.099221 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:20Z","lastTransitionTime":"2025-11-25T14:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:20 crc kubenswrapper[4806]: I1125 14:54:20.202043 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:20 crc kubenswrapper[4806]: I1125 14:54:20.202101 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:20 crc kubenswrapper[4806]: I1125 14:54:20.202111 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:20 crc kubenswrapper[4806]: I1125 14:54:20.202126 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:20 crc kubenswrapper[4806]: I1125 14:54:20.202135 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:20Z","lastTransitionTime":"2025-11-25T14:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:20 crc kubenswrapper[4806]: I1125 14:54:20.305412 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:20 crc kubenswrapper[4806]: I1125 14:54:20.305490 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:20 crc kubenswrapper[4806]: I1125 14:54:20.305504 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:20 crc kubenswrapper[4806]: I1125 14:54:20.305524 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:20 crc kubenswrapper[4806]: I1125 14:54:20.305557 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:20Z","lastTransitionTime":"2025-11-25T14:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:20 crc kubenswrapper[4806]: I1125 14:54:20.408034 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:20 crc kubenswrapper[4806]: I1125 14:54:20.408070 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:20 crc kubenswrapper[4806]: I1125 14:54:20.408082 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:20 crc kubenswrapper[4806]: I1125 14:54:20.408097 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:20 crc kubenswrapper[4806]: I1125 14:54:20.408107 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:20Z","lastTransitionTime":"2025-11-25T14:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:20 crc kubenswrapper[4806]: I1125 14:54:20.510458 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:20 crc kubenswrapper[4806]: I1125 14:54:20.510520 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:20 crc kubenswrapper[4806]: I1125 14:54:20.510537 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:20 crc kubenswrapper[4806]: I1125 14:54:20.510562 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:20 crc kubenswrapper[4806]: I1125 14:54:20.510579 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:20Z","lastTransitionTime":"2025-11-25T14:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:20 crc kubenswrapper[4806]: I1125 14:54:20.613483 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:20 crc kubenswrapper[4806]: I1125 14:54:20.613540 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:20 crc kubenswrapper[4806]: I1125 14:54:20.613549 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:20 crc kubenswrapper[4806]: I1125 14:54:20.613566 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:20 crc kubenswrapper[4806]: I1125 14:54:20.613576 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:20Z","lastTransitionTime":"2025-11-25T14:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:20 crc kubenswrapper[4806]: I1125 14:54:20.715824 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:20 crc kubenswrapper[4806]: I1125 14:54:20.715869 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:20 crc kubenswrapper[4806]: I1125 14:54:20.715882 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:20 crc kubenswrapper[4806]: I1125 14:54:20.715898 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:20 crc kubenswrapper[4806]: I1125 14:54:20.715911 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:20Z","lastTransitionTime":"2025-11-25T14:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:20 crc kubenswrapper[4806]: I1125 14:54:20.818306 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:20 crc kubenswrapper[4806]: I1125 14:54:20.818392 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:20 crc kubenswrapper[4806]: I1125 14:54:20.818406 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:20 crc kubenswrapper[4806]: I1125 14:54:20.818429 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:20 crc kubenswrapper[4806]: I1125 14:54:20.818442 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:20Z","lastTransitionTime":"2025-11-25T14:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:20 crc kubenswrapper[4806]: I1125 14:54:20.921186 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:20 crc kubenswrapper[4806]: I1125 14:54:20.921234 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:20 crc kubenswrapper[4806]: I1125 14:54:20.921246 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:20 crc kubenswrapper[4806]: I1125 14:54:20.921262 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:20 crc kubenswrapper[4806]: I1125 14:54:20.921273 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:20Z","lastTransitionTime":"2025-11-25T14:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:21 crc kubenswrapper[4806]: I1125 14:54:21.023987 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:21 crc kubenswrapper[4806]: I1125 14:54:21.024035 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:21 crc kubenswrapper[4806]: I1125 14:54:21.024046 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:21 crc kubenswrapper[4806]: I1125 14:54:21.024063 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:21 crc kubenswrapper[4806]: I1125 14:54:21.024075 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:21Z","lastTransitionTime":"2025-11-25T14:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 25 14:54:21 crc kubenswrapper[4806]: I1125 14:54:21.088994 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 14:54:21 crc kubenswrapper[4806]: I1125 14:54:21.089009 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh"
Nov 25 14:54:21 crc kubenswrapper[4806]: I1125 14:54:21.089207 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 14:54:21 crc kubenswrapper[4806]: E1125 14:54:21.089422 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f"
Nov 25 14:54:21 crc kubenswrapper[4806]: E1125 14:54:21.089532 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 14:54:21 crc kubenswrapper[4806]: E1125 14:54:21.089720 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 14:54:21 crc kubenswrapper[4806]: I1125 14:54:21.126180 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:21 crc kubenswrapper[4806]: I1125 14:54:21.126226 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:21 crc kubenswrapper[4806]: I1125 14:54:21.126235 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:21 crc kubenswrapper[4806]: I1125 14:54:21.126252 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:21 crc kubenswrapper[4806]: I1125 14:54:21.126265 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:21Z","lastTransitionTime":"2025-11-25T14:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
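
Three pods (networking-console-plugin, network-metrics-daemon, network-check-target) hit the same wall here: the kubelet needs a fresh sandbox for each, sandbox creation needs the network plugin, so every sync attempt fails and is requeued. A quick way to see which pods are stuck is to pull pod and podUID out of the "Error syncing pod" lines; this sketch matches the exact field layout in the log above and can be fed kubelet journal output on stdin:

#!/usr/bin/env python3
# Sketch: extract (pod, podUID) pairs from kubelet "Error syncing pod"
# lines, e.g. piped in from the journal. Field layout from the log above.
import re
import sys

PATTERN = re.compile(
    r'"Error syncing pod, skipping".*?'
    r'pod="(?P<pod>[^"]+)" podUID="(?P<uid>[^"]+)"'
)

def stuck_pods(lines):
    seen = {}
    for line in lines:
        match = PATTERN.search(line)
        if match:
            seen[match.group("pod")] = match.group("uid")
    return seen

if __name__ == "__main__":
    for pod, uid in sorted(stuck_pods(sys.stdin).items()):
        print(f"{pod}  uid={uid}")
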
Nov 25 14:54:21 crc kubenswrapper[4806]: I1125 14:54:21.228290 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:21 crc kubenswrapper[4806]: I1125 14:54:21.228360 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:21 crc kubenswrapper[4806]: I1125 14:54:21.228372 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:21 crc kubenswrapper[4806]: I1125 14:54:21.228390 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:21 crc kubenswrapper[4806]: I1125 14:54:21.228401 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:21Z","lastTransitionTime":"2025-11-25T14:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:21 crc kubenswrapper[4806]: I1125 14:54:21.330465 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:21 crc kubenswrapper[4806]: I1125 14:54:21.330510 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:21 crc kubenswrapper[4806]: I1125 14:54:21.330521 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:21 crc kubenswrapper[4806]: I1125 14:54:21.330537 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:21 crc kubenswrapper[4806]: I1125 14:54:21.330547 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:21Z","lastTransitionTime":"2025-11-25T14:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:21 crc kubenswrapper[4806]: I1125 14:54:21.433265 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:21 crc kubenswrapper[4806]: I1125 14:54:21.433329 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:21 crc kubenswrapper[4806]: I1125 14:54:21.433342 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:21 crc kubenswrapper[4806]: I1125 14:54:21.433359 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:21 crc kubenswrapper[4806]: I1125 14:54:21.433371 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:21Z","lastTransitionTime":"2025-11-25T14:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:21 crc kubenswrapper[4806]: I1125 14:54:21.535754 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:21 crc kubenswrapper[4806]: I1125 14:54:21.535800 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:21 crc kubenswrapper[4806]: I1125 14:54:21.535810 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:21 crc kubenswrapper[4806]: I1125 14:54:21.535824 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:21 crc kubenswrapper[4806]: I1125 14:54:21.535835 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:21Z","lastTransitionTime":"2025-11-25T14:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:22 crc kubenswrapper[4806]: I1125 14:54:22.390161 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 14:54:22 crc kubenswrapper[4806]: I1125 14:54:22.390226 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 14:54:22 crc kubenswrapper[4806]: E1125 14:54:22.390412 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 14:54:22 crc kubenswrapper[4806]: I1125 14:54:22.390576 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh"
Nov 25 14:54:22 crc kubenswrapper[4806]: E1125 14:54:22.390622 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 14:54:22 crc kubenswrapper[4806]: E1125 14:54:22.390895 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f"
Nov 25 14:54:22 crc kubenswrapper[4806]: I1125 14:54:22.391079 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 14:54:22 crc kubenswrapper[4806]: E1125 14:54:22.391141 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 14:54:22 crc kubenswrapper[4806]: I1125 14:54:22.391228 4806 scope.go:117] "RemoveContainer" containerID="ae1b49fe171571509d8fd7d94ba703e20354f20445a4d493b22eb1d6a1649368"
Nov 25 14:54:22 crc kubenswrapper[4806]: E1125 14:54:22.391409 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-69wls_openshift-ovn-kubernetes(0fff40d8-fd9f-49da-953f-89894b4ef3a1)\"" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1"
Nov 25 14:54:22 crc kubenswrapper[4806]: I1125 14:54:22.398783 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:22 crc kubenswrapper[4806]: I1125 14:54:22.398829 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:22 crc kubenswrapper[4806]: I1125 14:54:22.398843 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:22 crc kubenswrapper[4806]: I1125 14:54:22.398857 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:22 crc kubenswrapper[4806]: I1125 14:54:22.398870 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:22Z","lastTransitionTime":"2025-11-25T14:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:22 crc kubenswrapper[4806]: I1125 14:54:22.500981 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:22 crc kubenswrapper[4806]: I1125 14:54:22.501017 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:22 crc kubenswrapper[4806]: I1125 14:54:22.501026 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:22 crc kubenswrapper[4806]: I1125 14:54:22.501039 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:22 crc kubenswrapper[4806]: I1125 14:54:22.501048 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:22Z","lastTransitionTime":"2025-11-25T14:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
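
The CrashLoopBackOff entry above is the likely root of the whole section: ovnkube-controller in ovnkube-node-69wls is the component that would write the missing CNI config, and it keeps dying, so the kubelet delays its restarts. The "back-off 40s" figure is consistent with the commonly documented kubelet restart back-off (10s initial delay, doubling per consecutive failure, capped at 5 minutes); those constants are assumed here, not read from this cluster's config:

#!/usr/bin/env python3
# Sketch of a kubelet-style CrashLoopBackOff schedule: 10s initial
# delay, doubling per consecutive failure, capped at 300s. Constants
# are the commonly documented defaults, assumed rather than confirmed.
INITIAL_S = 10
CAP_S = 300

def backoff_delays(failures):
    """Delay before each restart attempt after `failures` crashes."""
    return [min(INITIAL_S * 2**i, CAP_S) for i in range(failures)]

if __name__ == "__main__":
    # After three crashes the next delay would be 40s, matching the
    # "back-off 40s" in the log line for ovnkube-controller.
    print(backoff_delays(6))  # [10, 20, 40, 80, 160, 300]
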
Nov 25 14:54:22 crc kubenswrapper[4806]: I1125 14:54:22.603631 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:22 crc kubenswrapper[4806]: I1125 14:54:22.603699 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:22 crc kubenswrapper[4806]: I1125 14:54:22.603711 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:22 crc kubenswrapper[4806]: I1125 14:54:22.603731 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:22 crc kubenswrapper[4806]: I1125 14:54:22.603744 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:22Z","lastTransitionTime":"2025-11-25T14:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:22 crc kubenswrapper[4806]: I1125 14:54:22.705347 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:22 crc kubenswrapper[4806]: I1125 14:54:22.705660 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:22 crc kubenswrapper[4806]: I1125 14:54:22.705734 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:22 crc kubenswrapper[4806]: I1125 14:54:22.705816 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:22 crc kubenswrapper[4806]: I1125 14:54:22.705892 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:22Z","lastTransitionTime":"2025-11-25T14:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:22 crc kubenswrapper[4806]: I1125 14:54:22.808067 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:22 crc kubenswrapper[4806]: I1125 14:54:22.808111 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:22 crc kubenswrapper[4806]: I1125 14:54:22.808120 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:22 crc kubenswrapper[4806]: I1125 14:54:22.808136 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:22 crc kubenswrapper[4806]: I1125 14:54:22.808146 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:22Z","lastTransitionTime":"2025-11-25T14:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:22 crc kubenswrapper[4806]: I1125 14:54:22.910608 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:22 crc kubenswrapper[4806]: I1125 14:54:22.910840 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:22 crc kubenswrapper[4806]: I1125 14:54:22.910907 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:22 crc kubenswrapper[4806]: I1125 14:54:22.911019 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:22 crc kubenswrapper[4806]: I1125 14:54:22.911099 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:22Z","lastTransitionTime":"2025-11-25T14:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.013534 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.013577 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.013586 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.013600 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.013610 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:23Z","lastTransitionTime":"2025-11-25T14:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.115788 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.116065 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.116159 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.116254 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.116361 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:23Z","lastTransitionTime":"2025-11-25T14:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.218563 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.218607 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.218616 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.218630 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.218638 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:23Z","lastTransitionTime":"2025-11-25T14:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.321910 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.322274 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.322386 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.322476 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.322555 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:23Z","lastTransitionTime":"2025-11-25T14:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.425757 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.425805 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.425814 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.425831 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.425843 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:23Z","lastTransitionTime":"2025-11-25T14:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.528257 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.528520 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.528617 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.528690 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.528745 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:23Z","lastTransitionTime":"2025-11-25T14:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.631782 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.631858 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.631872 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.631896 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.631911 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:23Z","lastTransitionTime":"2025-11-25T14:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.735208 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.735279 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.735292 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.735332 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.735350 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:23Z","lastTransitionTime":"2025-11-25T14:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.838026 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.838085 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.838096 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.838113 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.838127 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:23Z","lastTransitionTime":"2025-11-25T14:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.940342 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.940404 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.940414 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.940430 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:23 crc kubenswrapper[4806]: I1125 14:54:23.940442 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:23Z","lastTransitionTime":"2025-11-25T14:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.042503 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.042560 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.042572 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.042590 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.042601 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:24Z","lastTransitionTime":"2025-11-25T14:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.089049 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh"
Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.089146 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.089082 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.089082 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 14:54:24 crc kubenswrapper[4806]: E1125 14:54:24.089247 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f"
Nov 25 14:54:24 crc kubenswrapper[4806]: E1125 14:54:24.089407 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 14:54:24 crc kubenswrapper[4806]: E1125 14:54:24.089494 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 14:54:24 crc kubenswrapper[4806]: E1125 14:54:24.089572 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.145126 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.145182 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.145194 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.145211 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.145220 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:24Z","lastTransitionTime":"2025-11-25T14:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.247966 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.248025 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.248037 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.248052 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.248062 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:24Z","lastTransitionTime":"2025-11-25T14:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.350445 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.350591 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.350600 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.350632 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.350643 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:24Z","lastTransitionTime":"2025-11-25T14:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.453074 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.453113 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.453125 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.453143 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.453154 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:24Z","lastTransitionTime":"2025-11-25T14:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.555774 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.555824 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.555834 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.555851 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.555861 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:24Z","lastTransitionTime":"2025-11-25T14:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.660532 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.660639 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.660657 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.660839 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.660856 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:24Z","lastTransitionTime":"2025-11-25T14:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.763051 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.763109 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.763125 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.763145 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.763158 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:24Z","lastTransitionTime":"2025-11-25T14:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.865950 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.865994 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.866007 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.866033 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.866061 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:24Z","lastTransitionTime":"2025-11-25T14:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.968352 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.968401 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.968417 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.968437 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:24 crc kubenswrapper[4806]: I1125 14:54:24.968448 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:24Z","lastTransitionTime":"2025-11-25T14:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.070814 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.070852 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.070861 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.070874 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.070883 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:25Z","lastTransitionTime":"2025-11-25T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.102131 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"]
Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.173730 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.173769 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.173777 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.173790 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.173798 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:25Z","lastTransitionTime":"2025-11-25T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.276396 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.276452 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.276461 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.276476 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.276485 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:25Z","lastTransitionTime":"2025-11-25T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
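
In the middle of the NotReady churn, the "SyncLoop ADD" source="api" entry above shows the kubelet receiving openshift-etcd/etcd-crc from the API server: the sync loop keeps admitting new pod work even while the node network is down. An illustrative way to observe the same ADD/MODIFIED/DELETED stream from outside the node is a watch with the kubernetes Python client; this sketch assumes that package and a working kubeconfig, and takes the namespace from the log line:

#!/usr/bin/env python3
# Illustrative sketch: watch pod events in the namespace named in the
# "SyncLoop ADD" entry. Assumes the `kubernetes` client package and a
# reachable cluster; not a reconstruction of the kubelet's own loop.
from kubernetes import client, config, watch

config.load_kube_config()
v1 = client.CoreV1Api()

w = watch.Watch()
for event in w.stream(v1.list_namespaced_pod, namespace="openshift-etcd",
                      timeout_seconds=30):
    pod = event["object"]
    print(event["type"], pod.metadata.name, pod.status.phase)
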
Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.378641 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.378675 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.378683 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.378696 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.378704 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:25Z","lastTransitionTime":"2025-11-25T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.481527 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.482051 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.482145 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.482243 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.482363 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:25Z","lastTransitionTime":"2025-11-25T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.584621 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.584686 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.584696 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.584711 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.584721 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:25Z","lastTransitionTime":"2025-11-25T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.686912 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.686957 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.686969 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.686984 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.686995 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:25Z","lastTransitionTime":"2025-11-25T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.789240 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.789290 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.789304 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.789366 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.789378 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:25Z","lastTransitionTime":"2025-11-25T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.793393 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.793417 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.793426 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.793438 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.793447 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:25Z","lastTransitionTime":"2025-11-25T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
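
The entry that follows shows the other half of the loop: the kubelet attempts to PATCH its Node status (a strategic-merge patch carrying the MemoryPressure, DiskPressure, PIDPressure, and Ready conditions under $setElementOrder, plus allocatable, capacity, and image data), and the attempt fails and will be retried. The payload sits inside the err string with Go-style escaping; this sketch recovers it as JSON for inspection against a complete journal line. The quoting behavior is inferred from the log format, so the unescape loop is deliberately tolerant:

#!/usr/bin/env python3
# Sketch: recover the escaped strategic-merge-patch JSON from a
# "Error updating node status, will retry" line and print the Ready
# condition. Escaping rules are inferred, hence the retry loop.
import json
import sys

decoder = json.JSONDecoder()

def extract_patch(line):
    payload = line[line.index("{"):]   # escaped patch starts at first brace
    for _ in range(3):                 # journal lines carry 1-2 escape layers
        try:
            obj, _ = decoder.raw_decode(payload)
            return obj
        except json.JSONDecodeError:
            payload = payload.encode().decode("unicode_escape")
    raise ValueError("could not recover JSON patch from line")

if __name__ == "__main__":
    patch = extract_patch(sys.stdin.read())
    for cond in patch["status"]["conditions"]:
        if cond["type"] == "Ready":
            print(json.dumps(cond, indent=2))
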
Nov 25 14:54:25 crc kubenswrapper[4806]: E1125 14:54:25.805437 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e0f8c346-5c5c-4b9e-86cb-75f930f1dadc\\\",\\\"systemUUID\\\":\\\"c9d50cce-a734-4456-ad77-ec687d096f9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:25Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.808891 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.808954 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.808970 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.808993 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.809008 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:25Z","lastTransitionTime":"2025-11-25T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:25 crc kubenswrapper[4806]: E1125 14:54:25.821474 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e0f8c346-5c5c-4b9e-86cb-75f930f1dadc\\\",\\\"systemUUID\\\":\\\"c9d50cce-a734-4456-ad77-ec687d096f9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:25Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.826546 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.826593 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.826607 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.826627 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.826639 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:25Z","lastTransitionTime":"2025-11-25T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:25 crc kubenswrapper[4806]: E1125 14:54:25.838478 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e0f8c346-5c5c-4b9e-86cb-75f930f1dadc\\\",\\\"systemUUID\\\":\\\"c9d50cce-a734-4456-ad77-ec687d096f9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:25Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.843053 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.843120 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
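[Editor's note: every retry in this burst fails the same way. The kubelet's status PATCH for node "crc" must pass the validating webhook node.network-node-identity.openshift.io at https://127.0.0.1:9743, whose serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2025-11-25T14:54:25Z. Below is a minimal sketch of how one might confirm the certificate's validity window from the node itself; it assumes Python 3 with the third-party cryptography package. The host and port come from the log entries above; everything else is illustrative, not an OpenShift tool.]

```python
import socket
import ssl

from cryptography import x509  # third-party: pip install cryptography

HOST, PORT = "127.0.0.1", 9743  # webhook endpoint from the kubelet error above

# The served certificate is expired, so verification must be disabled
# just to fetch it for inspection.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        # With CERT_NONE, getpeercert() returns no parsed dict; take raw DER.
        der = tls.getpeercert(binary_form=True)

cert = x509.load_der_x509_certificate(der)
print("subject:   ", cert.subject.rfc4514_string())
# cryptography >= 42; older releases expose .not_valid_before/.not_valid_after
print("not before:", cert.not_valid_before_utc)
print("not after: ", cert.not_valid_after_utc)  # expect 2025-08-24 17:21:41 UTC per the log
```

[Until the webhook presents a certificate that is valid at the node's current time, every node-status patch will be rejected with the same x509 error; nothing the kubelet retries can change that.]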
event="NodeHasNoDiskPressure" Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.843136 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.843188 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.843212 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:25Z","lastTransitionTime":"2025-11-25T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:25 crc kubenswrapper[4806]: E1125 14:54:25.854713 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e0f8c346-5c5c-4b9e-86cb-75f930f1dadc\\\",\\\"systemUUID\\\":\\\"c9d50cce-a734-4456-ad77-ec687d096f9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:25Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.858478 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.858544 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.858556 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.858574 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.858588 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:25Z","lastTransitionTime":"2025-11-25T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:25 crc kubenswrapper[4806]: E1125 14:54:25.870779 4806 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e0f8c346-5c5c-4b9e-86cb-75f930f1dadc\\\",\\\"systemUUID\\\":\\\"c9d50cce-a734-4456-ad77-ec687d096f9d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:25Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:25 crc kubenswrapper[4806]: E1125 14:54:25.871282 4806 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.891861 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.891925 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.891941 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.891959 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.891969 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:25Z","lastTransitionTime":"2025-11-25T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.994667 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.994706 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.994715 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.994733 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:25 crc kubenswrapper[4806]: I1125 14:54:25.994746 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:25Z","lastTransitionTime":"2025-11-25T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:26 crc kubenswrapper[4806]: I1125 14:54:26.088380 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:54:26 crc kubenswrapper[4806]: I1125 14:54:26.088522 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:54:26 crc kubenswrapper[4806]: I1125 14:54:26.088588 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:26 crc kubenswrapper[4806]: E1125 14:54:26.088728 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:54:26 crc kubenswrapper[4806]: I1125 14:54:26.088765 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:54:26 crc kubenswrapper[4806]: E1125 14:54:26.088803 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:54:26 crc kubenswrapper[4806]: E1125 14:54:26.088876 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:54:26 crc kubenswrapper[4806]: E1125 14:54:26.088530 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f" Nov 25 14:54:26 crc kubenswrapper[4806]: I1125 14:54:26.096933 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:26 crc kubenswrapper[4806]: I1125 14:54:26.096988 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:26 crc kubenswrapper[4806]: I1125 14:54:26.097003 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:26 crc kubenswrapper[4806]: I1125 14:54:26.097024 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:26 crc kubenswrapper[4806]: I1125 14:54:26.097037 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:26Z","lastTransitionTime":"2025-11-25T14:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:26 crc kubenswrapper[4806]: I1125 14:54:26.200232 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:26 crc kubenswrapper[4806]: I1125 14:54:26.200284 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:26 crc kubenswrapper[4806]: I1125 14:54:26.200298 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:26 crc kubenswrapper[4806]: I1125 14:54:26.200339 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:26 crc kubenswrapper[4806]: I1125 14:54:26.200357 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:26Z","lastTransitionTime":"2025-11-25T14:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:26 crc kubenswrapper[4806]: I1125 14:54:26.303648 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:26 crc kubenswrapper[4806]: I1125 14:54:26.303731 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:26 crc kubenswrapper[4806]: I1125 14:54:26.303743 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:26 crc kubenswrapper[4806]: I1125 14:54:26.303763 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:26 crc kubenswrapper[4806]: I1125 14:54:26.303777 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:26Z","lastTransitionTime":"2025-11-25T14:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:26 crc kubenswrapper[4806]: I1125 14:54:26.407234 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:26 crc kubenswrapper[4806]: I1125 14:54:26.407337 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:26 crc kubenswrapper[4806]: I1125 14:54:26.407349 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:26 crc kubenswrapper[4806]: I1125 14:54:26.407366 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:26 crc kubenswrapper[4806]: I1125 14:54:26.407378 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:26Z","lastTransitionTime":"2025-11-25T14:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:26 crc kubenswrapper[4806]: I1125 14:54:26.510084 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:26 crc kubenswrapper[4806]: I1125 14:54:26.510134 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:26 crc kubenswrapper[4806]: I1125 14:54:26.510146 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:26 crc kubenswrapper[4806]: I1125 14:54:26.510167 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:26 crc kubenswrapper[4806]: I1125 14:54:26.510185 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:26Z","lastTransitionTime":"2025-11-25T14:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:26 crc kubenswrapper[4806]: I1125 14:54:26.612886 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:26 crc kubenswrapper[4806]: I1125 14:54:26.612968 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:26 crc kubenswrapper[4806]: I1125 14:54:26.612988 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:26 crc kubenswrapper[4806]: I1125 14:54:26.613018 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:26 crc kubenswrapper[4806]: I1125 14:54:26.613038 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:26Z","lastTransitionTime":"2025-11-25T14:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:26 crc kubenswrapper[4806]: I1125 14:54:26.717152 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:26 crc kubenswrapper[4806]: I1125 14:54:26.717395 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:26 crc kubenswrapper[4806]: I1125 14:54:26.717411 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:26 crc kubenswrapper[4806]: I1125 14:54:26.717434 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:26 crc kubenswrapper[4806]: I1125 14:54:26.717448 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:26Z","lastTransitionTime":"2025-11-25T14:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:26 crc kubenswrapper[4806]: I1125 14:54:26.821142 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:26 crc kubenswrapper[4806]: I1125 14:54:26.821210 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:26 crc kubenswrapper[4806]: I1125 14:54:26.821236 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:26 crc kubenswrapper[4806]: I1125 14:54:26.821274 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:26 crc kubenswrapper[4806]: I1125 14:54:26.821303 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:26Z","lastTransitionTime":"2025-11-25T14:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:26 crc kubenswrapper[4806]: I1125 14:54:26.928740 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:26 crc kubenswrapper[4806]: I1125 14:54:26.928837 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:26 crc kubenswrapper[4806]: I1125 14:54:26.928940 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:26 crc kubenswrapper[4806]: I1125 14:54:26.929080 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:26 crc kubenswrapper[4806]: I1125 14:54:26.929095 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:26Z","lastTransitionTime":"2025-11-25T14:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.032000 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.032201 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.032216 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.032237 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.032579 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:27Z","lastTransitionTime":"2025-11-25T14:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.048675 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/49e22ad0-2903-4ed0-94ad-40d713f99c9f-metrics-certs\") pod \"network-metrics-daemon-lsrxh\" (UID: \"49e22ad0-2903-4ed0-94ad-40d713f99c9f\") " pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:54:27 crc kubenswrapper[4806]: E1125 14:54:27.048833 4806 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 14:54:27 crc kubenswrapper[4806]: E1125 14:54:27.048896 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/49e22ad0-2903-4ed0-94ad-40d713f99c9f-metrics-certs podName:49e22ad0-2903-4ed0-94ad-40d713f99c9f nodeName:}" failed. No retries permitted until 2025-11-25 14:55:31.048876977 +0000 UTC m=+163.701019388 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/49e22ad0-2903-4ed0-94ad-40d713f99c9f-metrics-certs") pod "network-metrics-daemon-lsrxh" (UID: "49e22ad0-2903-4ed0-94ad-40d713f99c9f") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.135274 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.135310 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.135360 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.135374 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.135383 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:27Z","lastTransitionTime":"2025-11-25T14:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.237653 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.237688 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.237697 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.237711 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.237722 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:27Z","lastTransitionTime":"2025-11-25T14:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.340754 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.340843 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.340859 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.340881 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.340894 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:27Z","lastTransitionTime":"2025-11-25T14:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.443958 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.444004 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.444015 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.444036 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.444048 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:27Z","lastTransitionTime":"2025-11-25T14:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.546356 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.546926 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.547093 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.547181 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.547267 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:27Z","lastTransitionTime":"2025-11-25T14:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.649542 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.649844 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.649908 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.649986 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.650080 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:27Z","lastTransitionTime":"2025-11-25T14:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.753062 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.753408 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.753511 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.753606 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.753693 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:27Z","lastTransitionTime":"2025-11-25T14:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.855975 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.856037 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.856053 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.856075 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.856090 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:27Z","lastTransitionTime":"2025-11-25T14:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.958297 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.958368 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.958377 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.958393 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:27 crc kubenswrapper[4806]: I1125 14:54:27.958402 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:27Z","lastTransitionTime":"2025-11-25T14:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.073537 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.073585 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.073598 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.073614 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.073625 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:28Z","lastTransitionTime":"2025-11-25T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.088778 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.088778 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:54:28 crc kubenswrapper[4806]: E1125 14:54:28.089119 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.088860 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:54:28 crc kubenswrapper[4806]: E1125 14:54:28.089211 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.088821 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:28 crc kubenswrapper[4806]: E1125 14:54:28.089271 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:54:28 crc kubenswrapper[4806]: E1125 14:54:28.089135 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.098562 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f2fd4bb5-248d-441b-b551-714801eed504\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad1444bd5d571c97e876be8a7806aa59a9e6777f78f11089042f1961ca237be2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://600d8f6e0a57ecf028d67a5d43177b039da18658131bbe103857578e826661a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://600d8f6e0a57ecf028d67a5d43177b039da18658131bbe103857578e826661a5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:28Z is after 
2025-08-24T17:21:41Z" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.109252 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-mwdqt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8b7ddd20-62b7-4687-9982-83cf1cbac3ab\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a4c6d7aeb19206fd79e28c558467bda58d58c4118d27bb9aeb9de68a55a67a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0cb67ea6e13645bb6513fcd0d90197317fd39f29f25deddf11e1110572e1986\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:53:56Z\\\",\\\"message\\\":\\\"2025-11-25T14:53:11+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eb5b5c65-6dbe-4817-a628-cca8eb1fda77\\\\n2025-11-25T14:53:11+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eb5b5c65-6dbe-4817-a628-cca8eb1fda77 to /host/opt/cni/bin/\\\\n2025-11-25T14:53:11Z [verbose] multus-daemon started\\\\n2025-11-25T14:53:11Z [verbose] Readiness Indicator file check\\\\n2025-11-25T14:53:56Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dbntn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-mwdqt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.118641 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39baff20-1e9a-48b1-8872-155c5ad5931d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a777eee4ec1617baa0196c39314ff92a8111ee97fecdabcff51f8d23a732aed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://657c6813fefe5a305c46e51a3be90de3dda23db56498959efe41f94d05e8f54d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-j5vqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kclf8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.128429 4806 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2mmdk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a29a188-9022-41a4-8f1f-4a3274ffe3f9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82dc124b078217075b4e38f7b144af41d258e32283392fe2909cf227a9902012\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhrx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df53d52334de68ebecc9283d36720b9734a8a410af99e1ae3566979e52cb6f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fhrx4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:22Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-2mmdk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.147221 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"209f1f00-e913-443d-aa52-8ff07484c62e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ac553f30f4f8dd7c60374bbaa1c15e8a86e9a71697d1f177afe447834fca62b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4762b87c51132b134e282d5e1d3a6667995ced68dda62718fec7c82b609d5384\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9e2c951b933b2a390010f2acd690c1784e090f4b81b4d8beb0b88d243123776\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2025-11-25T14:52:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://04135545cf0fa5c64bef656a4945deaf5bed6405b64a69bc658489da4b47cf52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ade3ba711e7cfc39c2ee955a1f7e3c961f7e3000473ad5c77b62937085668291\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://42786026a931ea7a973df2aefff759ce04499a69c7d4c7436ada1bf6a5c714b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42786026a931ea7a973df2aefff759ce04499a69c7d4c7436ada1bf6a5c714b9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdd68927e45ba82bee23ce96be5f0c0093d313710e406148ebb4755ce9ec1bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdd68927e45ba82bee23c
e96be5f0c0093d313710e406148ebb4755ce9ec1bcc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1705fefbe8122e10507afcf1389c91032695579d9ede2aa4ae50f00807cb1eca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1705fefbe8122e10507afcf1389c91032695579d9ede2aa4ae50f00807cb1eca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.161569 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"418e7888-2ed7-4d42-9100-527cff656249\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://41d39b5cec8b13a29be4b5cc55488b94bcb5a8882baebe3dd1b4783116e0d745\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a13f5b656df38c8be5558398c2d7b88f04a8c892edbd2cb06516aa94b3d4c71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c67538dcc66b71639bef32e5a359d899aeffb45958b74fce7d7c09f0874f59cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db9fb4fcadb881a8d1f35ac8df4c8b7654c07ea0c5ab061ef99c1396b9c1e76b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db9fb4fcadb881a8d1f35ac8df4c8b7654c07ea0c5ab061ef99c1396b9c1e76b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.173249 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd5caad4136ada520798bb17c547f59cb7ec6e5310c1e4737bf82dc97e1b992e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:28Z is after 
2025-08-24T17:21:41Z" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.175090 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.175118 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.175126 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.175140 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.175150 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:28Z","lastTransitionTime":"2025-11-25T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.184493 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://08d31864859b0077dd2e586c093eb3865537fd5567ad2b55ff61ba39cd3ee56e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b60a5efe869e1d377161b70ad123fbf1bc06d0089c96eaffb7ae8129a796c76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.198171 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"228a80dc-3be5-4125-9d07-c8eb262a0eda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9fb517d9c8fca06d95f26ed65bbc78b53f6c555870af6ebd15afe2d5177f2d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9b9b9f592ff6c936f77a93cfce7bbdd22590743b9c1e795803ba329b13c8911\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://648a2b77c653ebe3b3082215dc92057a4c311d50f7de0716ee760d75726be5bd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://59625665de5594ed26bc8d074fc637108f28b6d761def945bc08c74dc1680011\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"}]},{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b6a5296e2fb8cb00d519a7df65422f6ebe34e260d721ae05375bb72a5ab772f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7085034ed67173ad8df12e7d9dad2ef7cf8fe1cbc1a6888125c14f1b8b472f81\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://687b9abfad692b7959b9303c122e3cce0b615636cf40e0796a22da717fd1b061\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\
\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jlt7n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zt8m9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.210974 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-6jcq2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9137647d-1ca0-49be-b482-8d04428e5325\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae80eda0f08f87065d8a30cf17d80f2be06e3b4a812798747aaed9cf6be1e82d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dttnk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:11Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-6jcq2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 
14:54:28.222022 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lsrxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"49e22ad0-2903-4ed0-94ad-40d713f99c9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cz9hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cz9hg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:23Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lsrxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.232841 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35a68684-5473-4e76-bdb6-bc38a8640fed\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae860c1b5905b9a0634efdd777741a3837e3e3cae53fd559afcc23ceeeabbfe7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb9640bb56460bb7d1ed40effc6fa3cada4ee5369407b11a14c4a364b5b5c44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47127ed86ce2e9c91b3876f54c95e0df6039f549eaff06c28a9922f0b660aa53\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://560a6654dd1da05abbf52bb38ec0fd2ea0108fabdc14d4775dc3f8334cc9a1a9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.242686 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://47d8c7e405f12dbf2de2c53f2dc4309d02278887a2666a9ec491d490c5ce3f89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-25T14:54:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.254711 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.264147 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5lhpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"57550f59-b31f-43c1-adca-565f246d4083\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b926db706959d0dcd4f9d9366913105ae3b70b4cb2da018ed9e778ceb84cc5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tfx48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:08Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5lhpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.276927 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.276969 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.276982 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.276997 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.277006 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:28Z","lastTransitionTime":"2025-11-25T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.277120 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:52:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name
\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:52:51.509929 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:52:51.511569 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2779674049/tls.crt::/tmp/serving-cert-2779674049/tls.key\\\\\\\"\\\\nI1125 14:53:06.763374 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:53:06.765539 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:53:06.765558 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:53:06.765579 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:53:06.765584 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:53:06.773062 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:53:06.773084 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773089 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:53:06.773094 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:53:06.773097 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:53:06.773100 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:53:06.773103 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:53:06.773329 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:53:06.776065 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:52:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:52:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:52:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:52:48Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.293130 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.305605 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.324055 4806 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0fff40d8-fd9f-49da-953f-89894b4ef3a1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae1b49fe171571509d8fd7d94ba703e20354f204
45a4d493b22eb1d6a1649368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae1b49fe171571509d8fd7d94ba703e20354f20445a4d493b22eb1d6a1649368\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:54:08Z\\\",\\\"message\\\":\\\"0 2025-02-23 05:23:11 +0000 UTC \\\\u003cnil\\\\u003e \\\\u003cnil\\\\u003e map[app:oauth-openshift] map[operator.openshift.io/spec-hash:d9e6d53076d47ab2d123d8b1ba8ec6543488d973dcc4e02349493cd1c33bce83 service.alpha.openshift.io/serving-cert-secret-name:v4-0-config-system-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [] [] []},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: oauth-openshift,},ClusterIP:10.217.4.222,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.222],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nI1125 14:54:08.009395 6843 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI1125 14:54:08.009407 6843 ovn.go:134] Ensuring zone local for\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-69wls_openshift-ovn-kubernetes(0fff40d8-fd9f-49da-953f-89894b4ef3a1)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:53:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:53:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:53:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r9lvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:53:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-69wls\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.379861 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.379914 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.379928 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.379950 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.379967 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:28Z","lastTransitionTime":"2025-11-25T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.482220 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.482271 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.482282 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.482299 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.482325 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:28Z","lastTransitionTime":"2025-11-25T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.584002 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.584043 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.584052 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.584065 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.584074 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:28Z","lastTransitionTime":"2025-11-25T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.685964 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.686002 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.686011 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.686024 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.686032 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:28Z","lastTransitionTime":"2025-11-25T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.787765 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.787796 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.787828 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.787841 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.787849 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:28Z","lastTransitionTime":"2025-11-25T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.890382 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.890449 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.890461 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.890481 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.890491 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:28Z","lastTransitionTime":"2025-11-25T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.992358 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.992406 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.992417 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.992433 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:28 crc kubenswrapper[4806]: I1125 14:54:28.992445 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:28Z","lastTransitionTime":"2025-11-25T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:29 crc kubenswrapper[4806]: I1125 14:54:29.094557 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:29 crc kubenswrapper[4806]: I1125 14:54:29.094591 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:29 crc kubenswrapper[4806]: I1125 14:54:29.094598 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:29 crc kubenswrapper[4806]: I1125 14:54:29.094611 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:29 crc kubenswrapper[4806]: I1125 14:54:29.094621 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:29Z","lastTransitionTime":"2025-11-25T14:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:29 crc kubenswrapper[4806]: I1125 14:54:29.196398 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:29 crc kubenswrapper[4806]: I1125 14:54:29.196439 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:29 crc kubenswrapper[4806]: I1125 14:54:29.196449 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:29 crc kubenswrapper[4806]: I1125 14:54:29.196463 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:29 crc kubenswrapper[4806]: I1125 14:54:29.196472 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:29Z","lastTransitionTime":"2025-11-25T14:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:29 crc kubenswrapper[4806]: I1125 14:54:29.298801 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:29 crc kubenswrapper[4806]: I1125 14:54:29.298847 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:29 crc kubenswrapper[4806]: I1125 14:54:29.298859 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:29 crc kubenswrapper[4806]: I1125 14:54:29.298877 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:29 crc kubenswrapper[4806]: I1125 14:54:29.298887 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:29Z","lastTransitionTime":"2025-11-25T14:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:29 crc kubenswrapper[4806]: I1125 14:54:29.400720 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:29 crc kubenswrapper[4806]: I1125 14:54:29.400749 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:29 crc kubenswrapper[4806]: I1125 14:54:29.400757 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:29 crc kubenswrapper[4806]: I1125 14:54:29.400772 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:29 crc kubenswrapper[4806]: I1125 14:54:29.400790 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:29Z","lastTransitionTime":"2025-11-25T14:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:29 crc kubenswrapper[4806]: I1125 14:54:29.503277 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:29 crc kubenswrapper[4806]: I1125 14:54:29.503341 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:29 crc kubenswrapper[4806]: I1125 14:54:29.503351 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:29 crc kubenswrapper[4806]: I1125 14:54:29.503364 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:29 crc kubenswrapper[4806]: I1125 14:54:29.503374 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:29Z","lastTransitionTime":"2025-11-25T14:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:29 crc kubenswrapper[4806]: I1125 14:54:29.605376 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:29 crc kubenswrapper[4806]: I1125 14:54:29.605413 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:29 crc kubenswrapper[4806]: I1125 14:54:29.605422 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:29 crc kubenswrapper[4806]: I1125 14:54:29.605435 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:29 crc kubenswrapper[4806]: I1125 14:54:29.605443 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:29Z","lastTransitionTime":"2025-11-25T14:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:29 crc kubenswrapper[4806]: I1125 14:54:29.707571 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:29 crc kubenswrapper[4806]: I1125 14:54:29.707619 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:29 crc kubenswrapper[4806]: I1125 14:54:29.707631 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:29 crc kubenswrapper[4806]: I1125 14:54:29.707647 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:29 crc kubenswrapper[4806]: I1125 14:54:29.707657 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:29Z","lastTransitionTime":"2025-11-25T14:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:29 crc kubenswrapper[4806]: I1125 14:54:29.809535 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:29 crc kubenswrapper[4806]: I1125 14:54:29.809600 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:29 crc kubenswrapper[4806]: I1125 14:54:29.809610 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:29 crc kubenswrapper[4806]: I1125 14:54:29.809624 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:29 crc kubenswrapper[4806]: I1125 14:54:29.809634 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:29Z","lastTransitionTime":"2025-11-25T14:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:29 crc kubenswrapper[4806]: I1125 14:54:29.911515 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:29 crc kubenswrapper[4806]: I1125 14:54:29.911550 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:29 crc kubenswrapper[4806]: I1125 14:54:29.911559 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:29 crc kubenswrapper[4806]: I1125 14:54:29.911574 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:29 crc kubenswrapper[4806]: I1125 14:54:29.911582 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:29Z","lastTransitionTime":"2025-11-25T14:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.014278 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.014334 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.014343 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.014358 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.014368 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:30Z","lastTransitionTime":"2025-11-25T14:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.088945 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.088945 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.089004 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.089098 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:54:30 crc kubenswrapper[4806]: E1125 14:54:30.089218 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:54:30 crc kubenswrapper[4806]: E1125 14:54:30.089339 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:54:30 crc kubenswrapper[4806]: E1125 14:54:30.089388 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:54:30 crc kubenswrapper[4806]: E1125 14:54:30.089432 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f" Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.116129 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.116163 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.116171 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.116182 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.116191 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:30Z","lastTransitionTime":"2025-11-25T14:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.218876 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.218933 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.218942 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.218962 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.218976 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:30Z","lastTransitionTime":"2025-11-25T14:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.321766 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.322536 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.322572 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.322591 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.322604 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:30Z","lastTransitionTime":"2025-11-25T14:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.425111 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.425155 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.425167 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.425186 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.425196 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:30Z","lastTransitionTime":"2025-11-25T14:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.527631 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.527710 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.527725 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.527763 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.527772 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:30Z","lastTransitionTime":"2025-11-25T14:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.629928 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.629974 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.629983 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.629996 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.630006 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:30Z","lastTransitionTime":"2025-11-25T14:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.733232 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.733286 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.733295 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.733308 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.733346 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:30Z","lastTransitionTime":"2025-11-25T14:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.836390 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.836429 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.836439 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.836479 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.836496 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:30Z","lastTransitionTime":"2025-11-25T14:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.939741 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.939787 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.939799 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.939814 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:30 crc kubenswrapper[4806]: I1125 14:54:30.939823 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:30Z","lastTransitionTime":"2025-11-25T14:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:31 crc kubenswrapper[4806]: I1125 14:54:31.042456 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:31 crc kubenswrapper[4806]: I1125 14:54:31.042508 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:31 crc kubenswrapper[4806]: I1125 14:54:31.042522 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:31 crc kubenswrapper[4806]: I1125 14:54:31.042537 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:31 crc kubenswrapper[4806]: I1125 14:54:31.042548 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:31Z","lastTransitionTime":"2025-11-25T14:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:31 crc kubenswrapper[4806]: I1125 14:54:31.144812 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:31 crc kubenswrapper[4806]: I1125 14:54:31.144854 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:31 crc kubenswrapper[4806]: I1125 14:54:31.144863 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:31 crc kubenswrapper[4806]: I1125 14:54:31.144879 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:31 crc kubenswrapper[4806]: I1125 14:54:31.144889 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:31Z","lastTransitionTime":"2025-11-25T14:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:31 crc kubenswrapper[4806]: I1125 14:54:31.246772 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:31 crc kubenswrapper[4806]: I1125 14:54:31.246828 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:31 crc kubenswrapper[4806]: I1125 14:54:31.246838 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:31 crc kubenswrapper[4806]: I1125 14:54:31.246856 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:31 crc kubenswrapper[4806]: I1125 14:54:31.246867 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:31Z","lastTransitionTime":"2025-11-25T14:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:31 crc kubenswrapper[4806]: I1125 14:54:31.349539 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:31 crc kubenswrapper[4806]: I1125 14:54:31.349572 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:31 crc kubenswrapper[4806]: I1125 14:54:31.349579 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:31 crc kubenswrapper[4806]: I1125 14:54:31.349592 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:31 crc kubenswrapper[4806]: I1125 14:54:31.349600 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:31Z","lastTransitionTime":"2025-11-25T14:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:31 crc kubenswrapper[4806]: I1125 14:54:31.452493 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:31 crc kubenswrapper[4806]: I1125 14:54:31.452524 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:31 crc kubenswrapper[4806]: I1125 14:54:31.452533 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:31 crc kubenswrapper[4806]: I1125 14:54:31.452545 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:31 crc kubenswrapper[4806]: I1125 14:54:31.452554 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:31Z","lastTransitionTime":"2025-11-25T14:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 25 14:54:32 crc kubenswrapper[4806]: I1125 14:54:32.089186 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 14:54:32 crc kubenswrapper[4806]: I1125 14:54:32.089366 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 14:54:32 crc kubenswrapper[4806]: E1125 14:54:32.089532 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 14:54:32 crc kubenswrapper[4806]: I1125 14:54:32.089586 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 14:54:32 crc kubenswrapper[4806]: I1125 14:54:32.089621 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh"
Nov 25 14:54:32 crc kubenswrapper[4806]: E1125 14:54:32.090194 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 14:54:32 crc kubenswrapper[4806]: E1125 14:54:32.090292 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f"
Nov 25 14:54:32 crc kubenswrapper[4806]: E1125 14:54:32.089636 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 14:54:35 crc kubenswrapper[4806]: I1125 14:54:35.090213 4806 scope.go:117] "RemoveContainer" containerID="ae1b49fe171571509d8fd7d94ba703e20354f20445a4d493b22eb1d6a1649368"
Nov 25 14:54:35 crc kubenswrapper[4806]: E1125 14:54:35.090442 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-69wls_openshift-ovn-kubernetes(0fff40d8-fd9f-49da-953f-89894b4ef3a1)\"" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1"
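The "back-off 40s" figure follows kubelet's CrashLoopBackOff schedule: the restart delay doubles per consecutive failure from a 10s base up to a 5m cap (base and cap assumed from upstream kubelet defaults; they are not shown in this log), so 40s corresponds to roughly the third consecutive crash of ovnkube-controller. A sketch of that schedule:

    // backoff.go - a sketch of the CrashLoopBackOff delay schedule
    // suggested by "back-off 40s restarting failed container".
    package main

    import (
        "fmt"
        "time"
    )

    func crashLoopDelay(consecutiveFailures int) time.Duration {
        const (
            base = 10 * time.Second // assumed kubelet default
            max  = 5 * time.Minute  // assumed kubelet default cap
        )
        d := base
        for i := 1; i < consecutiveFailures; i++ {
            d *= 2
            if d >= max {
                return max
            }
        }
        return d
    }

    func main() {
        for n := 1; n <= 6; n++ {
            fmt.Printf("failure %d -> back-off %s\n", n, crashLoopDelay(n))
        }
        // failure 3 -> back-off 40s, matching the ovnkube-controller line above.
    }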
Nov 25 14:54:35 crc kubenswrapper[4806]: I1125 14:54:35.905498 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:35 crc kubenswrapper[4806]: I1125 14:54:35.905728 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:35 crc kubenswrapper[4806]: I1125 14:54:35.905859 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:35 crc kubenswrapper[4806]: I1125 14:54:35.905956 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:35 crc kubenswrapper[4806]: I1125 14:54:35.906063 4806 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:35Z","lastTransitionTime":"2025-11-25T14:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:35 crc kubenswrapper[4806]: I1125 14:54:35.950193 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-8btnj"]
Nov 25 14:54:35 crc kubenswrapper[4806]: I1125 14:54:35.950640 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8btnj"
Nov 25 14:54:35 crc kubenswrapper[4806]: I1125 14:54:35.953495 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Nov 25 14:54:35 crc kubenswrapper[4806]: I1125 14:54:35.953856 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Nov 25 14:54:35 crc kubenswrapper[4806]: I1125 14:54:35.953962 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Nov 25 14:54:35 crc kubenswrapper[4806]: I1125 14:54:35.954069 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Nov 25 14:54:35 crc kubenswrapper[4806]: I1125 14:54:35.967439 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-2mmdk" podStartSLOduration=86.967423327 podStartE2EDuration="1m26.967423327s" podCreationTimestamp="2025-11-25 14:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:54:35.967101448 +0000 UTC m=+108.619243859" watchObservedRunningTime="2025-11-25 14:54:35.967423327 +0000 UTC m=+108.619565738"
Nov 25 14:54:35 crc kubenswrapper[4806]: I1125 14:54:35.992868 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=35.992848811 podStartE2EDuration="35.992848811s" podCreationTimestamp="2025-11-25 14:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:54:35.979293175 +0000 UTC m=+108.631435606" watchObservedRunningTime="2025-11-25 14:54:35.992848811 +0000 UTC m=+108.644991222"
Nov 25 14:54:36 crc kubenswrapper[4806]: I1125 14:54:36.003918 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-mwdqt" podStartSLOduration=88.003899747 podStartE2EDuration="1m28.003899747s" podCreationTimestamp="2025-11-25 14:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:54:35.993408896 +0000 UTC m=+108.645551327" watchObservedRunningTime="2025-11-25 14:54:36.003899747 +0000 UTC m=+108.656042158"
Nov 25 14:54:36 crc kubenswrapper[4806]: I1125 14:54:36.015787 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podStartSLOduration=88.015767425 podStartE2EDuration="1m28.015767425s" podCreationTimestamp="2025-11-25 14:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:54:36.003860476 +0000 UTC m=+108.656002887" watchObservedRunningTime="2025-11-25 14:54:36.015767425 +0000 UTC m=+108.667909837"
Nov 25 14:54:36 crc kubenswrapper[4806]: I1125 14:54:36.045612 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-zt8m9" podStartSLOduration=88.045592432 podStartE2EDuration="1m28.045592432s" podCreationTimestamp="2025-11-25 14:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:54:36.034017181 +0000 UTC m=+108.686159612" watchObservedRunningTime="2025-11-25 14:54:36.045592432 +0000 UTC m=+108.697734843"
Nov 25 14:54:36 crc kubenswrapper[4806]: I1125 14:54:36.045889 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-6jcq2" podStartSLOduration=88.04588425 podStartE2EDuration="1m28.04588425s" podCreationTimestamp="2025-11-25 14:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:54:36.045840298 +0000 UTC m=+108.697982709" watchObservedRunningTime="2025-11-25 14:54:36.04588425 +0000 UTC m=+108.698026661"
\"cluster-version-operator-5c965bbfc6-8btnj\" (UID: \"c8c771f1-3331-401f-b936-841715b15f9e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8btnj" Nov 25 14:54:36 crc kubenswrapper[4806]: I1125 14:54:36.056411 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c8c771f1-3331-401f-b936-841715b15f9e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-8btnj\" (UID: \"c8c771f1-3331-401f-b936-841715b15f9e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8btnj" Nov 25 14:54:36 crc kubenswrapper[4806]: I1125 14:54:36.056501 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8c771f1-3331-401f-b936-841715b15f9e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-8btnj\" (UID: \"c8c771f1-3331-401f-b936-841715b15f9e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8btnj" Nov 25 14:54:36 crc kubenswrapper[4806]: I1125 14:54:36.088585 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:54:36 crc kubenswrapper[4806]: I1125 14:54:36.088610 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:36 crc kubenswrapper[4806]: I1125 14:54:36.088633 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:54:36 crc kubenswrapper[4806]: I1125 14:54:36.088713 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:54:36 crc kubenswrapper[4806]: E1125 14:54:36.088814 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f" Nov 25 14:54:36 crc kubenswrapper[4806]: E1125 14:54:36.088921 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:54:36 crc kubenswrapper[4806]: E1125 14:54:36.089028 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:54:36 crc kubenswrapper[4806]: E1125 14:54:36.089081 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:54:36 crc kubenswrapper[4806]: I1125 14:54:36.095104 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=11.095090012 podStartE2EDuration="11.095090012s" podCreationTimestamp="2025-11-25 14:54:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:54:36.094921058 +0000 UTC m=+108.747063499" watchObservedRunningTime="2025-11-25 14:54:36.095090012 +0000 UTC m=+108.747232413" Nov 25 14:54:36 crc kubenswrapper[4806]: I1125 14:54:36.105596 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=55.105578223 podStartE2EDuration="55.105578223s" podCreationTimestamp="2025-11-25 14:53:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:54:36.105369417 +0000 UTC m=+108.757511828" watchObservedRunningTime="2025-11-25 14:54:36.105578223 +0000 UTC m=+108.757720634" Nov 25 14:54:36 crc kubenswrapper[4806]: I1125 14:54:36.131139 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-5lhpk" podStartSLOduration=88.13111981 podStartE2EDuration="1m28.13111981s" podCreationTimestamp="2025-11-25 14:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:54:36.13110114 +0000 UTC m=+108.783243561" watchObservedRunningTime="2025-11-25 14:54:36.13111981 +0000 UTC m=+108.783262211" Nov 25 14:54:36 crc kubenswrapper[4806]: I1125 14:54:36.146092 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=86.146074685 podStartE2EDuration="1m26.146074685s" podCreationTimestamp="2025-11-25 14:53:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:54:36.145678604 +0000 UTC m=+108.797821015" watchObservedRunningTime="2025-11-25 14:54:36.146074685 +0000 UTC m=+108.798217096" Nov 25 14:54:36 crc kubenswrapper[4806]: I1125 14:54:36.157295 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8c771f1-3331-401f-b936-841715b15f9e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-8btnj\" (UID: \"c8c771f1-3331-401f-b936-841715b15f9e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8btnj" Nov 25 14:54:36 crc kubenswrapper[4806]: I1125 14:54:36.157372 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/c8c771f1-3331-401f-b936-841715b15f9e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-8btnj\" (UID: \"c8c771f1-3331-401f-b936-841715b15f9e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8btnj" Nov 25 14:54:36 crc kubenswrapper[4806]: I1125 14:54:36.157416 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8c771f1-3331-401f-b936-841715b15f9e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-8btnj\" (UID: \"c8c771f1-3331-401f-b936-841715b15f9e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8btnj" Nov 25 14:54:36 crc kubenswrapper[4806]: I1125 14:54:36.157453 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c8c771f1-3331-401f-b936-841715b15f9e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-8btnj\" (UID: \"c8c771f1-3331-401f-b936-841715b15f9e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8btnj" Nov 25 14:54:36 crc kubenswrapper[4806]: I1125 14:54:36.157501 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c8c771f1-3331-401f-b936-841715b15f9e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-8btnj\" (UID: \"c8c771f1-3331-401f-b936-841715b15f9e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8btnj" Nov 25 14:54:36 crc kubenswrapper[4806]: I1125 14:54:36.157537 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c8c771f1-3331-401f-b936-841715b15f9e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-8btnj\" (UID: \"c8c771f1-3331-401f-b936-841715b15f9e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8btnj" Nov 25 14:54:36 crc kubenswrapper[4806]: I1125 14:54:36.157580 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c8c771f1-3331-401f-b936-841715b15f9e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-8btnj\" (UID: \"c8c771f1-3331-401f-b936-841715b15f9e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8btnj" Nov 25 14:54:36 crc kubenswrapper[4806]: I1125 14:54:36.158617 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c8c771f1-3331-401f-b936-841715b15f9e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-8btnj\" (UID: \"c8c771f1-3331-401f-b936-841715b15f9e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8btnj" Nov 25 14:54:36 crc kubenswrapper[4806]: I1125 14:54:36.163645 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8c771f1-3331-401f-b936-841715b15f9e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-8btnj\" (UID: \"c8c771f1-3331-401f-b936-841715b15f9e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8btnj" Nov 25 14:54:36 crc kubenswrapper[4806]: I1125 14:54:36.181653 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c8c771f1-3331-401f-b936-841715b15f9e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-8btnj\" (UID: 
\"c8c771f1-3331-401f-b936-841715b15f9e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8btnj" Nov 25 14:54:36 crc kubenswrapper[4806]: I1125 14:54:36.229411 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=89.229392632 podStartE2EDuration="1m29.229392632s" podCreationTimestamp="2025-11-25 14:53:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:54:36.216390432 +0000 UTC m=+108.868532863" watchObservedRunningTime="2025-11-25 14:54:36.229392632 +0000 UTC m=+108.881535043" Nov 25 14:54:36 crc kubenswrapper[4806]: I1125 14:54:36.264694 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8btnj" Nov 25 14:54:36 crc kubenswrapper[4806]: I1125 14:54:36.603597 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8btnj" event={"ID":"c8c771f1-3331-401f-b936-841715b15f9e","Type":"ContainerStarted","Data":"fb3ef83325c5c67e4ce15a5c879657c899212728513d801185cc96ffddce2759"} Nov 25 14:54:36 crc kubenswrapper[4806]: I1125 14:54:36.603755 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8btnj" event={"ID":"c8c771f1-3331-401f-b936-841715b15f9e","Type":"ContainerStarted","Data":"8f2de6c8840a17613a1037a72d0786c0b8ad4054495cacdba3f7336d5caa2189"} Nov 25 14:54:36 crc kubenswrapper[4806]: I1125 14:54:36.616272 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8btnj" podStartSLOduration=88.616250697 podStartE2EDuration="1m28.616250697s" podCreationTimestamp="2025-11-25 14:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:54:36.616143454 +0000 UTC m=+109.268285865" watchObservedRunningTime="2025-11-25 14:54:36.616250697 +0000 UTC m=+109.268393128" Nov 25 14:54:38 crc kubenswrapper[4806]: I1125 14:54:38.112623 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:54:38 crc kubenswrapper[4806]: I1125 14:54:38.112678 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:38 crc kubenswrapper[4806]: I1125 14:54:38.112709 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:54:38 crc kubenswrapper[4806]: I1125 14:54:38.112759 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:54:38 crc kubenswrapper[4806]: E1125 14:54:38.113906 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:54:38 crc kubenswrapper[4806]: E1125 14:54:38.113967 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:54:38 crc kubenswrapper[4806]: E1125 14:54:38.114074 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:54:38 crc kubenswrapper[4806]: E1125 14:54:38.114180 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f" Nov 25 14:54:40 crc kubenswrapper[4806]: I1125 14:54:40.089276 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:54:40 crc kubenswrapper[4806]: E1125 14:54:40.089409 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f" Nov 25 14:54:40 crc kubenswrapper[4806]: I1125 14:54:40.089490 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:54:40 crc kubenswrapper[4806]: I1125 14:54:40.089493 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:40 crc kubenswrapper[4806]: E1125 14:54:40.089688 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:54:40 crc kubenswrapper[4806]: E1125 14:54:40.089813 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:54:40 crc kubenswrapper[4806]: I1125 14:54:40.089548 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:54:40 crc kubenswrapper[4806]: E1125 14:54:40.089906 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:54:42 crc kubenswrapper[4806]: I1125 14:54:42.088610 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:54:42 crc kubenswrapper[4806]: I1125 14:54:42.088673 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:54:42 crc kubenswrapper[4806]: I1125 14:54:42.088658 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:42 crc kubenswrapper[4806]: I1125 14:54:42.088647 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:54:42 crc kubenswrapper[4806]: E1125 14:54:42.088787 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:54:42 crc kubenswrapper[4806]: E1125 14:54:42.088906 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f" Nov 25 14:54:42 crc kubenswrapper[4806]: E1125 14:54:42.089003 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:54:42 crc kubenswrapper[4806]: E1125 14:54:42.089090 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:54:43 crc kubenswrapper[4806]: I1125 14:54:43.626033 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mwdqt_8b7ddd20-62b7-4687-9982-83cf1cbac3ab/kube-multus/1.log" Nov 25 14:54:43 crc kubenswrapper[4806]: I1125 14:54:43.627362 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mwdqt_8b7ddd20-62b7-4687-9982-83cf1cbac3ab/kube-multus/0.log" Nov 25 14:54:43 crc kubenswrapper[4806]: I1125 14:54:43.627426 4806 generic.go:334] "Generic (PLEG): container finished" podID="8b7ddd20-62b7-4687-9982-83cf1cbac3ab" containerID="6a4c6d7aeb19206fd79e28c558467bda58d58c4118d27bb9aeb9de68a55a67a8" exitCode=1 Nov 25 14:54:43 crc kubenswrapper[4806]: I1125 14:54:43.627469 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mwdqt" event={"ID":"8b7ddd20-62b7-4687-9982-83cf1cbac3ab","Type":"ContainerDied","Data":"6a4c6d7aeb19206fd79e28c558467bda58d58c4118d27bb9aeb9de68a55a67a8"} Nov 25 14:54:43 crc kubenswrapper[4806]: I1125 14:54:43.627586 4806 scope.go:117] "RemoveContainer" containerID="a0cb67ea6e13645bb6513fcd0d90197317fd39f29f25deddf11e1110572e1986" Nov 25 14:54:43 crc kubenswrapper[4806]: I1125 14:54:43.628073 4806 scope.go:117] "RemoveContainer" containerID="6a4c6d7aeb19206fd79e28c558467bda58d58c4118d27bb9aeb9de68a55a67a8" Nov 25 14:54:43 crc kubenswrapper[4806]: E1125 14:54:43.628250 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-mwdqt_openshift-multus(8b7ddd20-62b7-4687-9982-83cf1cbac3ab)\"" pod="openshift-multus/multus-mwdqt" podUID="8b7ddd20-62b7-4687-9982-83cf1cbac3ab" Nov 25 14:54:44 crc kubenswrapper[4806]: I1125 14:54:44.088282 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:54:44 crc kubenswrapper[4806]: I1125 14:54:44.088283 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:44 crc kubenswrapper[4806]: E1125 14:54:44.088837 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f" Nov 25 14:54:44 crc kubenswrapper[4806]: I1125 14:54:44.088465 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:54:44 crc kubenswrapper[4806]: I1125 14:54:44.088400 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:54:44 crc kubenswrapper[4806]: E1125 14:54:44.088950 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:54:44 crc kubenswrapper[4806]: E1125 14:54:44.089111 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:54:44 crc kubenswrapper[4806]: E1125 14:54:44.089139 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:54:44 crc kubenswrapper[4806]: I1125 14:54:44.631395 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mwdqt_8b7ddd20-62b7-4687-9982-83cf1cbac3ab/kube-multus/1.log" Nov 25 14:54:46 crc kubenswrapper[4806]: I1125 14:54:46.088583 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:46 crc kubenswrapper[4806]: E1125 14:54:46.088938 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:54:46 crc kubenswrapper[4806]: I1125 14:54:46.088953 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:54:46 crc kubenswrapper[4806]: I1125 14:54:46.089005 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:54:46 crc kubenswrapper[4806]: I1125 14:54:46.089067 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:54:46 crc kubenswrapper[4806]: E1125 14:54:46.089212 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:54:46 crc kubenswrapper[4806]: E1125 14:54:46.089366 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:54:46 crc kubenswrapper[4806]: E1125 14:54:46.089424 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f" Nov 25 14:54:46 crc kubenswrapper[4806]: I1125 14:54:46.090878 4806 scope.go:117] "RemoveContainer" containerID="ae1b49fe171571509d8fd7d94ba703e20354f20445a4d493b22eb1d6a1649368" Nov 25 14:54:46 crc kubenswrapper[4806]: E1125 14:54:46.091249 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-69wls_openshift-ovn-kubernetes(0fff40d8-fd9f-49da-953f-89894b4ef3a1)\"" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" Nov 25 14:54:48 crc kubenswrapper[4806]: I1125 14:54:48.088363 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:54:48 crc kubenswrapper[4806]: I1125 14:54:48.088438 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:54:48 crc kubenswrapper[4806]: I1125 14:54:48.088520 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:48 crc kubenswrapper[4806]: I1125 14:54:48.088548 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:54:48 crc kubenswrapper[4806]: E1125 14:54:48.088762 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:54:48 crc kubenswrapper[4806]: E1125 14:54:48.088828 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:54:48 crc kubenswrapper[4806]: E1125 14:54:48.090010 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:54:48 crc kubenswrapper[4806]: E1125 14:54:48.090332 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f" Nov 25 14:54:48 crc kubenswrapper[4806]: E1125 14:54:48.098756 4806 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 25 14:54:48 crc kubenswrapper[4806]: E1125 14:54:48.211759 4806 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 25 14:54:50 crc kubenswrapper[4806]: I1125 14:54:50.089043 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:54:50 crc kubenswrapper[4806]: I1125 14:54:50.089212 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:50 crc kubenswrapper[4806]: I1125 14:54:50.089287 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:54:50 crc kubenswrapper[4806]: E1125 14:54:50.089260 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:54:50 crc kubenswrapper[4806]: E1125 14:54:50.090018 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:54:50 crc kubenswrapper[4806]: I1125 14:54:50.090063 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:54:50 crc kubenswrapper[4806]: E1125 14:54:50.090117 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:54:50 crc kubenswrapper[4806]: E1125 14:54:50.090191 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f" Nov 25 14:54:52 crc kubenswrapper[4806]: I1125 14:54:52.089413 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:54:52 crc kubenswrapper[4806]: I1125 14:54:52.089471 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:52 crc kubenswrapper[4806]: I1125 14:54:52.089422 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:54:52 crc kubenswrapper[4806]: I1125 14:54:52.089422 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:54:52 crc kubenswrapper[4806]: E1125 14:54:52.089858 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:54:52 crc kubenswrapper[4806]: E1125 14:54:52.089968 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:54:52 crc kubenswrapper[4806]: E1125 14:54:52.090073 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:54:52 crc kubenswrapper[4806]: E1125 14:54:52.090162 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f" Nov 25 14:54:53 crc kubenswrapper[4806]: E1125 14:54:53.212603 4806 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 25 14:54:54 crc kubenswrapper[4806]: I1125 14:54:54.088412 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:54:54 crc kubenswrapper[4806]: I1125 14:54:54.088444 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:54:54 crc kubenswrapper[4806]: E1125 14:54:54.088554 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:54:54 crc kubenswrapper[4806]: E1125 14:54:54.089071 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:54:54 crc kubenswrapper[4806]: I1125 14:54:54.089085 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:54:54 crc kubenswrapper[4806]: E1125 14:54:54.089224 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f" Nov 25 14:54:54 crc kubenswrapper[4806]: I1125 14:54:54.089394 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:54 crc kubenswrapper[4806]: E1125 14:54:54.089568 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:54:56 crc kubenswrapper[4806]: I1125 14:54:56.092828 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:56 crc kubenswrapper[4806]: E1125 14:54:56.092974 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:54:56 crc kubenswrapper[4806]: I1125 14:54:56.093198 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:54:56 crc kubenswrapper[4806]: I1125 14:54:56.093235 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:54:56 crc kubenswrapper[4806]: E1125 14:54:56.093260 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f" Nov 25 14:54:56 crc kubenswrapper[4806]: E1125 14:54:56.093382 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:54:56 crc kubenswrapper[4806]: I1125 14:54:56.093511 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:54:56 crc kubenswrapper[4806]: E1125 14:54:56.093589 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:54:58 crc kubenswrapper[4806]: I1125 14:54:58.088842 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:54:58 crc kubenswrapper[4806]: I1125 14:54:58.088892 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:54:58 crc kubenswrapper[4806]: I1125 14:54:58.088842 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:54:58 crc kubenswrapper[4806]: I1125 14:54:58.088864 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:58 crc kubenswrapper[4806]: E1125 14:54:58.090604 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:54:58 crc kubenswrapper[4806]: E1125 14:54:58.090688 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:54:58 crc kubenswrapper[4806]: E1125 14:54:58.090519 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f" Nov 25 14:54:58 crc kubenswrapper[4806]: E1125 14:54:58.091060 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:54:58 crc kubenswrapper[4806]: I1125 14:54:58.091086 4806 scope.go:117] "RemoveContainer" containerID="6a4c6d7aeb19206fd79e28c558467bda58d58c4118d27bb9aeb9de68a55a67a8" Nov 25 14:54:58 crc kubenswrapper[4806]: E1125 14:54:58.213100 4806 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 25 14:54:58 crc kubenswrapper[4806]: I1125 14:54:58.673648 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mwdqt_8b7ddd20-62b7-4687-9982-83cf1cbac3ab/kube-multus/1.log" Nov 25 14:54:58 crc kubenswrapper[4806]: I1125 14:54:58.673941 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mwdqt" event={"ID":"8b7ddd20-62b7-4687-9982-83cf1cbac3ab","Type":"ContainerStarted","Data":"f102e481dfaccdfce5f39caa4beba0d09e366619cf92b1c1314ed49eea807f37"} Nov 25 14:55:00 crc kubenswrapper[4806]: I1125 14:55:00.088953 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:55:00 crc kubenswrapper[4806]: I1125 14:55:00.089065 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:55:00 crc kubenswrapper[4806]: E1125 14:55:00.089109 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f" Nov 25 14:55:00 crc kubenswrapper[4806]: I1125 14:55:00.089082 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:55:00 crc kubenswrapper[4806]: I1125 14:55:00.089203 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:55:00 crc kubenswrapper[4806]: E1125 14:55:00.089390 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:55:00 crc kubenswrapper[4806]: E1125 14:55:00.089448 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:55:00 crc kubenswrapper[4806]: E1125 14:55:00.089565 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:55:01 crc kubenswrapper[4806]: I1125 14:55:01.089351 4806 scope.go:117] "RemoveContainer" containerID="ae1b49fe171571509d8fd7d94ba703e20354f20445a4d493b22eb1d6a1649368" Nov 25 14:55:01 crc kubenswrapper[4806]: I1125 14:55:01.687005 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69wls_0fff40d8-fd9f-49da-953f-89894b4ef3a1/ovnkube-controller/3.log" Nov 25 14:55:01 crc kubenswrapper[4806]: I1125 14:55:01.690048 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" event={"ID":"0fff40d8-fd9f-49da-953f-89894b4ef3a1","Type":"ContainerStarted","Data":"ecd3ec59e324990de76ee29bba1040ffd60c6d31c080972f7915e52c9a63770e"} Nov 25 14:55:01 crc kubenswrapper[4806]: I1125 14:55:01.690708 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:55:02 crc kubenswrapper[4806]: I1125 14:55:02.089193 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:55:02 crc kubenswrapper[4806]: E1125 14:55:02.089655 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:55:02 crc kubenswrapper[4806]: I1125 14:55:02.089268 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:55:02 crc kubenswrapper[4806]: E1125 14:55:02.089733 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:55:02 crc kubenswrapper[4806]: I1125 14:55:02.089431 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:55:02 crc kubenswrapper[4806]: E1125 14:55:02.089794 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:55:02 crc kubenswrapper[4806]: I1125 14:55:02.089228 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:55:02 crc kubenswrapper[4806]: E1125 14:55:02.089868 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f" Nov 25 14:55:02 crc kubenswrapper[4806]: I1125 14:55:02.101577 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" podStartSLOduration=114.10154451 podStartE2EDuration="1m54.10154451s" podCreationTimestamp="2025-11-25 14:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:01.728655013 +0000 UTC m=+134.380797444" watchObservedRunningTime="2025-11-25 14:55:02.10154451 +0000 UTC m=+134.753686921" Nov 25 14:55:02 crc kubenswrapper[4806]: I1125 14:55:02.103021 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-lsrxh"] Nov 25 14:55:02 crc kubenswrapper[4806]: I1125 14:55:02.693374 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:55:02 crc kubenswrapper[4806]: E1125 14:55:02.693493 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f" Nov 25 14:55:03 crc kubenswrapper[4806]: E1125 14:55:03.215183 4806 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 25 14:55:04 crc kubenswrapper[4806]: I1125 14:55:04.089491 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:55:04 crc kubenswrapper[4806]: I1125 14:55:04.089566 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:55:04 crc kubenswrapper[4806]: E1125 14:55:04.089619 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f" Nov 25 14:55:04 crc kubenswrapper[4806]: I1125 14:55:04.089633 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:55:04 crc kubenswrapper[4806]: I1125 14:55:04.089679 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:55:04 crc kubenswrapper[4806]: E1125 14:55:04.089810 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:55:04 crc kubenswrapper[4806]: E1125 14:55:04.090091 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:55:04 crc kubenswrapper[4806]: E1125 14:55:04.090159 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:55:06 crc kubenswrapper[4806]: I1125 14:55:06.088621 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:55:06 crc kubenswrapper[4806]: I1125 14:55:06.088736 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:55:06 crc kubenswrapper[4806]: E1125 14:55:06.088790 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f" Nov 25 14:55:06 crc kubenswrapper[4806]: E1125 14:55:06.088897 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:55:06 crc kubenswrapper[4806]: I1125 14:55:06.088968 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:55:06 crc kubenswrapper[4806]: I1125 14:55:06.088979 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:55:06 crc kubenswrapper[4806]: E1125 14:55:06.089018 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:55:06 crc kubenswrapper[4806]: E1125 14:55:06.089068 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:55:08 crc kubenswrapper[4806]: I1125 14:55:08.089136 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:55:08 crc kubenswrapper[4806]: I1125 14:55:08.089237 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:55:08 crc kubenswrapper[4806]: I1125 14:55:08.089851 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:55:08 crc kubenswrapper[4806]: E1125 14:55:08.090119 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:55:08 crc kubenswrapper[4806]: I1125 14:55:08.090135 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:55:08 crc kubenswrapper[4806]: E1125 14:55:08.090238 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:55:08 crc kubenswrapper[4806]: E1125 14:55:08.090353 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:55:08 crc kubenswrapper[4806]: E1125 14:55:08.090482 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lsrxh" podUID="49e22ad0-2903-4ed0-94ad-40d713f99c9f" Nov 25 14:55:09 crc kubenswrapper[4806]: I1125 14:55:09.200005 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 14:55:10 crc kubenswrapper[4806]: I1125 14:55:10.088697 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:55:10 crc kubenswrapper[4806]: I1125 14:55:10.088717 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:55:10 crc kubenswrapper[4806]: I1125 14:55:10.088696 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:55:10 crc kubenswrapper[4806]: I1125 14:55:10.088697 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:55:10 crc kubenswrapper[4806]: I1125 14:55:10.092273 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 25 14:55:10 crc kubenswrapper[4806]: I1125 14:55:10.092277 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 25 14:55:10 crc kubenswrapper[4806]: I1125 14:55:10.092562 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 25 14:55:10 crc kubenswrapper[4806]: I1125 14:55:10.092606 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 25 14:55:10 crc kubenswrapper[4806]: I1125 14:55:10.092607 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 25 14:55:10 crc kubenswrapper[4806]: I1125 14:55:10.094371 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 25 14:55:14 crc kubenswrapper[4806]: I1125 14:55:14.777945 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:55:14 crc kubenswrapper[4806]: I1125 14:55:14.791631 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:55:14 crc kubenswrapper[4806]: I1125 14:55:14.879477 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:14 crc kubenswrapper[4806]: E1125 14:55:14.879611 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:57:16.87958391 +0000 UTC m=+269.531726331 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:14 crc kubenswrapper[4806]: I1125 14:55:14.879656 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:55:14 crc kubenswrapper[4806]: I1125 14:55:14.879696 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:55:14 crc kubenswrapper[4806]: I1125 14:55:14.879744 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:55:14 crc kubenswrapper[4806]: I1125 14:55:14.881369 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:55:14 crc kubenswrapper[4806]: I1125 14:55:14.884187 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:55:14 crc kubenswrapper[4806]: I1125 14:55:14.891931 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:55:14 crc kubenswrapper[4806]: I1125 14:55:14.903666 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:55:14 crc kubenswrapper[4806]: I1125 14:55:14.910715 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:55:14 crc kubenswrapper[4806]: I1125 14:55:14.924697 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:55:15 crc kubenswrapper[4806]: W1125 14:55:15.213626 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-c9700b6f0da8d1bcbb1ec636484a8f243ba5d1517d6195c47ca763075c4f1287 WatchSource:0}: Error finding container c9700b6f0da8d1bcbb1ec636484a8f243ba5d1517d6195c47ca763075c4f1287: Status 404 returned error can't find the container with id c9700b6f0da8d1bcbb1ec636484a8f243ba5d1517d6195c47ca763075c4f1287 Nov 25 14:55:15 crc kubenswrapper[4806]: W1125 14:55:15.354798 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-3dd2c403e7d24c51df24579963b0c17cbc78a22c1e2b5aceb04822acf0758487 WatchSource:0}: Error finding container 3dd2c403e7d24c51df24579963b0c17cbc78a22c1e2b5aceb04822acf0758487: Status 404 returned error can't find the container with id 3dd2c403e7d24c51df24579963b0c17cbc78a22c1e2b5aceb04822acf0758487 Nov 25 14:55:15 crc kubenswrapper[4806]: I1125 14:55:15.738599 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"041163a730c423a8b4bcdaf5f64e21d515e7b16915193d4026495d33129140f6"} Nov 25 14:55:15 crc kubenswrapper[4806]: I1125 14:55:15.738698 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"c9700b6f0da8d1bcbb1ec636484a8f243ba5d1517d6195c47ca763075c4f1287"} Nov 25 14:55:15 crc kubenswrapper[4806]: I1125 14:55:15.739977 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"326ab3d40a6552631e473eb7676ad5ed7785ded91366d49cc1acb2416eb459ba"} Nov 25 14:55:15 crc kubenswrapper[4806]: I1125 14:55:15.740010 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"f2d9ac3d050f8b98a3cf20bcdafa690b33cd0ba2d33f273ea94a5b020e015cc4"} Nov 25 14:55:15 crc kubenswrapper[4806]: I1125 14:55:15.740187 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:55:15 crc kubenswrapper[4806]: I1125 14:55:15.741287 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"91f2e5ecf16235462ff42d138ab0ac0c900bf05bdb1622dfa56a9804e654b35c"} Nov 25 14:55:15 crc kubenswrapper[4806]: I1125 14:55:15.741393 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"3dd2c403e7d24c51df24579963b0c17cbc78a22c1e2b5aceb04822acf0758487"} Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.507640 4806 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeReady" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.544251 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hcfmr"] Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.544696 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hcfmr" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.549756 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.549914 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.549922 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.550280 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.551527 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-xklng"] Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.551896 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-xklng" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.563154 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-bn2sz"] Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.564003 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-576cp"] Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.564622 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.564713 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.565139 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.569442 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.569442 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.569983 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trxgq"] Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.570646 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gjhkx"] Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.570812 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.571081 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.571377 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.571536 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-p957m"] Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.571607 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trxgq" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.571798 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.572181 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gjhkx" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.571858 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.571860 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.571958 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.581648 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.581882 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.581932 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-g6w68"] Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.581651 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.582017 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-p957m" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.582725 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-xx6dj"] Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.583029 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-xx6dj" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.583125 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-g6w68" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.583904 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zfhjl"] Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.584095 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.584215 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-4c9r4"] Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.584386 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zfhjl" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.584688 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-4c9r4" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.586670 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.587345 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.587485 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.588509 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.591414 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.591538 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.592013 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.592117 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.592242 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.592467 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.594697 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5tx2"] Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.595101 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5tx2" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.595680 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-k8p4x"] Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.596023 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-k8p4x" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.596540 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-9tjs2"] Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.596891 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-9tjs2" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.598798 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-bn2sz\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.598839 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/307ebecf-190d-447f-ac14-28516ef87e6a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-zfhjl\" (UID: \"307ebecf-190d-447f-ac14-28516ef87e6a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zfhjl" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.598871 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-bn2sz\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.598896 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be0fd1be-42ae-4954-99f6-14807b522398-serving-cert\") pod \"openshift-config-operator-7777fb866f-hcfmr\" (UID: \"be0fd1be-42ae-4954-99f6-14807b522398\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hcfmr" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.598919 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx92b\" (UniqueName: \"kubernetes.io/projected/3a93da81-98cb-4a53-9c02-60cc144ebf9d-kube-api-access-fx92b\") pod \"apiserver-76f77b778f-g6w68\" (UID: \"3a93da81-98cb-4a53-9c02-60cc144ebf9d\") " pod="openshift-apiserver/apiserver-76f77b778f-g6w68" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.598945 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-bn2sz\" (UID: 
\"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.598965 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2be4e761-7ffb-42b6-8656-8f591d749624-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-trxgq\" (UID: \"2be4e761-7ffb-42b6-8656-8f591d749624\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trxgq" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.598986 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3a93da81-98cb-4a53-9c02-60cc144ebf9d-image-import-ca\") pod \"apiserver-76f77b778f-g6w68\" (UID: \"3a93da81-98cb-4a53-9c02-60cc144ebf9d\") " pod="openshift-apiserver/apiserver-76f77b778f-g6w68" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.599012 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dqh7\" (UniqueName: \"kubernetes.io/projected/4bb1d689-2d28-457a-9c48-0b21c3ac56b2-kube-api-access-2dqh7\") pod \"dns-operator-744455d44c-4c9r4\" (UID: \"4bb1d689-2d28-457a-9c48-0b21c3ac56b2\") " pod="openshift-dns-operator/dns-operator-744455d44c-4c9r4" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.599033 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2be4e761-7ffb-42b6-8656-8f591d749624-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-trxgq\" (UID: \"2be4e761-7ffb-42b6-8656-8f591d749624\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trxgq" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.599050 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.599137 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-mvkmg"] Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.599053 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-bn2sz\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.599565 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3a93da81-98cb-4a53-9c02-60cc144ebf9d-etcd-serving-ca\") pod \"apiserver-76f77b778f-g6w68\" (UID: \"3a93da81-98cb-4a53-9c02-60cc144ebf9d\") " pod="openshift-apiserver/apiserver-76f77b778f-g6w68" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.599584 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/307ebecf-190d-447f-ac14-28516ef87e6a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-zfhjl\" (UID: \"307ebecf-190d-447f-ac14-28516ef87e6a\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zfhjl" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.599602 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a93da81-98cb-4a53-9c02-60cc144ebf9d-trusted-ca-bundle\") pod \"apiserver-76f77b778f-g6w68\" (UID: \"3a93da81-98cb-4a53-9c02-60cc144ebf9d\") " pod="openshift-apiserver/apiserver-76f77b778f-g6w68" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.599619 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a93da81-98cb-4a53-9c02-60cc144ebf9d-serving-cert\") pod \"apiserver-76f77b778f-g6w68\" (UID: \"3a93da81-98cb-4a53-9c02-60cc144ebf9d\") " pod="openshift-apiserver/apiserver-76f77b778f-g6w68" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.599636 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg8p4\" (UniqueName: \"kubernetes.io/projected/f49c7a82-aef3-47bf-a1bd-8b443b98be2d-kube-api-access-sg8p4\") pod \"openshift-apiserver-operator-796bbdcf4f-gjhkx\" (UID: \"f49c7a82-aef3-47bf-a1bd-8b443b98be2d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gjhkx" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.599652 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a93da81-98cb-4a53-9c02-60cc144ebf9d-audit-dir\") pod \"apiserver-76f77b778f-g6w68\" (UID: \"3a93da81-98cb-4a53-9c02-60cc144ebf9d\") " pod="openshift-apiserver/apiserver-76f77b778f-g6w68" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.599668 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/be0fd1be-42ae-4954-99f6-14807b522398-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hcfmr\" (UID: \"be0fd1be-42ae-4954-99f6-14807b522398\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hcfmr" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.599695 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skwfs\" (UniqueName: \"kubernetes.io/projected/be0fd1be-42ae-4954-99f6-14807b522398-kube-api-access-skwfs\") pod \"openshift-config-operator-7777fb866f-hcfmr\" (UID: \"be0fd1be-42ae-4954-99f6-14807b522398\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hcfmr" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.599700 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfbwx"] Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.599708 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3a93da81-98cb-4a53-9c02-60cc144ebf9d-audit\") pod \"apiserver-76f77b778f-g6w68\" (UID: \"3a93da81-98cb-4a53-9c02-60cc144ebf9d\") " pod="openshift-apiserver/apiserver-76f77b778f-g6w68" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.599723 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/3a93da81-98cb-4a53-9c02-60cc144ebf9d-etcd-client\") pod \"apiserver-76f77b778f-g6w68\" (UID: \"3a93da81-98cb-4a53-9c02-60cc144ebf9d\") " pod="openshift-apiserver/apiserver-76f77b778f-g6w68" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.599739 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-bn2sz\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.599758 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-bn2sz\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.599775 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-299jk\" (UniqueName: \"kubernetes.io/projected/ca7da513-6cf5-43fc-afbe-ab1c8e785130-kube-api-access-299jk\") pod \"oauth-openshift-558db77b4-bn2sz\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.599793 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ca7da513-6cf5-43fc-afbe-ab1c8e785130-audit-dir\") pod \"oauth-openshift-558db77b4-bn2sz\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.599808 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3a93da81-98cb-4a53-9c02-60cc144ebf9d-encryption-config\") pod \"apiserver-76f77b778f-g6w68\" (UID: \"3a93da81-98cb-4a53-9c02-60cc144ebf9d\") " pod="openshift-apiserver/apiserver-76f77b778f-g6w68" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.599826 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3a93da81-98cb-4a53-9c02-60cc144ebf9d-node-pullsecrets\") pod \"apiserver-76f77b778f-g6w68\" (UID: \"3a93da81-98cb-4a53-9c02-60cc144ebf9d\") " pod="openshift-apiserver/apiserver-76f77b778f-g6w68" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.599844 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4bb1d689-2d28-457a-9c48-0b21c3ac56b2-metrics-tls\") pod \"dns-operator-744455d44c-4c9r4\" (UID: \"4bb1d689-2d28-457a-9c48-0b21c3ac56b2\") " pod="openshift-dns-operator/dns-operator-744455d44c-4c9r4" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.599870 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkzmw\" (UniqueName: \"kubernetes.io/projected/2be4e761-7ffb-42b6-8656-8f591d749624-kube-api-access-mkzmw\") pod 
\"cluster-image-registry-operator-dc59b4c8b-trxgq\" (UID: \"2be4e761-7ffb-42b6-8656-8f591d749624\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trxgq" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.599888 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-bn2sz\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.599904 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a93da81-98cb-4a53-9c02-60cc144ebf9d-config\") pod \"apiserver-76f77b778f-g6w68\" (UID: \"3a93da81-98cb-4a53-9c02-60cc144ebf9d\") " pod="openshift-apiserver/apiserver-76f77b778f-g6w68" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.599921 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-bn2sz\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.599939 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-bn2sz\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.599957 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ca7da513-6cf5-43fc-afbe-ab1c8e785130-audit-policies\") pod \"oauth-openshift-558db77b4-bn2sz\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.599974 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-bn2sz\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.599988 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-bn2sz\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.600007 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qbxw\" (UniqueName: 
\"kubernetes.io/projected/307ebecf-190d-447f-ac14-28516ef87e6a-kube-api-access-7qbxw\") pod \"openshift-controller-manager-operator-756b6f6bc6-zfhjl\" (UID: \"307ebecf-190d-447f-ac14-28516ef87e6a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zfhjl" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.600022 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f49c7a82-aef3-47bf-a1bd-8b443b98be2d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-gjhkx\" (UID: \"f49c7a82-aef3-47bf-a1bd-8b443b98be2d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gjhkx" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.600042 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2be4e761-7ffb-42b6-8656-8f591d749624-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-trxgq\" (UID: \"2be4e761-7ffb-42b6-8656-8f591d749624\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trxgq" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.600059 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f49c7a82-aef3-47bf-a1bd-8b443b98be2d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-gjhkx\" (UID: \"f49c7a82-aef3-47bf-a1bd-8b443b98be2d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gjhkx" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.600159 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfbwx" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.600274 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mvkmg" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.600358 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-gjw2g"] Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.600944 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gjw2g" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.601157 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.601466 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.601688 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.601804 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.601871 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.602045 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.602513 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.605884 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-6j244"] Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.615506 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zf4ph"] Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.616113 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-ptx4l"] Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.617002 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ptx4l" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.617505 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-6j244" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.617913 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-zf4ph" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.622441 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.622504 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.622807 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.623254 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.623638 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.623808 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.623866 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.623912 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.624036 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.624153 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.624217 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.624612 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.624661 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.624761 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.624833 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.624865 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.624904 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.625014 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.625060 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 25 14:55:16 crc 
kubenswrapper[4806]: I1125 14:55:16.625141 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.625326 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.625421 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.625551 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.625655 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.625676 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.625838 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.625853 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.625937 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.625990 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.626069 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.626145 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.626212 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.624780 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.624786 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.626227 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.626274 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.626409 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.626010 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.626451 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.626724 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.626730 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.626869 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.626991 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.627007 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.626899 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n727s"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.627692 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.628280 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n727s"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.630386 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fhvbk"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.630738 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.631447 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fhvbk"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.632041 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.632074 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.632128 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.632457 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.632603 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.632684 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.632841 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.633534 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.633967 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.634095 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.634568 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.634786 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.635198 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.644718 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.691399 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.692037 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.692226 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.693194 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.694066 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vj65b"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.694836 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vj65b"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.694985 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ppwt"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.695087 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.695469 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ppwt"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.695684 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.695819 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.696503 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.697009 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-2nrmh"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.697506 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2nrmh"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.697848 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-kfst9"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.698304 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-kfst9"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.700162 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tx5m5"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.700571 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tx5m5"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.700878 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-7jcqc"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.701593 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-7jcqc"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.703046 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-h4m8m"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.703423 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-h4m8m"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.704680 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f49c7a82-aef3-47bf-a1bd-8b443b98be2d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-gjhkx\" (UID: \"f49c7a82-aef3-47bf-a1bd-8b443b98be2d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gjhkx"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.704718 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-bn2sz\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.704740 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/307ebecf-190d-447f-ac14-28516ef87e6a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-zfhjl\" (UID: \"307ebecf-190d-447f-ac14-28516ef87e6a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zfhjl"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.704760 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-bn2sz\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.704776 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be0fd1be-42ae-4954-99f6-14807b522398-serving-cert\") pod \"openshift-config-operator-7777fb866f-hcfmr\" (UID: \"be0fd1be-42ae-4954-99f6-14807b522398\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hcfmr"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.704793 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-bn2sz\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.704811 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx92b\" (UniqueName: \"kubernetes.io/projected/3a93da81-98cb-4a53-9c02-60cc144ebf9d-kube-api-access-fx92b\") pod \"apiserver-76f77b778f-g6w68\" (UID: \"3a93da81-98cb-4a53-9c02-60cc144ebf9d\") " pod="openshift-apiserver/apiserver-76f77b778f-g6w68"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.704829 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2be4e761-7ffb-42b6-8656-8f591d749624-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-trxgq\" (UID: \"2be4e761-7ffb-42b6-8656-8f591d749624\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trxgq"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.704846 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dqh7\" (UniqueName: \"kubernetes.io/projected/4bb1d689-2d28-457a-9c48-0b21c3ac56b2-kube-api-access-2dqh7\") pod \"dns-operator-744455d44c-4c9r4\" (UID: \"4bb1d689-2d28-457a-9c48-0b21c3ac56b2\") " pod="openshift-dns-operator/dns-operator-744455d44c-4c9r4"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.704863 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2be4e761-7ffb-42b6-8656-8f591d749624-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-trxgq\" (UID: \"2be4e761-7ffb-42b6-8656-8f591d749624\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trxgq"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.704880 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3a93da81-98cb-4a53-9c02-60cc144ebf9d-image-import-ca\") pod \"apiserver-76f77b778f-g6w68\" (UID: \"3a93da81-98cb-4a53-9c02-60cc144ebf9d\") " pod="openshift-apiserver/apiserver-76f77b778f-g6w68"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.704896 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-bn2sz\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.704911 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3a93da81-98cb-4a53-9c02-60cc144ebf9d-etcd-serving-ca\") pod \"apiserver-76f77b778f-g6w68\" (UID: \"3a93da81-98cb-4a53-9c02-60cc144ebf9d\") " pod="openshift-apiserver/apiserver-76f77b778f-g6w68"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.704925 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/307ebecf-190d-447f-ac14-28516ef87e6a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-zfhjl\" (UID: \"307ebecf-190d-447f-ac14-28516ef87e6a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zfhjl"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.704940 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a93da81-98cb-4a53-9c02-60cc144ebf9d-trusted-ca-bundle\") pod \"apiserver-76f77b778f-g6w68\" (UID: \"3a93da81-98cb-4a53-9c02-60cc144ebf9d\") " pod="openshift-apiserver/apiserver-76f77b778f-g6w68"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.704955 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a93da81-98cb-4a53-9c02-60cc144ebf9d-serving-cert\") pod \"apiserver-76f77b778f-g6w68\" (UID: \"3a93da81-98cb-4a53-9c02-60cc144ebf9d\") " pod="openshift-apiserver/apiserver-76f77b778f-g6w68"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.704971 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/be0fd1be-42ae-4954-99f6-14807b522398-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hcfmr\" (UID: \"be0fd1be-42ae-4954-99f6-14807b522398\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hcfmr"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.704986 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg8p4\" (UniqueName: \"kubernetes.io/projected/f49c7a82-aef3-47bf-a1bd-8b443b98be2d-kube-api-access-sg8p4\") pod \"openshift-apiserver-operator-796bbdcf4f-gjhkx\" (UID: \"f49c7a82-aef3-47bf-a1bd-8b443b98be2d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gjhkx"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.705001 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a93da81-98cb-4a53-9c02-60cc144ebf9d-audit-dir\") pod \"apiserver-76f77b778f-g6w68\" (UID: \"3a93da81-98cb-4a53-9c02-60cc144ebf9d\") " pod="openshift-apiserver/apiserver-76f77b778f-g6w68"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.705024 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skwfs\" (UniqueName: \"kubernetes.io/projected/be0fd1be-42ae-4954-99f6-14807b522398-kube-api-access-skwfs\") pod \"openshift-config-operator-7777fb866f-hcfmr\" (UID: \"be0fd1be-42ae-4954-99f6-14807b522398\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hcfmr"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.705040 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-bn2sz\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.705056 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-bn2sz\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.705071 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3a93da81-98cb-4a53-9c02-60cc144ebf9d-audit\") pod \"apiserver-76f77b778f-g6w68\" (UID: \"3a93da81-98cb-4a53-9c02-60cc144ebf9d\") " pod="openshift-apiserver/apiserver-76f77b778f-g6w68"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.705087 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3a93da81-98cb-4a53-9c02-60cc144ebf9d-etcd-client\") pod \"apiserver-76f77b778f-g6w68\" (UID: \"3a93da81-98cb-4a53-9c02-60cc144ebf9d\") " pod="openshift-apiserver/apiserver-76f77b778f-g6w68"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.705104 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-299jk\" (UniqueName: \"kubernetes.io/projected/ca7da513-6cf5-43fc-afbe-ab1c8e785130-kube-api-access-299jk\") pod \"oauth-openshift-558db77b4-bn2sz\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.705123 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ca7da513-6cf5-43fc-afbe-ab1c8e785130-audit-dir\") pod \"oauth-openshift-558db77b4-bn2sz\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.705138 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3a93da81-98cb-4a53-9c02-60cc144ebf9d-encryption-config\") pod \"apiserver-76f77b778f-g6w68\" (UID: \"3a93da81-98cb-4a53-9c02-60cc144ebf9d\") " pod="openshift-apiserver/apiserver-76f77b778f-g6w68"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.705153 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4bb1d689-2d28-457a-9c48-0b21c3ac56b2-metrics-tls\") pod \"dns-operator-744455d44c-4c9r4\" (UID: \"4bb1d689-2d28-457a-9c48-0b21c3ac56b2\") " pod="openshift-dns-operator/dns-operator-744455d44c-4c9r4"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.705169 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3a93da81-98cb-4a53-9c02-60cc144ebf9d-node-pullsecrets\") pod \"apiserver-76f77b778f-g6w68\" (UID: \"3a93da81-98cb-4a53-9c02-60cc144ebf9d\") " pod="openshift-apiserver/apiserver-76f77b778f-g6w68"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.705185 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkzmw\" (UniqueName: \"kubernetes.io/projected/2be4e761-7ffb-42b6-8656-8f591d749624-kube-api-access-mkzmw\") pod \"cluster-image-registry-operator-dc59b4c8b-trxgq\" (UID: \"2be4e761-7ffb-42b6-8656-8f591d749624\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trxgq"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.705204 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-bn2sz\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.705222 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a93da81-98cb-4a53-9c02-60cc144ebf9d-config\") pod \"apiserver-76f77b778f-g6w68\" (UID: \"3a93da81-98cb-4a53-9c02-60cc144ebf9d\") " pod="openshift-apiserver/apiserver-76f77b778f-g6w68"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.705239 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-bn2sz\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.705256 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-bn2sz\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.705272 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ca7da513-6cf5-43fc-afbe-ab1c8e785130-audit-policies\") pod \"oauth-openshift-558db77b4-bn2sz\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.705291 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-bn2sz\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.705308 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-bn2sz\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.705342 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qbxw\" (UniqueName: \"kubernetes.io/projected/307ebecf-190d-447f-ac14-28516ef87e6a-kube-api-access-7qbxw\") pod \"openshift-controller-manager-operator-756b6f6bc6-zfhjl\" (UID: \"307ebecf-190d-447f-ac14-28516ef87e6a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zfhjl"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.705358 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f49c7a82-aef3-47bf-a1bd-8b443b98be2d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-gjhkx\" (UID: \"f49c7a82-aef3-47bf-a1bd-8b443b98be2d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gjhkx"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.705384 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2be4e761-7ffb-42b6-8656-8f591d749624-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-trxgq\" (UID: \"2be4e761-7ffb-42b6-8656-8f591d749624\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trxgq"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.706462 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2be4e761-7ffb-42b6-8656-8f591d749624-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-trxgq\" (UID: \"2be4e761-7ffb-42b6-8656-8f591d749624\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trxgq"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.708284 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-bn2sz\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.709100 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f49c7a82-aef3-47bf-a1bd-8b443b98be2d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-gjhkx\" (UID: \"f49c7a82-aef3-47bf-a1bd-8b443b98be2d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gjhkx"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.711629 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-bn2sz\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.712195 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/307ebecf-190d-447f-ac14-28516ef87e6a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-zfhjl\" (UID: \"307ebecf-190d-447f-ac14-28516ef87e6a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zfhjl"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.712442 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.712607 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-j4l9j"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.713293 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-j4l9j"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.713501 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9shgk"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.713789 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/be0fd1be-42ae-4954-99f6-14807b522398-available-featuregates\") pod \"openshift-config-operator-7777fb866f-hcfmr\" (UID: \"be0fd1be-42ae-4954-99f6-14807b522398\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hcfmr"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.714168 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9shgk"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.714193 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3a93da81-98cb-4a53-9c02-60cc144ebf9d-etcd-serving-ca\") pod \"apiserver-76f77b778f-g6w68\" (UID: \"3a93da81-98cb-4a53-9c02-60cc144ebf9d\") " pod="openshift-apiserver/apiserver-76f77b778f-g6w68"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.715083 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/3a93da81-98cb-4a53-9c02-60cc144ebf9d-image-import-ca\") pod \"apiserver-76f77b778f-g6w68\" (UID: \"3a93da81-98cb-4a53-9c02-60cc144ebf9d\") " pod="openshift-apiserver/apiserver-76f77b778f-g6w68"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.715564 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-grv4v"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.715652 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-bn2sz\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.716098 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-grv4v"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.716110 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-bn2sz\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.716239 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a93da81-98cb-4a53-9c02-60cc144ebf9d-audit-dir\") pod \"apiserver-76f77b778f-g6w68\" (UID: \"3a93da81-98cb-4a53-9c02-60cc144ebf9d\") " pod="openshift-apiserver/apiserver-76f77b778f-g6w68"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.717131 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-bn2sz\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.718002 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-bn2sz\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.718696 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/3a93da81-98cb-4a53-9c02-60cc144ebf9d-audit\") pod \"apiserver-76f77b778f-g6w68\" (UID: \"3a93da81-98cb-4a53-9c02-60cc144ebf9d\") " pod="openshift-apiserver/apiserver-76f77b778f-g6w68"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.719567 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/2be4e761-7ffb-42b6-8656-8f591d749624-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-trxgq\" (UID: \"2be4e761-7ffb-42b6-8656-8f591d749624\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trxgq"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.719808 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be0fd1be-42ae-4954-99f6-14807b522398-serving-cert\") pod \"openshift-config-operator-7777fb866f-hcfmr\" (UID: \"be0fd1be-42ae-4954-99f6-14807b522398\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hcfmr"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.720246 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a93da81-98cb-4a53-9c02-60cc144ebf9d-config\") pod \"apiserver-76f77b778f-g6w68\" (UID: \"3a93da81-98cb-4a53-9c02-60cc144ebf9d\") " pod="openshift-apiserver/apiserver-76f77b778f-g6w68"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.722615 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3a93da81-98cb-4a53-9c02-60cc144ebf9d-encryption-config\") pod \"apiserver-76f77b778f-g6w68\" (UID: \"3a93da81-98cb-4a53-9c02-60cc144ebf9d\") " pod="openshift-apiserver/apiserver-76f77b778f-g6w68"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.722885 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ca7da513-6cf5-43fc-afbe-ab1c8e785130-audit-dir\") pod \"oauth-openshift-558db77b4-bn2sz\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.726509 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3a93da81-98cb-4a53-9c02-60cc144ebf9d-etcd-client\") pod \"apiserver-76f77b778f-g6w68\" (UID: \"3a93da81-98cb-4a53-9c02-60cc144ebf9d\") " pod="openshift-apiserver/apiserver-76f77b778f-g6w68"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.727937 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-bn2sz\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.728774 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a93da81-98cb-4a53-9c02-60cc144ebf9d-trusted-ca-bundle\") pod \"apiserver-76f77b778f-g6w68\" (UID: \"3a93da81-98cb-4a53-9c02-60cc144ebf9d\") " pod="openshift-apiserver/apiserver-76f77b778f-g6w68"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.732561 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4bb1d689-2d28-457a-9c48-0b21c3ac56b2-metrics-tls\") pod \"dns-operator-744455d44c-4c9r4\" (UID: \"4bb1d689-2d28-457a-9c48-0b21c3ac56b2\") " pod="openshift-dns-operator/dns-operator-744455d44c-4c9r4"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.732722 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3a93da81-98cb-4a53-9c02-60cc144ebf9d-node-pullsecrets\") pod \"apiserver-76f77b778f-g6w68\" (UID: \"3a93da81-98cb-4a53-9c02-60cc144ebf9d\") " pod="openshift-apiserver/apiserver-76f77b778f-g6w68"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.732883 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-bn2sz\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.733507 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ca7da513-6cf5-43fc-afbe-ab1c8e785130-audit-policies\") pod \"oauth-openshift-558db77b4-bn2sz\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.733493 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-bn2sz\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.733955 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6hqx6"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.734187 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/307ebecf-190d-447f-ac14-28516ef87e6a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-zfhjl\" (UID: \"307ebecf-190d-447f-ac14-28516ef87e6a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zfhjl"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.735522 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401365-h6lh4"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.735650 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6hqx6"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.736073 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.736744 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-h6lh4"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.737445 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-bn2sz\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.737811 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f49c7a82-aef3-47bf-a1bd-8b443b98be2d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-gjhkx\" (UID: \"f49c7a82-aef3-47bf-a1bd-8b443b98be2d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gjhkx"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.737916 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-28dbr"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.738494 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-28dbr"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.739487 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-bn2sz\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.740365 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gm728"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.740904 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gm728"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.742170 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-lgjgk"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.742954 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-lgjgk"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.743287 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3a93da81-98cb-4a53-9c02-60cc144ebf9d-serving-cert\") pod \"apiserver-76f77b778f-g6w68\" (UID: \"3a93da81-98cb-4a53-9c02-60cc144ebf9d\") " pod="openshift-apiserver/apiserver-76f77b778f-g6w68"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.745732 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.751867 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hcfmr"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.758100 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-576cp"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.761441 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4s68g"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.764922 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4s68g"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.765425 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-xklng"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.766075 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.768531 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-bn2sz"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.768642 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gjhkx"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.770152 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfbwx"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.771156 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-4c9r4"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.772179 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5tx2"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.773473 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-p957m"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.774819 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-9tjs2"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.776585 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-xx6dj"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.776663 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zf4ph"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.784873 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-cszqz"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.786496 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-k8p4x"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.786529 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-7jcqc"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.786630 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-cszqz"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.788575 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fhvbk"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.790809 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vj65b"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.793156 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ppwt"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.796356 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n727s"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.798438 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.799574 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9shgk"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.801486 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-ptx4l"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.802972 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zfhjl"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.803812 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trxgq"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.805343 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-2nrmh"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.808038 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-6j244"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.810277 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-grv4v"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.812436 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.812720 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tx5m5"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.814344 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-mvkmg"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.816033 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-lgjgk"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.817437 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-g6w68"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.818920 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401365-h6lh4"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.820047 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-j4l9j"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.821306 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gm728"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.822571 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-x92cw"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.823557 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-x92cw"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.824007 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-28dbr"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.825592 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4s68g"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.826811 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.827063 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-h4m8m"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.828509 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-cszqz"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.829982 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6hqx6"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.831285 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-x92cw"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.832467 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-8t729"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.833200 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-8t729"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.834002 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-jw49k"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.834879 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-jw49k"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.835157 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-8t729"]
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.847215 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.866206 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.885578 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.906851 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.926171 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.946073 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.966899 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Nov 25 14:55:16 crc kubenswrapper[4806]: I1125 14:55:16.987862 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.006893 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.027920 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.046990 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.066206 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.107224 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.108511 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e7c6a7c5-103e-4287-8e86-a7dbf2b48daf-trusted-ca\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp"
Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.108548 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/923b096b-4da2-4e3e-8c86-b3715c249ac0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-xklng\" (UID: \"923b096b-4da2-4e3e-8c86-b3715c249ac0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xklng"
Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.108584 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e7c6a7c5-103e-4287-8e86-a7dbf2b48daf-installation-pull-secrets\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp"
Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.108601 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/923b096b-4da2-4e3e-8c86-b3715c249ac0-config\") pod \"authentication-operator-69f744f599-xklng\" (UID: \"923b096b-4da2-4e3e-8c86-b3715c249ac0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xklng"
Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.108627 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp"
Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.108650 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcmmp\" (UniqueName: \"kubernetes.io/projected/923b096b-4da2-4e3e-8c86-b3715c249ac0-kube-api-access-qcmmp\") pod \"authentication-operator-69f744f599-xklng\" (UID: \"923b096b-4da2-4e3e-8c86-b3715c249ac0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xklng"
Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.108668 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v5pb\" (UniqueName: \"kubernetes.io/projected/f9b1a29e-c5b3-45fd-9082-b46293956184-kube-api-access-7v5pb\") pod \"downloads-7954f5f757-xx6dj\" (UID: \"f9b1a29e-c5b3-45fd-9082-b46293956184\") " pod="openshift-console/downloads-7954f5f757-xx6dj"
Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.108687 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e7c6a7c5-103e-4287-8e86-a7dbf2b48daf-registry-tls\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp"
Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.108701 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/923b096b-4da2-4e3e-8c86-b3715c249ac0-serving-cert\") pod \"authentication-operator-69f744f599-xklng\" (UID: \"923b096b-4da2-4e3e-8c86-b3715c249ac0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xklng"
Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.108715 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a81fbfae-81cd-4b3a-a2ef-771ca4884793-serving-cert\") pod \"console-operator-58897d9998-p957m\" (UID: \"a81fbfae-81cd-4b3a-a2ef-771ca4884793\") " pod="openshift-console-operator/console-operator-58897d9998-p957m"
Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.108729 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmcpp\" (UniqueName: \"kubernetes.io/projected/a81fbfae-81cd-4b3a-a2ef-771ca4884793-kube-api-access-fmcpp\") pod \"console-operator-58897d9998-p957m\" (UID: \"a81fbfae-81cd-4b3a-a2ef-771ca4884793\") " pod="openshift-console-operator/console-operator-58897d9998-p957m"
Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.108748 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a81fbfae-81cd-4b3a-a2ef-771ca4884793-trusted-ca\") pod \"console-operator-58897d9998-p957m\" (UID: \"a81fbfae-81cd-4b3a-a2ef-771ca4884793\") " pod="openshift-console-operator/console-operator-58897d9998-p957m"
Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.108764 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e7c6a7c5-103e-4287-8e86-a7dbf2b48daf-bound-sa-token\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp"
Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.108779 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/923b096b-4da2-4e3e-8c86-b3715c249ac0-service-ca-bundle\") pod \"authentication-operator-69f744f599-xklng\" (UID: \"923b096b-4da2-4e3e-8c86-b3715c249ac0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xklng"
Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.108804 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e7c6a7c5-103e-4287-8e86-a7dbf2b48daf-registry-certificates\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp"
Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.108818 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a81fbfae-81cd-4b3a-a2ef-771ca4884793-config\") pod \"console-operator-58897d9998-p957m\" (UID: \"a81fbfae-81cd-4b3a-a2ef-771ca4884793\") " pod="openshift-console-operator/console-operator-58897d9998-p957m"
Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.108834 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e7c6a7c5-103e-4287-8e86-a7dbf2b48daf-ca-trust-extracted\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp"
Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.108883 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l6n4\" (UniqueName: \"kubernetes.io/projected/e7c6a7c5-103e-4287-8e86-a7dbf2b48daf-kube-api-access-8l6n4\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp"
Nov 25 14:55:17 crc kubenswrapper[4806]: E1125 14:55:17.109138 4806 nestedpendingoperations.go:348] Operation for
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:17.60912504 +0000 UTC m=+150.261267451 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.126244 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.147504 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.166781 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.186486 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.206867 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.209344 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:17 crc kubenswrapper[4806]: E1125 14:55:17.209484 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:17.709464272 +0000 UTC m=+150.361606683 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.209912 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcmmp\" (UniqueName: \"kubernetes.io/projected/923b096b-4da2-4e3e-8c86-b3715c249ac0-kube-api-access-qcmmp\") pod \"authentication-operator-69f744f599-xklng\" (UID: \"923b096b-4da2-4e3e-8c86-b3715c249ac0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xklng" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.210052 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v5pb\" (UniqueName: \"kubernetes.io/projected/f9b1a29e-c5b3-45fd-9082-b46293956184-kube-api-access-7v5pb\") pod \"downloads-7954f5f757-xx6dj\" (UID: \"f9b1a29e-c5b3-45fd-9082-b46293956184\") " pod="openshift-console/downloads-7954f5f757-xx6dj" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.210172 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a81fbfae-81cd-4b3a-a2ef-771ca4884793-serving-cert\") pod \"console-operator-58897d9998-p957m\" (UID: \"a81fbfae-81cd-4b3a-a2ef-771ca4884793\") " pod="openshift-console-operator/console-operator-58897d9998-p957m" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.210288 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eeac792f-d07c-446b-8dee-00f726ea273c-config-volume\") pod \"collect-profiles-29401365-h6lh4\" (UID: \"eeac792f-d07c-446b-8dee-00f726ea273c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-h6lh4" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.210909 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a81fbfae-81cd-4b3a-a2ef-771ca4884793-trusted-ca\") pod \"console-operator-58897d9998-p957m\" (UID: \"a81fbfae-81cd-4b3a-a2ef-771ca4884793\") " pod="openshift-console-operator/console-operator-58897d9998-p957m" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.212219 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a81fbfae-81cd-4b3a-a2ef-771ca4884793-trusted-ca\") pod \"console-operator-58897d9998-p957m\" (UID: \"a81fbfae-81cd-4b3a-a2ef-771ca4884793\") " pod="openshift-console-operator/console-operator-58897d9998-p957m" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.212400 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/72d314ec-8059-4f5b-b4b7-91372748623e-signing-key\") pod \"service-ca-9c57cc56f-lgjgk\" (UID: \"72d314ec-8059-4f5b-b4b7-91372748623e\") " pod="openshift-service-ca/service-ca-9c57cc56f-lgjgk" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.213119 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" 
(UniqueName: \"kubernetes.io/host-path/9a86e8c4-3e5a-4fd1-bad8-d314f474c4ee-socket-dir\") pod \"csi-hostpathplugin-x92cw\" (UID: \"9a86e8c4-3e5a-4fd1-bad8-d314f474c4ee\") " pod="hostpath-provisioner/csi-hostpathplugin-x92cw" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.213466 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e7c6a7c5-103e-4287-8e86-a7dbf2b48daf-registry-certificates\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.213762 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt864\" (UniqueName: \"kubernetes.io/projected/7f5cd5de-2e48-4c15-9c5e-f20368bc172b-kube-api-access-jt864\") pod \"control-plane-machine-set-operator-78cbb6b69f-6hqx6\" (UID: \"7f5cd5de-2e48-4c15-9c5e-f20368bc172b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6hqx6" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.213878 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a81fbfae-81cd-4b3a-a2ef-771ca4884793-config\") pod \"console-operator-58897d9998-p957m\" (UID: \"a81fbfae-81cd-4b3a-a2ef-771ca4884793\") " pod="openshift-console-operator/console-operator-58897d9998-p957m" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.213062 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a81fbfae-81cd-4b3a-a2ef-771ca4884793-serving-cert\") pod \"console-operator-58897d9998-p957m\" (UID: \"a81fbfae-81cd-4b3a-a2ef-771ca4884793\") " pod="openshift-console-operator/console-operator-58897d9998-p957m" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.214531 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a81fbfae-81cd-4b3a-a2ef-771ca4884793-config\") pod \"console-operator-58897d9998-p957m\" (UID: \"a81fbfae-81cd-4b3a-a2ef-771ca4884793\") " pod="openshift-console-operator/console-operator-58897d9998-p957m" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.214772 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e7c6a7c5-103e-4287-8e86-a7dbf2b48daf-registry-certificates\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.214581 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-277xv\" (UniqueName: \"kubernetes.io/projected/eeac792f-d07c-446b-8dee-00f726ea273c-kube-api-access-277xv\") pod \"collect-profiles-29401365-h6lh4\" (UID: \"eeac792f-d07c-446b-8dee-00f726ea273c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-h6lh4" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.215033 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d58b6685-ca1a-4f73-a821-f5c4c37264ec-cert\") pod \"ingress-canary-8t729\" (UID: \"d58b6685-ca1a-4f73-a821-f5c4c37264ec\") " 
pod="openshift-ingress-canary/ingress-canary-8t729" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.215469 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sl8r\" (UniqueName: \"kubernetes.io/projected/3f3d083b-5922-4da3-ad9e-e5f323836cba-kube-api-access-9sl8r\") pod \"ingress-operator-5b745b69d9-ptx4l\" (UID: \"3f3d083b-5922-4da3-ad9e-e5f323836cba\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ptx4l" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.215609 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/16a8fa04-87f4-46fa-a310-aa62275684c0-proxy-tls\") pod \"machine-config-controller-84d6567774-grv4v\" (UID: \"16a8fa04-87f4-46fa-a310-aa62275684c0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-grv4v" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.215753 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40730d61-24e2-4810-89f7-0a34fe204440-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-mvkmg\" (UID: \"40730d61-24e2-4810-89f7-0a34fe204440\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mvkmg" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.215884 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/40730d61-24e2-4810-89f7-0a34fe204440-encryption-config\") pod \"apiserver-7bbb656c7d-mvkmg\" (UID: \"40730d61-24e2-4810-89f7-0a34fe204440\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mvkmg" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.216014 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97b5ca54-68e2-4db9-84fa-a77e3f61735e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5ppwt\" (UID: \"97b5ca54-68e2-4db9-84fa-a77e3f61735e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ppwt" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.216124 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx4mf\" (UniqueName: \"kubernetes.io/projected/7f909c09-273f-48a4-8ef1-eb80eb473c5e-kube-api-access-hx4mf\") pod \"machine-config-server-jw49k\" (UID: \"7f909c09-273f-48a4-8ef1-eb80eb473c5e\") " pod="openshift-machine-config-operator/machine-config-server-jw49k" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.216393 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e7c6a7c5-103e-4287-8e86-a7dbf2b48daf-ca-trust-extracted\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.216778 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/76a76f7a-7f38-4aac-8a57-a60f332306cb-etcd-service-ca\") pod \"etcd-operator-b45778765-zf4ph\" (UID: \"76a76f7a-7f38-4aac-8a57-a60f332306cb\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-zf4ph" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.216902 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk8qz\" (UniqueName: \"kubernetes.io/projected/76a76f7a-7f38-4aac-8a57-a60f332306cb-kube-api-access-kk8qz\") pod \"etcd-operator-b45778765-zf4ph\" (UID: \"76a76f7a-7f38-4aac-8a57-a60f332306cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zf4ph" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.216736 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e7c6a7c5-103e-4287-8e86-a7dbf2b48daf-ca-trust-extracted\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.217185 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7f909c09-273f-48a4-8ef1-eb80eb473c5e-certs\") pod \"machine-config-server-jw49k\" (UID: \"7f909c09-273f-48a4-8ef1-eb80eb473c5e\") " pod="openshift-machine-config-operator/machine-config-server-jw49k" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.217322 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq74x\" (UniqueName: \"kubernetes.io/projected/4e9e656c-2e2c-4ed4-b720-8fdb639a029d-kube-api-access-qq74x\") pod \"router-default-5444994796-kfst9\" (UID: \"4e9e656c-2e2c-4ed4-b720-8fdb639a029d\") " pod="openshift-ingress/router-default-5444994796-kfst9" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.217454 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3f3d083b-5922-4da3-ad9e-e5f323836cba-metrics-tls\") pod \"ingress-operator-5b745b69d9-ptx4l\" (UID: \"3f3d083b-5922-4da3-ad9e-e5f323836cba\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ptx4l" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.217797 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83db970d-f5a9-4a8f-9c65-0cd2500331b1-config\") pod \"controller-manager-879f6c89f-k8p4x\" (UID: \"83db970d-f5a9-4a8f-9c65-0cd2500331b1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k8p4x" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.217898 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0aa34022-429c-4bba-91a8-229a7b634a50-machine-approver-tls\") pod \"machine-approver-56656f9798-gjw2g\" (UID: \"0aa34022-429c-4bba-91a8-229a7b634a50\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gjw2g" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.218000 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l6n4\" (UniqueName: \"kubernetes.io/projected/e7c6a7c5-103e-4287-8e86-a7dbf2b48daf-kube-api-access-8l6n4\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.218107 4806 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f394b01a-b495-4acf-bca9-0b23347a3358-images\") pod \"machine-api-operator-5694c8668f-9tjs2\" (UID: \"f394b01a-b495-4acf-bca9-0b23347a3358\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9tjs2" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.218212 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4sh9\" (UniqueName: \"kubernetes.io/projected/3ad5dac9-54d3-4435-8f38-77e91d1965e0-kube-api-access-n4sh9\") pod \"cluster-samples-operator-665b6dd947-gfbwx\" (UID: \"3ad5dac9-54d3-4435-8f38-77e91d1965e0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfbwx" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.218341 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kczsc\" (UniqueName: \"kubernetes.io/projected/b8400987-b2f7-44fe-b1b3-8689c2465cd3-kube-api-access-kczsc\") pod \"console-f9d7485db-6j244\" (UID: \"b8400987-b2f7-44fe-b1b3-8689c2465cd3\") " pod="openshift-console/console-f9d7485db-6j244" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.218431 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3f3d083b-5922-4da3-ad9e-e5f323836cba-bound-sa-token\") pod \"ingress-operator-5b745b69d9-ptx4l\" (UID: \"3f3d083b-5922-4da3-ad9e-e5f323836cba\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ptx4l" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.218511 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97b5ca54-68e2-4db9-84fa-a77e3f61735e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5ppwt\" (UID: \"97b5ca54-68e2-4db9-84fa-a77e3f61735e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ppwt" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.218665 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgjhv\" (UniqueName: \"kubernetes.io/projected/0aa34022-429c-4bba-91a8-229a7b634a50-kube-api-access-bgjhv\") pod \"machine-approver-56656f9798-gjw2g\" (UID: \"0aa34022-429c-4bba-91a8-229a7b634a50\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gjw2g" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.218756 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1f1e0355-7806-4025-88f6-992756ffbe86-config-volume\") pod \"dns-default-cszqz\" (UID: \"1f1e0355-7806-4025-88f6-992756ffbe86\") " pod="openshift-dns/dns-default-cszqz" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.218786 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5762\" (UniqueName: \"kubernetes.io/projected/17ede0a7-8694-488d-822c-47e76211a19f-kube-api-access-b5762\") pod \"olm-operator-6b444d44fb-tx5m5\" (UID: \"17ede0a7-8694-488d-822c-47e76211a19f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tx5m5" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.218828 4806 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76a76f7a-7f38-4aac-8a57-a60f332306cb-serving-cert\") pod \"etcd-operator-b45778765-zf4ph\" (UID: \"76a76f7a-7f38-4aac-8a57-a60f332306cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zf4ph" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.218853 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwpf7\" (UniqueName: \"kubernetes.io/projected/83db970d-f5a9-4a8f-9c65-0cd2500331b1-kube-api-access-hwpf7\") pod \"controller-manager-879f6c89f-k8p4x\" (UID: \"83db970d-f5a9-4a8f-9c65-0cd2500331b1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k8p4x" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.218871 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2e32043e-a11b-473b-b42a-ecc01450a942-images\") pod \"machine-config-operator-74547568cd-j4l9j\" (UID: \"2e32043e-a11b-473b-b42a-ecc01450a942\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-j4l9j" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.218915 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrj8h\" (UniqueName: \"kubernetes.io/projected/97b5ca54-68e2-4db9-84fa-a77e3f61735e-kube-api-access-xrj8h\") pod \"kube-storage-version-migrator-operator-b67b599dd-5ppwt\" (UID: \"97b5ca54-68e2-4db9-84fa-a77e3f61735e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ppwt" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.219087 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83db970d-f5a9-4a8f-9c65-0cd2500331b1-client-ca\") pod \"controller-manager-879f6c89f-k8p4x\" (UID: \"83db970d-f5a9-4a8f-9c65-0cd2500331b1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k8p4x" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.219168 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68hf2\" (UniqueName: \"kubernetes.io/projected/72d314ec-8059-4f5b-b4b7-91372748623e-kube-api-access-68hf2\") pod \"service-ca-9c57cc56f-lgjgk\" (UID: \"72d314ec-8059-4f5b-b4b7-91372748623e\") " pod="openshift-service-ca/service-ca-9c57cc56f-lgjgk" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.219255 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/76a76f7a-7f38-4aac-8a57-a60f332306cb-etcd-client\") pod \"etcd-operator-b45778765-zf4ph\" (UID: \"76a76f7a-7f38-4aac-8a57-a60f332306cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zf4ph" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.219752 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0aa34022-429c-4bba-91a8-229a7b634a50-auth-proxy-config\") pod \"machine-approver-56656f9798-gjw2g\" (UID: \"0aa34022-429c-4bba-91a8-229a7b634a50\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gjw2g" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.219848 4806 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2fe464df-b275-4f86-8750-6052a803b024-webhook-cert\") pod \"packageserver-d55dfcdfc-28dbr\" (UID: \"2fe464df-b275-4f86-8750-6052a803b024\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-28dbr" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.220209 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/024b2329-b8db-400c-bbaa-f77ba9a3bdae-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-vj65b\" (UID: \"024b2329-b8db-400c-bbaa-f77ba9a3bdae\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vj65b" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.220289 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fcbcb3e-8a88-465d-9b1e-8e547844bd93-config\") pod \"kube-controller-manager-operator-78b949d7b-n727s\" (UID: \"0fcbcb3e-8a88-465d-9b1e-8e547844bd93\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n727s" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.220337 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c14a961b-4eb5-4a10-abe7-bdd5ddff30bc-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gm728\" (UID: \"c14a961b-4eb5-4a10-abe7-bdd5ddff30bc\") " pod="openshift-marketplace/marketplace-operator-79b997595-gm728" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.220384 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76a76f7a-7f38-4aac-8a57-a60f332306cb-config\") pod \"etcd-operator-b45778765-zf4ph\" (UID: \"76a76f7a-7f38-4aac-8a57-a60f332306cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zf4ph" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.220408 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4e9e656c-2e2c-4ed4-b720-8fdb639a029d-default-certificate\") pod \"router-default-5444994796-kfst9\" (UID: \"4e9e656c-2e2c-4ed4-b720-8fdb639a029d\") " pod="openshift-ingress/router-default-5444994796-kfst9" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.220432 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e9e656c-2e2c-4ed4-b720-8fdb639a029d-metrics-certs\") pod \"router-default-5444994796-kfst9\" (UID: \"4e9e656c-2e2c-4ed4-b720-8fdb639a029d\") " pod="openshift-ingress/router-default-5444994796-kfst9" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.220494 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8400987-b2f7-44fe-b1b3-8689c2465cd3-console-serving-cert\") pod \"console-f9d7485db-6j244\" (UID: \"b8400987-b2f7-44fe-b1b3-8689c2465cd3\") " pod="openshift-console/console-f9d7485db-6j244" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.220559 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2e32043e-a11b-473b-b42a-ecc01450a942-auth-proxy-config\") pod \"machine-config-operator-74547568cd-j4l9j\" (UID: \"2e32043e-a11b-473b-b42a-ecc01450a942\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-j4l9j" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.220608 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7f5cd5de-2e48-4c15-9c5e-f20368bc172b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-6hqx6\" (UID: \"7f5cd5de-2e48-4c15-9c5e-f20368bc172b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6hqx6" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.220656 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/72d314ec-8059-4f5b-b4b7-91372748623e-signing-cabundle\") pod \"service-ca-9c57cc56f-lgjgk\" (UID: \"72d314ec-8059-4f5b-b4b7-91372748623e\") " pod="openshift-service-ca/service-ca-9c57cc56f-lgjgk" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.220721 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f394b01a-b495-4acf-bca9-0b23347a3358-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-9tjs2\" (UID: \"f394b01a-b495-4acf-bca9-0b23347a3358\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9tjs2" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.220746 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41fbdcab-7837-4273-8aaa-70b4e1667988-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fhvbk\" (UID: \"41fbdcab-7837-4273-8aaa-70b4e1667988\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fhvbk" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.220769 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-724h5\" (UniqueName: \"kubernetes.io/projected/ce6c946f-c804-4b57-bc37-8169c677e231-kube-api-access-724h5\") pod \"package-server-manager-789f6589d5-4s68g\" (UID: \"ce6c946f-c804-4b57-bc37-8169c677e231\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4s68g" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.220792 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fz94\" (UniqueName: \"kubernetes.io/projected/7142eedd-c71b-4c92-97a8-def92a981529-kube-api-access-5fz94\") pod \"service-ca-operator-777779d784-h4m8m\" (UID: \"7142eedd-c71b-4c92-97a8-def92a981529\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-h4m8m" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.220813 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8400987-b2f7-44fe-b1b3-8689c2465cd3-trusted-ca-bundle\") pod \"console-f9d7485db-6j244\" (UID: \"b8400987-b2f7-44fe-b1b3-8689c2465cd3\") " pod="openshift-console/console-f9d7485db-6j244" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 
14:55:17.220836 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83db970d-f5a9-4a8f-9c65-0cd2500331b1-serving-cert\") pod \"controller-manager-879f6c89f-k8p4x\" (UID: \"83db970d-f5a9-4a8f-9c65-0cd2500331b1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k8p4x" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.220858 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/9a86e8c4-3e5a-4fd1-bad8-d314f474c4ee-mountpoint-dir\") pod \"csi-hostpathplugin-x92cw\" (UID: \"9a86e8c4-3e5a-4fd1-bad8-d314f474c4ee\") " pod="hostpath-provisioner/csi-hostpathplugin-x92cw" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.220878 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2fe464df-b275-4f86-8750-6052a803b024-tmpfs\") pod \"packageserver-d55dfcdfc-28dbr\" (UID: \"2fe464df-b275-4f86-8750-6052a803b024\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-28dbr" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.220899 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6zsz\" (UniqueName: \"kubernetes.io/projected/2e32043e-a11b-473b-b42a-ecc01450a942-kube-api-access-q6zsz\") pod \"machine-config-operator-74547568cd-j4l9j\" (UID: \"2e32043e-a11b-473b-b42a-ecc01450a942\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-j4l9j" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.220927 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e7c6a7c5-103e-4287-8e86-a7dbf2b48daf-installation-pull-secrets\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.220948 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/923b096b-4da2-4e3e-8c86-b3715c249ac0-config\") pod \"authentication-operator-69f744f599-xklng\" (UID: \"923b096b-4da2-4e3e-8c86-b3715c249ac0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xklng" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.220970 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b43d27a6-a9d7-484a-a8d4-f12e06bce31f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-7jcqc\" (UID: \"b43d27a6-a9d7-484a-a8d4-f12e06bce31f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7jcqc" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.220993 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/9a86e8c4-3e5a-4fd1-bad8-d314f474c4ee-csi-data-dir\") pod \"csi-hostpathplugin-x92cw\" (UID: \"9a86e8c4-3e5a-4fd1-bad8-d314f474c4ee\") " pod="hostpath-provisioner/csi-hostpathplugin-x92cw" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.221017 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/024b2329-b8db-400c-bbaa-f77ba9a3bdae-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-vj65b\" (UID: \"024b2329-b8db-400c-bbaa-f77ba9a3bdae\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vj65b" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.221042 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3e22d0ac-ad84-41cc-9e33-de5c90e61f2c-srv-cert\") pod \"catalog-operator-68c6474976-9shgk\" (UID: \"3e22d0ac-ad84-41cc-9e33-de5c90e61f2c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9shgk" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.221264 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2fe464df-b275-4f86-8750-6052a803b024-apiservice-cert\") pod \"packageserver-d55dfcdfc-28dbr\" (UID: \"2fe464df-b275-4f86-8750-6052a803b024\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-28dbr" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.221391 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/40730d61-24e2-4810-89f7-0a34fe204440-audit-policies\") pod \"apiserver-7bbb656c7d-mvkmg\" (UID: \"40730d61-24e2-4810-89f7-0a34fe204440\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mvkmg" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.221479 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/40730d61-24e2-4810-89f7-0a34fe204440-etcd-client\") pod \"apiserver-7bbb656c7d-mvkmg\" (UID: \"40730d61-24e2-4810-89f7-0a34fe204440\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mvkmg" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.221533 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4e9e656c-2e2c-4ed4-b720-8fdb639a029d-stats-auth\") pod \"router-default-5444994796-kfst9\" (UID: \"4e9e656c-2e2c-4ed4-b720-8fdb639a029d\") " pod="openshift-ingress/router-default-5444994796-kfst9" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.221622 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/923b096b-4da2-4e3e-8c86-b3715c249ac0-config\") pod \"authentication-operator-69f744f599-xklng\" (UID: \"923b096b-4da2-4e3e-8c86-b3715c249ac0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xklng" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.221765 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b8400987-b2f7-44fe-b1b3-8689c2465cd3-oauth-serving-cert\") pod \"console-f9d7485db-6j244\" (UID: \"b8400987-b2f7-44fe-b1b3-8689c2465cd3\") " pod="openshift-console/console-f9d7485db-6j244" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.221856 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3f9429a-5f3e-45bf-b7cc-dea3bee3e957-config\") pod \"route-controller-manager-6576b87f9c-p5tx2\" (UID: 
\"d3f9429a-5f3e-45bf-b7cc-dea3bee3e957\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5tx2" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.221937 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3f9429a-5f3e-45bf-b7cc-dea3bee3e957-client-ca\") pod \"route-controller-manager-6576b87f9c-p5tx2\" (UID: \"d3f9429a-5f3e-45bf-b7cc-dea3bee3e957\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5tx2" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.221973 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr7w6\" (UniqueName: \"kubernetes.io/projected/d3f9429a-5f3e-45bf-b7cc-dea3bee3e957-kube-api-access-cr7w6\") pod \"route-controller-manager-6576b87f9c-p5tx2\" (UID: \"d3f9429a-5f3e-45bf-b7cc-dea3bee3e957\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5tx2" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.222004 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2e32043e-a11b-473b-b42a-ecc01450a942-proxy-tls\") pod \"machine-config-operator-74547568cd-j4l9j\" (UID: \"2e32043e-a11b-473b-b42a-ecc01450a942\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-j4l9j" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.222028 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f394b01a-b495-4acf-bca9-0b23347a3358-config\") pod \"machine-api-operator-5694c8668f-9tjs2\" (UID: \"f394b01a-b495-4acf-bca9-0b23347a3358\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9tjs2" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.222051 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7f909c09-273f-48a4-8ef1-eb80eb473c5e-node-bootstrap-token\") pod \"machine-config-server-jw49k\" (UID: \"7f909c09-273f-48a4-8ef1-eb80eb473c5e\") " pod="openshift-machine-config-operator/machine-config-server-jw49k" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.222096 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e7c6a7c5-103e-4287-8e86-a7dbf2b48daf-registry-tls\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.222121 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/923b096b-4da2-4e3e-8c86-b3715c249ac0-serving-cert\") pod \"authentication-operator-69f744f599-xklng\" (UID: \"923b096b-4da2-4e3e-8c86-b3715c249ac0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xklng" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.222145 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld6b7\" (UniqueName: \"kubernetes.io/projected/1f1e0355-7806-4025-88f6-992756ffbe86-kube-api-access-ld6b7\") pod \"dns-default-cszqz\" (UID: \"1f1e0355-7806-4025-88f6-992756ffbe86\") " 
pod="openshift-dns/dns-default-cszqz" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.222170 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ce6c946f-c804-4b57-bc37-8169c677e231-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-4s68g\" (UID: \"ce6c946f-c804-4b57-bc37-8169c677e231\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4s68g" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.222196 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/40730d61-24e2-4810-89f7-0a34fe204440-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-mvkmg\" (UID: \"40730d61-24e2-4810-89f7-0a34fe204440\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mvkmg" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.222218 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b8400987-b2f7-44fe-b1b3-8689c2465cd3-console-oauth-config\") pod \"console-f9d7485db-6j244\" (UID: \"b8400987-b2f7-44fe-b1b3-8689c2465cd3\") " pod="openshift-console/console-f9d7485db-6j244" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.222237 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b8400987-b2f7-44fe-b1b3-8689c2465cd3-service-ca\") pod \"console-f9d7485db-6j244\" (UID: \"b8400987-b2f7-44fe-b1b3-8689c2465cd3\") " pod="openshift-console/console-f9d7485db-6j244" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.222277 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmcpp\" (UniqueName: \"kubernetes.io/projected/a81fbfae-81cd-4b3a-a2ef-771ca4884793-kube-api-access-fmcpp\") pod \"console-operator-58897d9998-p957m\" (UID: \"a81fbfae-81cd-4b3a-a2ef-771ca4884793\") " pod="openshift-console-operator/console-operator-58897d9998-p957m" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.222341 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e7c6a7c5-103e-4287-8e86-a7dbf2b48daf-bound-sa-token\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.222365 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/923b096b-4da2-4e3e-8c86-b3715c249ac0-service-ca-bundle\") pod \"authentication-operator-69f744f599-xklng\" (UID: \"923b096b-4da2-4e3e-8c86-b3715c249ac0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xklng" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.222389 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/40730d61-24e2-4810-89f7-0a34fe204440-audit-dir\") pod \"apiserver-7bbb656c7d-mvkmg\" (UID: \"40730d61-24e2-4810-89f7-0a34fe204440\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mvkmg" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.222411 4806 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/76a76f7a-7f38-4aac-8a57-a60f332306cb-etcd-ca\") pod \"etcd-operator-b45778765-zf4ph\" (UID: \"76a76f7a-7f38-4aac-8a57-a60f332306cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zf4ph" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.222453 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/024b2329-b8db-400c-bbaa-f77ba9a3bdae-config\") pod \"kube-apiserver-operator-766d6c64bb-vj65b\" (UID: \"024b2329-b8db-400c-bbaa-f77ba9a3bdae\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vj65b" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.222477 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41fbdcab-7837-4273-8aaa-70b4e1667988-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fhvbk\" (UID: \"41fbdcab-7837-4273-8aaa-70b4e1667988\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fhvbk" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.222503 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnlbk\" (UniqueName: \"kubernetes.io/projected/16a8fa04-87f4-46fa-a310-aa62275684c0-kube-api-access-xnlbk\") pod \"machine-config-controller-84d6567774-grv4v\" (UID: \"16a8fa04-87f4-46fa-a310-aa62275684c0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-grv4v" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.222526 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1f1e0355-7806-4025-88f6-992756ffbe86-metrics-tls\") pod \"dns-default-cszqz\" (UID: \"1f1e0355-7806-4025-88f6-992756ffbe86\") " pod="openshift-dns/dns-default-cszqz" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.222549 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fcbcb3e-8a88-465d-9b1e-8e547844bd93-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-n727s\" (UID: \"0fcbcb3e-8a88-465d-9b1e-8e547844bd93\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n727s" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.222620 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/83db970d-f5a9-4a8f-9c65-0cd2500331b1-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-k8p4x\" (UID: \"83db970d-f5a9-4a8f-9c65-0cd2500331b1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k8p4x" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.222646 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr2pl\" (UniqueName: \"kubernetes.io/projected/c14a961b-4eb5-4a10-abe7-bdd5ddff30bc-kube-api-access-tr2pl\") pod \"marketplace-operator-79b997595-gm728\" (UID: \"c14a961b-4eb5-4a10-abe7-bdd5ddff30bc\") " pod="openshift-marketplace/marketplace-operator-79b997595-gm728" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.222684 4806 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/3ad5dac9-54d3-4435-8f38-77e91d1965e0-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-gfbwx\" (UID: \"3ad5dac9-54d3-4435-8f38-77e91d1965e0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfbwx" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.222709 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/41fbdcab-7837-4273-8aaa-70b4e1667988-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fhvbk\" (UID: \"41fbdcab-7837-4273-8aaa-70b4e1667988\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fhvbk" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.222749 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4wjr\" (UniqueName: \"kubernetes.io/projected/f394b01a-b495-4acf-bca9-0b23347a3358-kube-api-access-k4wjr\") pod \"machine-api-operator-5694c8668f-9tjs2\" (UID: \"f394b01a-b495-4acf-bca9-0b23347a3358\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9tjs2" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.222772 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0fcbcb3e-8a88-465d-9b1e-8e547844bd93-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-n727s\" (UID: \"0fcbcb3e-8a88-465d-9b1e-8e547844bd93\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n727s" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.222795 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4qnq\" (UniqueName: \"kubernetes.io/projected/40730d61-24e2-4810-89f7-0a34fe204440-kube-api-access-b4qnq\") pod \"apiserver-7bbb656c7d-mvkmg\" (UID: \"40730d61-24e2-4810-89f7-0a34fe204440\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mvkmg" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.223087 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpjrb\" (UniqueName: \"kubernetes.io/projected/9a86e8c4-3e5a-4fd1-bad8-d314f474c4ee-kube-api-access-fpjrb\") pod \"csi-hostpathplugin-x92cw\" (UID: \"9a86e8c4-3e5a-4fd1-bad8-d314f474c4ee\") " pod="hostpath-provisioner/csi-hostpathplugin-x92cw" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.223131 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3f3d083b-5922-4da3-ad9e-e5f323836cba-trusted-ca\") pod \"ingress-operator-5b745b69d9-ptx4l\" (UID: \"3f3d083b-5922-4da3-ad9e-e5f323836cba\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ptx4l" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.223462 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-675rv\" (UniqueName: \"kubernetes.io/projected/1531828a-4e80-4d77-92c0-99e9ae888fae-kube-api-access-675rv\") pod \"migrator-59844c95c7-2nrmh\" (UID: \"1531828a-4e80-4d77-92c0-99e9ae888fae\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2nrmh" Nov 25 
14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.223500 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0aa34022-429c-4bba-91a8-229a7b634a50-config\") pod \"machine-approver-56656f9798-gjw2g\" (UID: \"0aa34022-429c-4bba-91a8-229a7b634a50\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gjw2g" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.223531 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e7c6a7c5-103e-4287-8e86-a7dbf2b48daf-trusted-ca\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.223765 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3e22d0ac-ad84-41cc-9e33-de5c90e61f2c-profile-collector-cert\") pod \"catalog-operator-68c6474976-9shgk\" (UID: \"3e22d0ac-ad84-41cc-9e33-de5c90e61f2c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9shgk" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.223828 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e9e656c-2e2c-4ed4-b720-8fdb639a029d-service-ca-bundle\") pod \"router-default-5444994796-kfst9\" (UID: \"4e9e656c-2e2c-4ed4-b720-8fdb639a029d\") " pod="openshift-ingress/router-default-5444994796-kfst9" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.223851 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7142eedd-c71b-4c92-97a8-def92a981529-config\") pod \"service-ca-operator-777779d784-h4m8m\" (UID: \"7142eedd-c71b-4c92-97a8-def92a981529\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-h4m8m" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.223894 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/923b096b-4da2-4e3e-8c86-b3715c249ac0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-xklng\" (UID: \"923b096b-4da2-4e3e-8c86-b3715c249ac0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xklng" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.223921 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kg5m\" (UniqueName: \"kubernetes.io/projected/2fe464df-b275-4f86-8750-6052a803b024-kube-api-access-2kg5m\") pod \"packageserver-d55dfcdfc-28dbr\" (UID: \"2fe464df-b275-4f86-8750-6052a803b024\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-28dbr" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.224036 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9a86e8c4-3e5a-4fd1-bad8-d314f474c4ee-registration-dir\") pod \"csi-hostpathplugin-x92cw\" (UID: \"9a86e8c4-3e5a-4fd1-bad8-d314f474c4ee\") " pod="hostpath-provisioner/csi-hostpathplugin-x92cw" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.224064 4806 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79zpf\" (UniqueName: \"kubernetes.io/projected/b43d27a6-a9d7-484a-a8d4-f12e06bce31f-kube-api-access-79zpf\") pod \"multus-admission-controller-857f4d67dd-7jcqc\" (UID: \"b43d27a6-a9d7-484a-a8d4-f12e06bce31f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7jcqc" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.224088 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd7x8\" (UniqueName: \"kubernetes.io/projected/d58b6685-ca1a-4f73-a821-f5c4c37264ec-kube-api-access-hd7x8\") pod \"ingress-canary-8t729\" (UID: \"d58b6685-ca1a-4f73-a821-f5c4c37264ec\") " pod="openshift-ingress-canary/ingress-canary-8t729" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.224119 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/16a8fa04-87f4-46fa-a310-aa62275684c0-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-grv4v\" (UID: \"16a8fa04-87f4-46fa-a310-aa62275684c0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-grv4v" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.224157 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3f9429a-5f3e-45bf-b7cc-dea3bee3e957-serving-cert\") pod \"route-controller-manager-6576b87f9c-p5tx2\" (UID: \"d3f9429a-5f3e-45bf-b7cc-dea3bee3e957\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5tx2" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.224186 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/9a86e8c4-3e5a-4fd1-bad8-d314f474c4ee-plugins-dir\") pod \"csi-hostpathplugin-x92cw\" (UID: \"9a86e8c4-3e5a-4fd1-bad8-d314f474c4ee\") " pod="hostpath-provisioner/csi-hostpathplugin-x92cw" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.224209 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eeac792f-d07c-446b-8dee-00f726ea273c-secret-volume\") pod \"collect-profiles-29401365-h6lh4\" (UID: \"eeac792f-d07c-446b-8dee-00f726ea273c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-h6lh4" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.224230 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rfch\" (UniqueName: \"kubernetes.io/projected/3e22d0ac-ad84-41cc-9e33-de5c90e61f2c-kube-api-access-5rfch\") pod \"catalog-operator-68c6474976-9shgk\" (UID: \"3e22d0ac-ad84-41cc-9e33-de5c90e61f2c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9shgk" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.224253 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7142eedd-c71b-4c92-97a8-def92a981529-serving-cert\") pod \"service-ca-operator-777779d784-h4m8m\" (UID: \"7142eedd-c71b-4c92-97a8-def92a981529\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-h4m8m" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.224274 4806 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/17ede0a7-8694-488d-822c-47e76211a19f-srv-cert\") pod \"olm-operator-6b444d44fb-tx5m5\" (UID: \"17ede0a7-8694-488d-822c-47e76211a19f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tx5m5" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.224331 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c14a961b-4eb5-4a10-abe7-bdd5ddff30bc-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gm728\" (UID: \"c14a961b-4eb5-4a10-abe7-bdd5ddff30bc\") " pod="openshift-marketplace/marketplace-operator-79b997595-gm728" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.224367 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.224391 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b8400987-b2f7-44fe-b1b3-8689c2465cd3-console-config\") pod \"console-f9d7485db-6j244\" (UID: \"b8400987-b2f7-44fe-b1b3-8689c2465cd3\") " pod="openshift-console/console-f9d7485db-6j244" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.224425 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40730d61-24e2-4810-89f7-0a34fe204440-serving-cert\") pod \"apiserver-7bbb656c7d-mvkmg\" (UID: \"40730d61-24e2-4810-89f7-0a34fe204440\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mvkmg" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.224448 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/17ede0a7-8694-488d-822c-47e76211a19f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-tx5m5\" (UID: \"17ede0a7-8694-488d-822c-47e76211a19f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tx5m5" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.224965 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/923b096b-4da2-4e3e-8c86-b3715c249ac0-service-ca-bundle\") pod \"authentication-operator-69f744f599-xklng\" (UID: \"923b096b-4da2-4e3e-8c86-b3715c249ac0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xklng" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.226217 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/923b096b-4da2-4e3e-8c86-b3715c249ac0-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-xklng\" (UID: \"923b096b-4da2-4e3e-8c86-b3715c249ac0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xklng" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.226958 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" 
(UniqueName: \"kubernetes.io/secret/e7c6a7c5-103e-4287-8e86-a7dbf2b48daf-installation-pull-secrets\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:17 crc kubenswrapper[4806]: E1125 14:55:17.227194 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:17.727178642 +0000 UTC m=+150.379321143 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.227817 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e7c6a7c5-103e-4287-8e86-a7dbf2b48daf-registry-tls\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.228922 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.230105 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/923b096b-4da2-4e3e-8c86-b3715c249ac0-serving-cert\") pod \"authentication-operator-69f744f599-xklng\" (UID: \"923b096b-4da2-4e3e-8c86-b3715c249ac0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xklng" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.245960 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.249791 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e7c6a7c5-103e-4287-8e86-a7dbf2b48daf-trusted-ca\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.267535 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.307015 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.325660 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:17 crc kubenswrapper[4806]: E1125 14:55:17.325779 4806 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:17.825753593 +0000 UTC m=+150.477896014 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.325891 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/9a86e8c4-3e5a-4fd1-bad8-d314f474c4ee-plugins-dir\") pod \"csi-hostpathplugin-x92cw\" (UID: \"9a86e8c4-3e5a-4fd1-bad8-d314f474c4ee\") " pod="hostpath-provisioner/csi-hostpathplugin-x92cw" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.325927 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eeac792f-d07c-446b-8dee-00f726ea273c-secret-volume\") pod \"collect-profiles-29401365-h6lh4\" (UID: \"eeac792f-d07c-446b-8dee-00f726ea273c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-h6lh4" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.325958 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rfch\" (UniqueName: \"kubernetes.io/projected/3e22d0ac-ad84-41cc-9e33-de5c90e61f2c-kube-api-access-5rfch\") pod \"catalog-operator-68c6474976-9shgk\" (UID: \"3e22d0ac-ad84-41cc-9e33-de5c90e61f2c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9shgk" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.325978 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7142eedd-c71b-4c92-97a8-def92a981529-serving-cert\") pod \"service-ca-operator-777779d784-h4m8m\" (UID: \"7142eedd-c71b-4c92-97a8-def92a981529\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-h4m8m" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.326001 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/17ede0a7-8694-488d-822c-47e76211a19f-srv-cert\") pod \"olm-operator-6b444d44fb-tx5m5\" (UID: \"17ede0a7-8694-488d-822c-47e76211a19f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tx5m5" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.326027 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c14a961b-4eb5-4a10-abe7-bdd5ddff30bc-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gm728\" (UID: \"c14a961b-4eb5-4a10-abe7-bdd5ddff30bc\") " pod="openshift-marketplace/marketplace-operator-79b997595-gm728" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.326050 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.326068 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b8400987-b2f7-44fe-b1b3-8689c2465cd3-console-config\") pod \"console-f9d7485db-6j244\" (UID: \"b8400987-b2f7-44fe-b1b3-8689c2465cd3\") " pod="openshift-console/console-f9d7485db-6j244" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.326088 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40730d61-24e2-4810-89f7-0a34fe204440-serving-cert\") pod \"apiserver-7bbb656c7d-mvkmg\" (UID: \"40730d61-24e2-4810-89f7-0a34fe204440\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mvkmg" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.326106 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/17ede0a7-8694-488d-822c-47e76211a19f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-tx5m5\" (UID: \"17ede0a7-8694-488d-822c-47e76211a19f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tx5m5" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.326150 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eeac792f-d07c-446b-8dee-00f726ea273c-config-volume\") pod \"collect-profiles-29401365-h6lh4\" (UID: \"eeac792f-d07c-446b-8dee-00f726ea273c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-h6lh4" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.326174 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/72d314ec-8059-4f5b-b4b7-91372748623e-signing-key\") pod \"service-ca-9c57cc56f-lgjgk\" (UID: \"72d314ec-8059-4f5b-b4b7-91372748623e\") " pod="openshift-service-ca/service-ca-9c57cc56f-lgjgk" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.326196 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9a86e8c4-3e5a-4fd1-bad8-d314f474c4ee-socket-dir\") pod \"csi-hostpathplugin-x92cw\" (UID: \"9a86e8c4-3e5a-4fd1-bad8-d314f474c4ee\") " pod="hostpath-provisioner/csi-hostpathplugin-x92cw" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.326220 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt864\" (UniqueName: \"kubernetes.io/projected/7f5cd5de-2e48-4c15-9c5e-f20368bc172b-kube-api-access-jt864\") pod \"control-plane-machine-set-operator-78cbb6b69f-6hqx6\" (UID: \"7f5cd5de-2e48-4c15-9c5e-f20368bc172b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6hqx6" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.326240 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-277xv\" (UniqueName: \"kubernetes.io/projected/eeac792f-d07c-446b-8dee-00f726ea273c-kube-api-access-277xv\") pod \"collect-profiles-29401365-h6lh4\" (UID: \"eeac792f-d07c-446b-8dee-00f726ea273c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-h6lh4" Nov 
25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.326267 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/16a8fa04-87f4-46fa-a310-aa62275684c0-proxy-tls\") pod \"machine-config-controller-84d6567774-grv4v\" (UID: \"16a8fa04-87f4-46fa-a310-aa62275684c0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-grv4v" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.326281 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9a86e8c4-3e5a-4fd1-bad8-d314f474c4ee-socket-dir\") pod \"csi-hostpathplugin-x92cw\" (UID: \"9a86e8c4-3e5a-4fd1-bad8-d314f474c4ee\") " pod="hostpath-provisioner/csi-hostpathplugin-x92cw" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.326294 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40730d61-24e2-4810-89f7-0a34fe204440-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-mvkmg\" (UID: \"40730d61-24e2-4810-89f7-0a34fe204440\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mvkmg" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.326175 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/9a86e8c4-3e5a-4fd1-bad8-d314f474c4ee-plugins-dir\") pod \"csi-hostpathplugin-x92cw\" (UID: \"9a86e8c4-3e5a-4fd1-bad8-d314f474c4ee\") " pod="hostpath-provisioner/csi-hostpathplugin-x92cw" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.326484 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/40730d61-24e2-4810-89f7-0a34fe204440-encryption-config\") pod \"apiserver-7bbb656c7d-mvkmg\" (UID: \"40730d61-24e2-4810-89f7-0a34fe204440\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mvkmg" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.326540 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d58b6685-ca1a-4f73-a821-f5c4c37264ec-cert\") pod \"ingress-canary-8t729\" (UID: \"d58b6685-ca1a-4f73-a821-f5c4c37264ec\") " pod="openshift-ingress-canary/ingress-canary-8t729" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.326563 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sl8r\" (UniqueName: \"kubernetes.io/projected/3f3d083b-5922-4da3-ad9e-e5f323836cba-kube-api-access-9sl8r\") pod \"ingress-operator-5b745b69d9-ptx4l\" (UID: \"3f3d083b-5922-4da3-ad9e-e5f323836cba\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ptx4l" Nov 25 14:55:17 crc kubenswrapper[4806]: E1125 14:55:17.326601 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:17.826583577 +0000 UTC m=+150.478725988 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.326812 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97b5ca54-68e2-4db9-84fa-a77e3f61735e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5ppwt\" (UID: \"97b5ca54-68e2-4db9-84fa-a77e3f61735e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ppwt" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.326846 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hx4mf\" (UniqueName: \"kubernetes.io/projected/7f909c09-273f-48a4-8ef1-eb80eb473c5e-kube-api-access-hx4mf\") pod \"machine-config-server-jw49k\" (UID: \"7f909c09-273f-48a4-8ef1-eb80eb473c5e\") " pod="openshift-machine-config-operator/machine-config-server-jw49k" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.326905 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/76a76f7a-7f38-4aac-8a57-a60f332306cb-etcd-service-ca\") pod \"etcd-operator-b45778765-zf4ph\" (UID: \"76a76f7a-7f38-4aac-8a57-a60f332306cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zf4ph" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.326928 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk8qz\" (UniqueName: \"kubernetes.io/projected/76a76f7a-7f38-4aac-8a57-a60f332306cb-kube-api-access-kk8qz\") pod \"etcd-operator-b45778765-zf4ph\" (UID: \"76a76f7a-7f38-4aac-8a57-a60f332306cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zf4ph" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.326983 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7f909c09-273f-48a4-8ef1-eb80eb473c5e-certs\") pod \"machine-config-server-jw49k\" (UID: \"7f909c09-273f-48a4-8ef1-eb80eb473c5e\") " pod="openshift-machine-config-operator/machine-config-server-jw49k" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.326992 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40730d61-24e2-4810-89f7-0a34fe204440-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-mvkmg\" (UID: \"40730d61-24e2-4810-89f7-0a34fe204440\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mvkmg" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.327001 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qq74x\" (UniqueName: \"kubernetes.io/projected/4e9e656c-2e2c-4ed4-b720-8fdb639a029d-kube-api-access-qq74x\") pod \"router-default-5444994796-kfst9\" (UID: \"4e9e656c-2e2c-4ed4-b720-8fdb639a029d\") " pod="openshift-ingress/router-default-5444994796-kfst9" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.327040 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/3f3d083b-5922-4da3-ad9e-e5f323836cba-metrics-tls\") pod \"ingress-operator-5b745b69d9-ptx4l\" (UID: \"3f3d083b-5922-4da3-ad9e-e5f323836cba\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ptx4l" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.327059 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83db970d-f5a9-4a8f-9c65-0cd2500331b1-config\") pod \"controller-manager-879f6c89f-k8p4x\" (UID: \"83db970d-f5a9-4a8f-9c65-0cd2500331b1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k8p4x" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.327075 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0aa34022-429c-4bba-91a8-229a7b634a50-machine-approver-tls\") pod \"machine-approver-56656f9798-gjw2g\" (UID: \"0aa34022-429c-4bba-91a8-229a7b634a50\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gjw2g" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.327105 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4sh9\" (UniqueName: \"kubernetes.io/projected/3ad5dac9-54d3-4435-8f38-77e91d1965e0-kube-api-access-n4sh9\") pod \"cluster-samples-operator-665b6dd947-gfbwx\" (UID: \"3ad5dac9-54d3-4435-8f38-77e91d1965e0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfbwx" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.327132 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kczsc\" (UniqueName: \"kubernetes.io/projected/b8400987-b2f7-44fe-b1b3-8689c2465cd3-kube-api-access-kczsc\") pod \"console-f9d7485db-6j244\" (UID: \"b8400987-b2f7-44fe-b1b3-8689c2465cd3\") " pod="openshift-console/console-f9d7485db-6j244" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.327145 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b8400987-b2f7-44fe-b1b3-8689c2465cd3-console-config\") pod \"console-f9d7485db-6j244\" (UID: \"b8400987-b2f7-44fe-b1b3-8689c2465cd3\") " pod="openshift-console/console-f9d7485db-6j244" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.327165 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f394b01a-b495-4acf-bca9-0b23347a3358-images\") pod \"machine-api-operator-5694c8668f-9tjs2\" (UID: \"f394b01a-b495-4acf-bca9-0b23347a3358\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9tjs2" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.327185 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgjhv\" (UniqueName: \"kubernetes.io/projected/0aa34022-429c-4bba-91a8-229a7b634a50-kube-api-access-bgjhv\") pod \"machine-approver-56656f9798-gjw2g\" (UID: \"0aa34022-429c-4bba-91a8-229a7b634a50\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gjw2g" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.327204 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3f3d083b-5922-4da3-ad9e-e5f323836cba-bound-sa-token\") pod \"ingress-operator-5b745b69d9-ptx4l\" (UID: \"3f3d083b-5922-4da3-ad9e-e5f323836cba\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ptx4l" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.327223 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97b5ca54-68e2-4db9-84fa-a77e3f61735e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5ppwt\" (UID: \"97b5ca54-68e2-4db9-84fa-a77e3f61735e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ppwt" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.327249 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1f1e0355-7806-4025-88f6-992756ffbe86-config-volume\") pod \"dns-default-cszqz\" (UID: \"1f1e0355-7806-4025-88f6-992756ffbe86\") " pod="openshift-dns/dns-default-cszqz" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.327268 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5762\" (UniqueName: \"kubernetes.io/projected/17ede0a7-8694-488d-822c-47e76211a19f-kube-api-access-b5762\") pod \"olm-operator-6b444d44fb-tx5m5\" (UID: \"17ede0a7-8694-488d-822c-47e76211a19f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tx5m5" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.327286 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76a76f7a-7f38-4aac-8a57-a60f332306cb-serving-cert\") pod \"etcd-operator-b45778765-zf4ph\" (UID: \"76a76f7a-7f38-4aac-8a57-a60f332306cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zf4ph" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.327303 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwpf7\" (UniqueName: \"kubernetes.io/projected/83db970d-f5a9-4a8f-9c65-0cd2500331b1-kube-api-access-hwpf7\") pod \"controller-manager-879f6c89f-k8p4x\" (UID: \"83db970d-f5a9-4a8f-9c65-0cd2500331b1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k8p4x" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.327345 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2e32043e-a11b-473b-b42a-ecc01450a942-images\") pod \"machine-config-operator-74547568cd-j4l9j\" (UID: \"2e32043e-a11b-473b-b42a-ecc01450a942\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-j4l9j" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.327367 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrj8h\" (UniqueName: \"kubernetes.io/projected/97b5ca54-68e2-4db9-84fa-a77e3f61735e-kube-api-access-xrj8h\") pod \"kube-storage-version-migrator-operator-b67b599dd-5ppwt\" (UID: \"97b5ca54-68e2-4db9-84fa-a77e3f61735e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ppwt" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.327391 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83db970d-f5a9-4a8f-9c65-0cd2500331b1-client-ca\") pod \"controller-manager-879f6c89f-k8p4x\" (UID: \"83db970d-f5a9-4a8f-9c65-0cd2500331b1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k8p4x" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 
14:55:17.327411 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68hf2\" (UniqueName: \"kubernetes.io/projected/72d314ec-8059-4f5b-b4b7-91372748623e-kube-api-access-68hf2\") pod \"service-ca-9c57cc56f-lgjgk\" (UID: \"72d314ec-8059-4f5b-b4b7-91372748623e\") " pod="openshift-service-ca/service-ca-9c57cc56f-lgjgk" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.327430 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/76a76f7a-7f38-4aac-8a57-a60f332306cb-etcd-client\") pod \"etcd-operator-b45778765-zf4ph\" (UID: \"76a76f7a-7f38-4aac-8a57-a60f332306cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zf4ph" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.327447 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0aa34022-429c-4bba-91a8-229a7b634a50-auth-proxy-config\") pod \"machine-approver-56656f9798-gjw2g\" (UID: \"0aa34022-429c-4bba-91a8-229a7b634a50\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gjw2g" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.327461 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76a76f7a-7f38-4aac-8a57-a60f332306cb-config\") pod \"etcd-operator-b45778765-zf4ph\" (UID: \"76a76f7a-7f38-4aac-8a57-a60f332306cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zf4ph" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.327477 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4e9e656c-2e2c-4ed4-b720-8fdb639a029d-default-certificate\") pod \"router-default-5444994796-kfst9\" (UID: \"4e9e656c-2e2c-4ed4-b720-8fdb639a029d\") " pod="openshift-ingress/router-default-5444994796-kfst9" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.327491 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e9e656c-2e2c-4ed4-b720-8fdb639a029d-metrics-certs\") pod \"router-default-5444994796-kfst9\" (UID: \"4e9e656c-2e2c-4ed4-b720-8fdb639a029d\") " pod="openshift-ingress/router-default-5444994796-kfst9" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.327511 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2fe464df-b275-4f86-8750-6052a803b024-webhook-cert\") pod \"packageserver-d55dfcdfc-28dbr\" (UID: \"2fe464df-b275-4f86-8750-6052a803b024\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-28dbr" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.327528 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/024b2329-b8db-400c-bbaa-f77ba9a3bdae-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-vj65b\" (UID: \"024b2329-b8db-400c-bbaa-f77ba9a3bdae\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vj65b" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.327547 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fcbcb3e-8a88-465d-9b1e-8e547844bd93-config\") pod \"kube-controller-manager-operator-78b949d7b-n727s\" (UID: 
\"0fcbcb3e-8a88-465d-9b1e-8e547844bd93\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n727s" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.327562 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c14a961b-4eb5-4a10-abe7-bdd5ddff30bc-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gm728\" (UID: \"c14a961b-4eb5-4a10-abe7-bdd5ddff30bc\") " pod="openshift-marketplace/marketplace-operator-79b997595-gm728" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.327577 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8400987-b2f7-44fe-b1b3-8689c2465cd3-console-serving-cert\") pod \"console-f9d7485db-6j244\" (UID: \"b8400987-b2f7-44fe-b1b3-8689c2465cd3\") " pod="openshift-console/console-f9d7485db-6j244" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.327592 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2e32043e-a11b-473b-b42a-ecc01450a942-auth-proxy-config\") pod \"machine-config-operator-74547568cd-j4l9j\" (UID: \"2e32043e-a11b-473b-b42a-ecc01450a942\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-j4l9j" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.327610 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7f5cd5de-2e48-4c15-9c5e-f20368bc172b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-6hqx6\" (UID: \"7f5cd5de-2e48-4c15-9c5e-f20368bc172b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6hqx6" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.327626 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/72d314ec-8059-4f5b-b4b7-91372748623e-signing-cabundle\") pod \"service-ca-9c57cc56f-lgjgk\" (UID: \"72d314ec-8059-4f5b-b4b7-91372748623e\") " pod="openshift-service-ca/service-ca-9c57cc56f-lgjgk" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.327642 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-724h5\" (UniqueName: \"kubernetes.io/projected/ce6c946f-c804-4b57-bc37-8169c677e231-kube-api-access-724h5\") pod \"package-server-manager-789f6589d5-4s68g\" (UID: \"ce6c946f-c804-4b57-bc37-8169c677e231\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4s68g" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.327657 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fz94\" (UniqueName: \"kubernetes.io/projected/7142eedd-c71b-4c92-97a8-def92a981529-kube-api-access-5fz94\") pod \"service-ca-operator-777779d784-h4m8m\" (UID: \"7142eedd-c71b-4c92-97a8-def92a981529\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-h4m8m" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.327674 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8400987-b2f7-44fe-b1b3-8689c2465cd3-trusted-ca-bundle\") pod \"console-f9d7485db-6j244\" (UID: \"b8400987-b2f7-44fe-b1b3-8689c2465cd3\") " 
pod="openshift-console/console-f9d7485db-6j244" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.327686 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97b5ca54-68e2-4db9-84fa-a77e3f61735e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5ppwt\" (UID: \"97b5ca54-68e2-4db9-84fa-a77e3f61735e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ppwt" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.327693 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f394b01a-b495-4acf-bca9-0b23347a3358-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-9tjs2\" (UID: \"f394b01a-b495-4acf-bca9-0b23347a3358\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9tjs2" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.327743 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41fbdcab-7837-4273-8aaa-70b4e1667988-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fhvbk\" (UID: \"41fbdcab-7837-4273-8aaa-70b4e1667988\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fhvbk" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.327767 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83db970d-f5a9-4a8f-9c65-0cd2500331b1-serving-cert\") pod \"controller-manager-879f6c89f-k8p4x\" (UID: \"83db970d-f5a9-4a8f-9c65-0cd2500331b1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k8p4x" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.327788 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/9a86e8c4-3e5a-4fd1-bad8-d314f474c4ee-mountpoint-dir\") pod \"csi-hostpathplugin-x92cw\" (UID: \"9a86e8c4-3e5a-4fd1-bad8-d314f474c4ee\") " pod="hostpath-provisioner/csi-hostpathplugin-x92cw" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.327810 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2fe464df-b275-4f86-8750-6052a803b024-tmpfs\") pod \"packageserver-d55dfcdfc-28dbr\" (UID: \"2fe464df-b275-4f86-8750-6052a803b024\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-28dbr" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.327834 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6zsz\" (UniqueName: \"kubernetes.io/projected/2e32043e-a11b-473b-b42a-ecc01450a942-kube-api-access-q6zsz\") pod \"machine-config-operator-74547568cd-j4l9j\" (UID: \"2e32043e-a11b-473b-b42a-ecc01450a942\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-j4l9j" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.327856 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/9a86e8c4-3e5a-4fd1-bad8-d314f474c4ee-csi-data-dir\") pod \"csi-hostpathplugin-x92cw\" (UID: \"9a86e8c4-3e5a-4fd1-bad8-d314f474c4ee\") " pod="hostpath-provisioner/csi-hostpathplugin-x92cw" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.327878 4806 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/024b2329-b8db-400c-bbaa-f77ba9a3bdae-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-vj65b\" (UID: \"024b2329-b8db-400c-bbaa-f77ba9a3bdae\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vj65b" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.327902 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3e22d0ac-ad84-41cc-9e33-de5c90e61f2c-srv-cert\") pod \"catalog-operator-68c6474976-9shgk\" (UID: \"3e22d0ac-ad84-41cc-9e33-de5c90e61f2c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9shgk" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.327929 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b43d27a6-a9d7-484a-a8d4-f12e06bce31f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-7jcqc\" (UID: \"b43d27a6-a9d7-484a-a8d4-f12e06bce31f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7jcqc" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.327954 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2fe464df-b275-4f86-8750-6052a803b024-apiservice-cert\") pod \"packageserver-d55dfcdfc-28dbr\" (UID: \"2fe464df-b275-4f86-8750-6052a803b024\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-28dbr" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.328023 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/40730d61-24e2-4810-89f7-0a34fe204440-audit-policies\") pod \"apiserver-7bbb656c7d-mvkmg\" (UID: \"40730d61-24e2-4810-89f7-0a34fe204440\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mvkmg" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.328048 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/40730d61-24e2-4810-89f7-0a34fe204440-etcd-client\") pod \"apiserver-7bbb656c7d-mvkmg\" (UID: \"40730d61-24e2-4810-89f7-0a34fe204440\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mvkmg" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.328068 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4e9e656c-2e2c-4ed4-b720-8fdb639a029d-stats-auth\") pod \"router-default-5444994796-kfst9\" (UID: \"4e9e656c-2e2c-4ed4-b720-8fdb639a029d\") " pod="openshift-ingress/router-default-5444994796-kfst9" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.328094 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b8400987-b2f7-44fe-b1b3-8689c2465cd3-oauth-serving-cert\") pod \"console-f9d7485db-6j244\" (UID: \"b8400987-b2f7-44fe-b1b3-8689c2465cd3\") " pod="openshift-console/console-f9d7485db-6j244" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.328114 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3f9429a-5f3e-45bf-b7cc-dea3bee3e957-config\") pod \"route-controller-manager-6576b87f9c-p5tx2\" (UID: \"d3f9429a-5f3e-45bf-b7cc-dea3bee3e957\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5tx2" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.328138 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3f9429a-5f3e-45bf-b7cc-dea3bee3e957-client-ca\") pod \"route-controller-manager-6576b87f9c-p5tx2\" (UID: \"d3f9429a-5f3e-45bf-b7cc-dea3bee3e957\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5tx2" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.328160 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cr7w6\" (UniqueName: \"kubernetes.io/projected/d3f9429a-5f3e-45bf-b7cc-dea3bee3e957-kube-api-access-cr7w6\") pod \"route-controller-manager-6576b87f9c-p5tx2\" (UID: \"d3f9429a-5f3e-45bf-b7cc-dea3bee3e957\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5tx2" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.328200 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2e32043e-a11b-473b-b42a-ecc01450a942-proxy-tls\") pod \"machine-config-operator-74547568cd-j4l9j\" (UID: \"2e32043e-a11b-473b-b42a-ecc01450a942\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-j4l9j" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.328235 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f394b01a-b495-4acf-bca9-0b23347a3358-config\") pod \"machine-api-operator-5694c8668f-9tjs2\" (UID: \"f394b01a-b495-4acf-bca9-0b23347a3358\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9tjs2" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.328268 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7f909c09-273f-48a4-8ef1-eb80eb473c5e-node-bootstrap-token\") pod \"machine-config-server-jw49k\" (UID: \"7f909c09-273f-48a4-8ef1-eb80eb473c5e\") " pod="openshift-machine-config-operator/machine-config-server-jw49k" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.328306 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ce6c946f-c804-4b57-bc37-8169c677e231-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-4s68g\" (UID: \"ce6c946f-c804-4b57-bc37-8169c677e231\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4s68g" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.328500 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/40730d61-24e2-4810-89f7-0a34fe204440-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-mvkmg\" (UID: \"40730d61-24e2-4810-89f7-0a34fe204440\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mvkmg" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.328578 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b8400987-b2f7-44fe-b1b3-8689c2465cd3-console-oauth-config\") pod \"console-f9d7485db-6j244\" (UID: \"b8400987-b2f7-44fe-b1b3-8689c2465cd3\") " pod="openshift-console/console-f9d7485db-6j244" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.328610 
4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ld6b7\" (UniqueName: \"kubernetes.io/projected/1f1e0355-7806-4025-88f6-992756ffbe86-kube-api-access-ld6b7\") pod \"dns-default-cszqz\" (UID: \"1f1e0355-7806-4025-88f6-992756ffbe86\") " pod="openshift-dns/dns-default-cszqz" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.328633 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b8400987-b2f7-44fe-b1b3-8689c2465cd3-service-ca\") pod \"console-f9d7485db-6j244\" (UID: \"b8400987-b2f7-44fe-b1b3-8689c2465cd3\") " pod="openshift-console/console-f9d7485db-6j244" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.328667 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/76a76f7a-7f38-4aac-8a57-a60f332306cb-etcd-ca\") pod \"etcd-operator-b45778765-zf4ph\" (UID: \"76a76f7a-7f38-4aac-8a57-a60f332306cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zf4ph" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.328714 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/40730d61-24e2-4810-89f7-0a34fe204440-audit-dir\") pod \"apiserver-7bbb656c7d-mvkmg\" (UID: \"40730d61-24e2-4810-89f7-0a34fe204440\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mvkmg" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.328737 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1f1e0355-7806-4025-88f6-992756ffbe86-metrics-tls\") pod \"dns-default-cszqz\" (UID: \"1f1e0355-7806-4025-88f6-992756ffbe86\") " pod="openshift-dns/dns-default-cszqz" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.328758 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fcbcb3e-8a88-465d-9b1e-8e547844bd93-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-n727s\" (UID: \"0fcbcb3e-8a88-465d-9b1e-8e547844bd93\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n727s" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.328780 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/024b2329-b8db-400c-bbaa-f77ba9a3bdae-config\") pod \"kube-apiserver-operator-766d6c64bb-vj65b\" (UID: \"024b2329-b8db-400c-bbaa-f77ba9a3bdae\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vj65b" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.328802 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41fbdcab-7837-4273-8aaa-70b4e1667988-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fhvbk\" (UID: \"41fbdcab-7837-4273-8aaa-70b4e1667988\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fhvbk" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.328855 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnlbk\" (UniqueName: \"kubernetes.io/projected/16a8fa04-87f4-46fa-a310-aa62275684c0-kube-api-access-xnlbk\") pod \"machine-config-controller-84d6567774-grv4v\" (UID: \"16a8fa04-87f4-46fa-a310-aa62275684c0\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-grv4v" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.328906 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/83db970d-f5a9-4a8f-9c65-0cd2500331b1-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-k8p4x\" (UID: \"83db970d-f5a9-4a8f-9c65-0cd2500331b1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k8p4x" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.329028 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tr2pl\" (UniqueName: \"kubernetes.io/projected/c14a961b-4eb5-4a10-abe7-bdd5ddff30bc-kube-api-access-tr2pl\") pod \"marketplace-operator-79b997595-gm728\" (UID: \"c14a961b-4eb5-4a10-abe7-bdd5ddff30bc\") " pod="openshift-marketplace/marketplace-operator-79b997595-gm728" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.329087 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/3ad5dac9-54d3-4435-8f38-77e91d1965e0-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-gfbwx\" (UID: \"3ad5dac9-54d3-4435-8f38-77e91d1965e0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfbwx" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.329111 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/41fbdcab-7837-4273-8aaa-70b4e1667988-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fhvbk\" (UID: \"41fbdcab-7837-4273-8aaa-70b4e1667988\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fhvbk" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.329153 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4qnq\" (UniqueName: \"kubernetes.io/projected/40730d61-24e2-4810-89f7-0a34fe204440-kube-api-access-b4qnq\") pod \"apiserver-7bbb656c7d-mvkmg\" (UID: \"40730d61-24e2-4810-89f7-0a34fe204440\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mvkmg" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.329180 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4wjr\" (UniqueName: \"kubernetes.io/projected/f394b01a-b495-4acf-bca9-0b23347a3358-kube-api-access-k4wjr\") pod \"machine-api-operator-5694c8668f-9tjs2\" (UID: \"f394b01a-b495-4acf-bca9-0b23347a3358\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9tjs2" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.329203 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0fcbcb3e-8a88-465d-9b1e-8e547844bd93-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-n727s\" (UID: \"0fcbcb3e-8a88-465d-9b1e-8e547844bd93\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n727s" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.329243 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpjrb\" (UniqueName: \"kubernetes.io/projected/9a86e8c4-3e5a-4fd1-bad8-d314f474c4ee-kube-api-access-fpjrb\") pod \"csi-hostpathplugin-x92cw\" (UID: \"9a86e8c4-3e5a-4fd1-bad8-d314f474c4ee\") " 
pod="hostpath-provisioner/csi-hostpathplugin-x92cw" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.329325 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3f3d083b-5922-4da3-ad9e-e5f323836cba-trusted-ca\") pod \"ingress-operator-5b745b69d9-ptx4l\" (UID: \"3f3d083b-5922-4da3-ad9e-e5f323836cba\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ptx4l" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.329362 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-675rv\" (UniqueName: \"kubernetes.io/projected/1531828a-4e80-4d77-92c0-99e9ae888fae-kube-api-access-675rv\") pod \"migrator-59844c95c7-2nrmh\" (UID: \"1531828a-4e80-4d77-92c0-99e9ae888fae\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2nrmh" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.329505 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0aa34022-429c-4bba-91a8-229a7b634a50-config\") pod \"machine-approver-56656f9798-gjw2g\" (UID: \"0aa34022-429c-4bba-91a8-229a7b634a50\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gjw2g" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.329538 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3e22d0ac-ad84-41cc-9e33-de5c90e61f2c-profile-collector-cert\") pod \"catalog-operator-68c6474976-9shgk\" (UID: \"3e22d0ac-ad84-41cc-9e33-de5c90e61f2c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9shgk" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.329559 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e9e656c-2e2c-4ed4-b720-8fdb639a029d-service-ca-bundle\") pod \"router-default-5444994796-kfst9\" (UID: \"4e9e656c-2e2c-4ed4-b720-8fdb639a029d\") " pod="openshift-ingress/router-default-5444994796-kfst9" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.329578 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/40730d61-24e2-4810-89f7-0a34fe204440-encryption-config\") pod \"apiserver-7bbb656c7d-mvkmg\" (UID: \"40730d61-24e2-4810-89f7-0a34fe204440\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mvkmg" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.329580 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7142eedd-c71b-4c92-97a8-def92a981529-config\") pod \"service-ca-operator-777779d784-h4m8m\" (UID: \"7142eedd-c71b-4c92-97a8-def92a981529\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-h4m8m" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.329640 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kg5m\" (UniqueName: \"kubernetes.io/projected/2fe464df-b275-4f86-8750-6052a803b024-kube-api-access-2kg5m\") pod \"packageserver-d55dfcdfc-28dbr\" (UID: \"2fe464df-b275-4f86-8750-6052a803b024\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-28dbr" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.329662 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" 
(UniqueName: \"kubernetes.io/host-path/9a86e8c4-3e5a-4fd1-bad8-d314f474c4ee-registration-dir\") pod \"csi-hostpathplugin-x92cw\" (UID: \"9a86e8c4-3e5a-4fd1-bad8-d314f474c4ee\") " pod="hostpath-provisioner/csi-hostpathplugin-x92cw" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.329681 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79zpf\" (UniqueName: \"kubernetes.io/projected/b43d27a6-a9d7-484a-a8d4-f12e06bce31f-kube-api-access-79zpf\") pod \"multus-admission-controller-857f4d67dd-7jcqc\" (UID: \"b43d27a6-a9d7-484a-a8d4-f12e06bce31f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7jcqc" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.329702 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hd7x8\" (UniqueName: \"kubernetes.io/projected/d58b6685-ca1a-4f73-a821-f5c4c37264ec-kube-api-access-hd7x8\") pod \"ingress-canary-8t729\" (UID: \"d58b6685-ca1a-4f73-a821-f5c4c37264ec\") " pod="openshift-ingress-canary/ingress-canary-8t729" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.329730 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/16a8fa04-87f4-46fa-a310-aa62275684c0-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-grv4v\" (UID: \"16a8fa04-87f4-46fa-a310-aa62275684c0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-grv4v" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.329753 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3f9429a-5f3e-45bf-b7cc-dea3bee3e957-serving-cert\") pod \"route-controller-manager-6576b87f9c-p5tx2\" (UID: \"d3f9429a-5f3e-45bf-b7cc-dea3bee3e957\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5tx2" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.330836 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9a86e8c4-3e5a-4fd1-bad8-d314f474c4ee-registration-dir\") pod \"csi-hostpathplugin-x92cw\" (UID: \"9a86e8c4-3e5a-4fd1-bad8-d314f474c4ee\") " pod="hostpath-provisioner/csi-hostpathplugin-x92cw" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.330865 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/76a76f7a-7f38-4aac-8a57-a60f332306cb-etcd-service-ca\") pod \"etcd-operator-b45778765-zf4ph\" (UID: \"76a76f7a-7f38-4aac-8a57-a60f332306cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zf4ph" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.331564 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.331817 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/40730d61-24e2-4810-89f7-0a34fe204440-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-mvkmg\" (UID: \"40730d61-24e2-4810-89f7-0a34fe204440\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mvkmg" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.332001 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/41fbdcab-7837-4273-8aaa-70b4e1667988-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fhvbk\" (UID: \"41fbdcab-7837-4273-8aaa-70b4e1667988\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fhvbk" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.332015 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f394b01a-b495-4acf-bca9-0b23347a3358-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-9tjs2\" (UID: \"f394b01a-b495-4acf-bca9-0b23347a3358\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9tjs2" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.332057 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0aa34022-429c-4bba-91a8-229a7b634a50-auth-proxy-config\") pod \"machine-approver-56656f9798-gjw2g\" (UID: \"0aa34022-429c-4bba-91a8-229a7b634a50\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gjw2g" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.332541 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/9a86e8c4-3e5a-4fd1-bad8-d314f474c4ee-mountpoint-dir\") pod \"csi-hostpathplugin-x92cw\" (UID: \"9a86e8c4-3e5a-4fd1-bad8-d314f474c4ee\") " pod="hostpath-provisioner/csi-hostpathplugin-x92cw" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.332673 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/40730d61-24e2-4810-89f7-0a34fe204440-audit-policies\") pod \"apiserver-7bbb656c7d-mvkmg\" (UID: \"40730d61-24e2-4810-89f7-0a34fe204440\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mvkmg" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.332834 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/9a86e8c4-3e5a-4fd1-bad8-d314f474c4ee-csi-data-dir\") pod \"csi-hostpathplugin-x92cw\" (UID: \"9a86e8c4-3e5a-4fd1-bad8-d314f474c4ee\") " pod="hostpath-provisioner/csi-hostpathplugin-x92cw" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.333123 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76a76f7a-7f38-4aac-8a57-a60f332306cb-config\") pod \"etcd-operator-b45778765-zf4ph\" (UID: \"76a76f7a-7f38-4aac-8a57-a60f332306cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zf4ph" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.333615 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83db970d-f5a9-4a8f-9c65-0cd2500331b1-client-ca\") pod \"controller-manager-879f6c89f-k8p4x\" (UID: \"83db970d-f5a9-4a8f-9c65-0cd2500331b1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k8p4x" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.333812 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2e32043e-a11b-473b-b42a-ecc01450a942-auth-proxy-config\") pod \"machine-config-operator-74547568cd-j4l9j\" (UID: \"2e32043e-a11b-473b-b42a-ecc01450a942\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-j4l9j" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 
14:55:17.335207 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3f9429a-5f3e-45bf-b7cc-dea3bee3e957-config\") pod \"route-controller-manager-6576b87f9c-p5tx2\" (UID: \"d3f9429a-5f3e-45bf-b7cc-dea3bee3e957\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5tx2" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.335633 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3f3d083b-5922-4da3-ad9e-e5f323836cba-trusted-ca\") pod \"ingress-operator-5b745b69d9-ptx4l\" (UID: \"3f3d083b-5922-4da3-ad9e-e5f323836cba\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ptx4l" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.336452 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0aa34022-429c-4bba-91a8-229a7b634a50-config\") pod \"machine-approver-56656f9798-gjw2g\" (UID: \"0aa34022-429c-4bba-91a8-229a7b634a50\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gjw2g" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.336745 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/40730d61-24e2-4810-89f7-0a34fe204440-audit-dir\") pod \"apiserver-7bbb656c7d-mvkmg\" (UID: \"40730d61-24e2-4810-89f7-0a34fe204440\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mvkmg" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.336802 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f394b01a-b495-4acf-bca9-0b23347a3358-config\") pod \"machine-api-operator-5694c8668f-9tjs2\" (UID: \"f394b01a-b495-4acf-bca9-0b23347a3358\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9tjs2" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.336928 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0aa34022-429c-4bba-91a8-229a7b634a50-machine-approver-tls\") pod \"machine-approver-56656f9798-gjw2g\" (UID: \"0aa34022-429c-4bba-91a8-229a7b634a50\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gjw2g" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.337049 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3f9429a-5f3e-45bf-b7cc-dea3bee3e957-client-ca\") pod \"route-controller-manager-6576b87f9c-p5tx2\" (UID: \"d3f9429a-5f3e-45bf-b7cc-dea3bee3e957\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5tx2" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.337393 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83db970d-f5a9-4a8f-9c65-0cd2500331b1-serving-cert\") pod \"controller-manager-879f6c89f-k8p4x\" (UID: \"83db970d-f5a9-4a8f-9c65-0cd2500331b1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k8p4x" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.337641 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fcbcb3e-8a88-465d-9b1e-8e547844bd93-config\") pod \"kube-controller-manager-operator-78b949d7b-n727s\" (UID: \"0fcbcb3e-8a88-465d-9b1e-8e547844bd93\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n727s" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.337652 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/16a8fa04-87f4-46fa-a310-aa62275684c0-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-grv4v\" (UID: \"16a8fa04-87f4-46fa-a310-aa62275684c0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-grv4v" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.337727 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/83db970d-f5a9-4a8f-9c65-0cd2500331b1-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-k8p4x\" (UID: \"83db970d-f5a9-4a8f-9c65-0cd2500331b1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k8p4x" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.337813 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83db970d-f5a9-4a8f-9c65-0cd2500331b1-config\") pod \"controller-manager-879f6c89f-k8p4x\" (UID: \"83db970d-f5a9-4a8f-9c65-0cd2500331b1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k8p4x" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.337816 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/76a76f7a-7f38-4aac-8a57-a60f332306cb-etcd-ca\") pod \"etcd-operator-b45778765-zf4ph\" (UID: \"76a76f7a-7f38-4aac-8a57-a60f332306cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zf4ph" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.338196 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/2fe464df-b275-4f86-8750-6052a803b024-tmpfs\") pod \"packageserver-d55dfcdfc-28dbr\" (UID: \"2fe464df-b275-4f86-8750-6052a803b024\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-28dbr" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.338403 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fcbcb3e-8a88-465d-9b1e-8e547844bd93-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-n727s\" (UID: \"0fcbcb3e-8a88-465d-9b1e-8e547844bd93\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n727s" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.338561 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/40730d61-24e2-4810-89f7-0a34fe204440-serving-cert\") pod \"apiserver-7bbb656c7d-mvkmg\" (UID: \"40730d61-24e2-4810-89f7-0a34fe204440\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mvkmg" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.338710 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f394b01a-b495-4acf-bca9-0b23347a3358-images\") pod \"machine-api-operator-5694c8668f-9tjs2\" (UID: \"f394b01a-b495-4acf-bca9-0b23347a3358\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9tjs2" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.339038 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/b8400987-b2f7-44fe-b1b3-8689c2465cd3-oauth-serving-cert\") pod \"console-f9d7485db-6j244\" (UID: \"b8400987-b2f7-44fe-b1b3-8689c2465cd3\") " pod="openshift-console/console-f9d7485db-6j244" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.339239 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b8400987-b2f7-44fe-b1b3-8689c2465cd3-service-ca\") pod \"console-f9d7485db-6j244\" (UID: \"b8400987-b2f7-44fe-b1b3-8689c2465cd3\") " pod="openshift-console/console-f9d7485db-6j244" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.339396 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8400987-b2f7-44fe-b1b3-8689c2465cd3-console-serving-cert\") pod \"console-f9d7485db-6j244\" (UID: \"b8400987-b2f7-44fe-b1b3-8689c2465cd3\") " pod="openshift-console/console-f9d7485db-6j244" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.339420 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/76a76f7a-7f38-4aac-8a57-a60f332306cb-etcd-client\") pod \"etcd-operator-b45778765-zf4ph\" (UID: \"76a76f7a-7f38-4aac-8a57-a60f332306cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zf4ph" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.339487 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8400987-b2f7-44fe-b1b3-8689c2465cd3-trusted-ca-bundle\") pod \"console-f9d7485db-6j244\" (UID: \"b8400987-b2f7-44fe-b1b3-8689c2465cd3\") " pod="openshift-console/console-f9d7485db-6j244" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.339819 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3f9429a-5f3e-45bf-b7cc-dea3bee3e957-serving-cert\") pod \"route-controller-manager-6576b87f9c-p5tx2\" (UID: \"d3f9429a-5f3e-45bf-b7cc-dea3bee3e957\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5tx2" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.340221 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b8400987-b2f7-44fe-b1b3-8689c2465cd3-console-oauth-config\") pod \"console-f9d7485db-6j244\" (UID: \"b8400987-b2f7-44fe-b1b3-8689c2465cd3\") " pod="openshift-console/console-f9d7485db-6j244" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.340484 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/40730d61-24e2-4810-89f7-0a34fe204440-etcd-client\") pod \"apiserver-7bbb656c7d-mvkmg\" (UID: \"40730d61-24e2-4810-89f7-0a34fe204440\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mvkmg" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.341057 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76a76f7a-7f38-4aac-8a57-a60f332306cb-serving-cert\") pod \"etcd-operator-b45778765-zf4ph\" (UID: \"76a76f7a-7f38-4aac-8a57-a60f332306cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zf4ph" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.341507 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/41fbdcab-7837-4273-8aaa-70b4e1667988-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fhvbk\" (UID: \"41fbdcab-7837-4273-8aaa-70b4e1667988\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fhvbk" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.341882 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3f3d083b-5922-4da3-ad9e-e5f323836cba-metrics-tls\") pod \"ingress-operator-5b745b69d9-ptx4l\" (UID: \"3f3d083b-5922-4da3-ad9e-e5f323836cba\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ptx4l" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.342095 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/3ad5dac9-54d3-4435-8f38-77e91d1965e0-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-gfbwx\" (UID: \"3ad5dac9-54d3-4435-8f38-77e91d1965e0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfbwx" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.349533 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.360165 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97b5ca54-68e2-4db9-84fa-a77e3f61735e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5ppwt\" (UID: \"97b5ca54-68e2-4db9-84fa-a77e3f61735e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ppwt" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.367306 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.386498 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.406923 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.410370 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/024b2329-b8db-400c-bbaa-f77ba9a3bdae-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-vj65b\" (UID: \"024b2329-b8db-400c-bbaa-f77ba9a3bdae\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vj65b" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.427295 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.431865 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:17 crc kubenswrapper[4806]: E1125 14:55:17.432076 4806 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:17.93204851 +0000 UTC m=+150.584190911 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.432713 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:17 crc kubenswrapper[4806]: E1125 14:55:17.433158 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:17.933143643 +0000 UTC m=+150.585286044 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.446202 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.466551 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.477099 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/024b2329-b8db-400c-bbaa-f77ba9a3bdae-config\") pod \"kube-apiserver-operator-766d6c64bb-vj65b\" (UID: \"024b2329-b8db-400c-bbaa-f77ba9a3bdae\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vj65b" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.491294 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.506558 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.527260 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.533679 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:17 crc kubenswrapper[4806]: E1125 14:55:17.534028 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:18.034006141 +0000 UTC m=+150.686148552 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.534519 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:17 crc kubenswrapper[4806]: E1125 14:55:17.534858 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:18.034850416 +0000 UTC m=+150.686992827 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.546708 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.555874 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4e9e656c-2e2c-4ed4-b720-8fdb639a029d-stats-auth\") pod \"router-default-5444994796-kfst9\" (UID: \"4e9e656c-2e2c-4ed4-b720-8fdb639a029d\") " pod="openshift-ingress/router-default-5444994796-kfst9" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.566966 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.580662 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e9e656c-2e2c-4ed4-b720-8fdb639a029d-metrics-certs\") pod \"router-default-5444994796-kfst9\" (UID: \"4e9e656c-2e2c-4ed4-b720-8fdb639a029d\") " pod="openshift-ingress/router-default-5444994796-kfst9" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.586394 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.607934 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.620242 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4e9e656c-2e2c-4ed4-b720-8fdb639a029d-default-certificate\") pod \"router-default-5444994796-kfst9\" (UID: \"4e9e656c-2e2c-4ed4-b720-8fdb639a029d\") " pod="openshift-ingress/router-default-5444994796-kfst9" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.626585 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.635755 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.635926 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e9e656c-2e2c-4ed4-b720-8fdb639a029d-service-ca-bundle\") pod \"router-default-5444994796-kfst9\" (UID: \"4e9e656c-2e2c-4ed4-b720-8fdb639a029d\") " pod="openshift-ingress/router-default-5444994796-kfst9" Nov 25 14:55:17 crc kubenswrapper[4806]: E1125 14:55:17.636206 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:18.136172227 +0000 UTC m=+150.788314638 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.646548 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.666659 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.687145 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.704405 4806 request.go:700] Waited for 1.003612055s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serviceaccount-dockercfg-rq7zk&limit=500&resourceVersion=0 Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.706559 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.726891 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.737773 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:17 crc kubenswrapper[4806]: E1125 14:55:17.738191 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:18.238171289 +0000 UTC m=+150.890313700 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.740984 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/17ede0a7-8694-488d-822c-47e76211a19f-srv-cert\") pod \"olm-operator-6b444d44fb-tx5m5\" (UID: \"17ede0a7-8694-488d-822c-47e76211a19f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tx5m5" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.745695 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.749769 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eeac792f-d07c-446b-8dee-00f726ea273c-secret-volume\") pod \"collect-profiles-29401365-h6lh4\" (UID: \"eeac792f-d07c-446b-8dee-00f726ea273c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-h6lh4" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.757850 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3e22d0ac-ad84-41cc-9e33-de5c90e61f2c-profile-collector-cert\") pod \"catalog-operator-68c6474976-9shgk\" (UID: \"3e22d0ac-ad84-41cc-9e33-de5c90e61f2c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9shgk" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.758743 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/17ede0a7-8694-488d-822c-47e76211a19f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-tx5m5\" (UID: \"17ede0a7-8694-488d-822c-47e76211a19f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tx5m5" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.766562 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.786694 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.795482 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b43d27a6-a9d7-484a-a8d4-f12e06bce31f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-7jcqc\" (UID: \"b43d27a6-a9d7-484a-a8d4-f12e06bce31f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7jcqc" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.806749 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.827381 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.839303 4806 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:17 crc kubenswrapper[4806]: E1125 14:55:17.839439 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:18.339415318 +0000 UTC m=+150.991557739 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.839998 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:17 crc kubenswrapper[4806]: E1125 14:55:17.840378 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:18.340365796 +0000 UTC m=+150.992508217 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.847201 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.866431 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.879473 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7142eedd-c71b-4c92-97a8-def92a981529-serving-cert\") pod \"service-ca-operator-777779d784-h4m8m\" (UID: \"7142eedd-c71b-4c92-97a8-def92a981529\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-h4m8m" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.886245 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.907097 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.910385 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7142eedd-c71b-4c92-97a8-def92a981529-config\") pod \"service-ca-operator-777779d784-h4m8m\" (UID: \"7142eedd-c71b-4c92-97a8-def92a981529\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-h4m8m" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.927004 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.934090 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2e32043e-a11b-473b-b42a-ecc01450a942-images\") pod \"machine-config-operator-74547568cd-j4l9j\" (UID: \"2e32043e-a11b-473b-b42a-ecc01450a942\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-j4l9j" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.940974 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:17 crc kubenswrapper[4806]: E1125 14:55:17.941143 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:18.44110429 +0000 UTC m=+151.093246701 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.941361 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:17 crc kubenswrapper[4806]: E1125 14:55:17.941868 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:18.441857283 +0000 UTC m=+151.093999694 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.946124 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.967003 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 25 14:55:17 crc kubenswrapper[4806]: I1125 14:55:17.981127 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2e32043e-a11b-473b-b42a-ecc01450a942-proxy-tls\") pod \"machine-config-operator-74547568cd-j4l9j\" (UID: \"2e32043e-a11b-473b-b42a-ecc01450a942\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-j4l9j" Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.001400 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dqh7\" (UniqueName: \"kubernetes.io/projected/4bb1d689-2d28-457a-9c48-0b21c3ac56b2-kube-api-access-2dqh7\") pod \"dns-operator-744455d44c-4c9r4\" (UID: \"4bb1d689-2d28-457a-9c48-0b21c3ac56b2\") " pod="openshift-dns-operator/dns-operator-744455d44c-4c9r4" Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.012733 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-4c9r4" Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.026219 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2be4e761-7ffb-42b6-8656-8f591d749624-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-trxgq\" (UID: \"2be4e761-7ffb-42b6-8656-8f591d749624\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trxgq" Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.026791 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.032333 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3e22d0ac-ad84-41cc-9e33-de5c90e61f2c-srv-cert\") pod \"catalog-operator-68c6474976-9shgk\" (UID: \"3e22d0ac-ad84-41cc-9e33-de5c90e61f2c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9shgk" Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.042193 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:18 crc kubenswrapper[4806]: E1125 14:55:18.042682 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:18.542633278 +0000 UTC m=+151.194775689 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.069761 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg8p4\" (UniqueName: \"kubernetes.io/projected/f49c7a82-aef3-47bf-a1bd-8b443b98be2d-kube-api-access-sg8p4\") pod \"openshift-apiserver-operator-796bbdcf4f-gjhkx\" (UID: \"f49c7a82-aef3-47bf-a1bd-8b443b98be2d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gjhkx" Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.071581 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.083395 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/16a8fa04-87f4-46fa-a310-aa62275684c0-proxy-tls\") pod \"machine-config-controller-84d6567774-grv4v\" (UID: \"16a8fa04-87f4-46fa-a310-aa62275684c0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-grv4v" Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.088506 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.142859 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx92b\" (UniqueName: \"kubernetes.io/projected/3a93da81-98cb-4a53-9c02-60cc144ebf9d-kube-api-access-fx92b\") pod \"apiserver-76f77b778f-g6w68\" (UID: \"3a93da81-98cb-4a53-9c02-60cc144ebf9d\") " pod="openshift-apiserver/apiserver-76f77b778f-g6w68" Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.143534 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skwfs\" (UniqueName: \"kubernetes.io/projected/be0fd1be-42ae-4954-99f6-14807b522398-kube-api-access-skwfs\") pod \"openshift-config-operator-7777fb866f-hcfmr\" (UID: \"be0fd1be-42ae-4954-99f6-14807b522398\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-hcfmr" Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.144540 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:18 crc kubenswrapper[4806]: E1125 14:55:18.144898 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:18.644878247 +0000 UTC m=+151.297020658 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.167904 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-299jk\" (UniqueName: \"kubernetes.io/projected/ca7da513-6cf5-43fc-afbe-ab1c8e785130-kube-api-access-299jk\") pod \"oauth-openshift-558db77b4-bn2sz\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz" Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.183266 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qbxw\" (UniqueName: \"kubernetes.io/projected/307ebecf-190d-447f-ac14-28516ef87e6a-kube-api-access-7qbxw\") pod \"openshift-controller-manager-operator-756b6f6bc6-zfhjl\" (UID: \"307ebecf-190d-447f-ac14-28516ef87e6a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zfhjl" Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.200350 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-4c9r4"] Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.203546 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkzmw\" (UniqueName: \"kubernetes.io/projected/2be4e761-7ffb-42b6-8656-8f591d749624-kube-api-access-mkzmw\") pod \"cluster-image-registry-operator-dc59b4c8b-trxgq\" (UID: \"2be4e761-7ffb-42b6-8656-8f591d749624\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trxgq" Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.207136 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 25 14:55:18 crc kubenswrapper[4806]: W1125 14:55:18.208537 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4bb1d689_2d28_457a_9c48_0b21c3ac56b2.slice/crio-0e9d16718feaa4f8f669bb7887b5e404d5c8aa4a381a099443849f52b2b1f104 WatchSource:0}: Error finding container 0e9d16718feaa4f8f669bb7887b5e404d5c8aa4a381a099443849f52b2b1f104: Status 404 returned error can't find the container with id 0e9d16718feaa4f8f669bb7887b5e404d5c8aa4a381a099443849f52b2b1f104 Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.212834 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trxgq"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.226691 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.237639 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7f5cd5de-2e48-4c15-9c5e-f20368bc172b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-6hqx6\" (UID: \"7f5cd5de-2e48-4c15-9c5e-f20368bc172b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6hqx6"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.245998 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.246179 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 14:55:18 crc kubenswrapper[4806]: E1125 14:55:18.246389 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:18.746361523 +0000 UTC m=+151.398503934 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.246624 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.246982 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eeac792f-d07c-446b-8dee-00f726ea273c-config-volume\") pod \"collect-profiles-29401365-h6lh4\" (UID: \"eeac792f-d07c-446b-8dee-00f726ea273c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-h6lh4"
Nov 25 14:55:18 crc kubenswrapper[4806]: E1125 14:55:18.246990 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:18.746982282 +0000 UTC m=+151.399124773 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.259842 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gjhkx"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.267917 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.286668 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.292859 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-g6w68"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.298919 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2fe464df-b275-4f86-8750-6052a803b024-apiservice-cert\") pod \"packageserver-d55dfcdfc-28dbr\" (UID: \"2fe464df-b275-4f86-8750-6052a803b024\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-28dbr"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.299096 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2fe464df-b275-4f86-8750-6052a803b024-webhook-cert\") pod \"packageserver-d55dfcdfc-28dbr\" (UID: \"2fe464df-b275-4f86-8750-6052a803b024\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-28dbr"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.300838 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zfhjl"
Nov 25 14:55:18 crc kubenswrapper[4806]: E1125 14:55:18.339808 4806 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition
Nov 25 14:55:18 crc kubenswrapper[4806]: E1125 14:55:18.339869 4806 secret.go:188] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition
Nov 25 14:55:18 crc kubenswrapper[4806]: E1125 14:55:18.339959 4806 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition
Nov 25 14:55:18 crc kubenswrapper[4806]: E1125 14:55:18.339880 4806 secret.go:188] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition
Nov 25 14:55:18 crc kubenswrapper[4806]: E1125 14:55:18.339968 4806 secret.go:188] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition
Nov 25 14:55:18 crc kubenswrapper[4806]: E1125 14:55:18.339972 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c14a961b-4eb5-4a10-abe7-bdd5ddff30bc-marketplace-trusted-ca podName:c14a961b-4eb5-4a10-abe7-bdd5ddff30bc nodeName:}" failed. No retries permitted until 2025-11-25 14:55:18.839941088 +0000 UTC m=+151.492083499 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/c14a961b-4eb5-4a10-abe7-bdd5ddff30bc-marketplace-trusted-ca") pod "marketplace-operator-79b997595-gm728" (UID: "c14a961b-4eb5-4a10-abe7-bdd5ddff30bc") : failed to sync configmap cache: timed out waiting for the condition
Nov 25 14:55:18 crc kubenswrapper[4806]: E1125 14:55:18.340099 4806 secret.go:188] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition
Nov 25 14:55:18 crc kubenswrapper[4806]: E1125 14:55:18.340119 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72d314ec-8059-4f5b-b4b7-91372748623e-signing-key podName:72d314ec-8059-4f5b-b4b7-91372748623e nodeName:}" failed. No retries permitted until 2025-11-25 14:55:18.840096303 +0000 UTC m=+151.492238714 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/72d314ec-8059-4f5b-b4b7-91372748623e-signing-key") pod "service-ca-9c57cc56f-lgjgk" (UID: "72d314ec-8059-4f5b-b4b7-91372748623e") : failed to sync secret cache: timed out waiting for the condition
Nov 25 14:55:18 crc kubenswrapper[4806]: E1125 14:55:18.340122 4806 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: failed to sync secret cache: timed out waiting for the condition
Nov 25 14:55:18 crc kubenswrapper[4806]: E1125 14:55:18.339915 4806 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition
Nov 25 14:55:18 crc kubenswrapper[4806]: E1125 14:55:18.339970 4806 secret.go:188] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition
Nov 25 14:55:18 crc kubenswrapper[4806]: E1125 14:55:18.339833 4806 secret.go:188] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition
Nov 25 14:55:18 crc kubenswrapper[4806]: E1125 14:55:18.340138 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1f1e0355-7806-4025-88f6-992756ffbe86-config-volume podName:1f1e0355-7806-4025-88f6-992756ffbe86 nodeName:}" failed. No retries permitted until 2025-11-25 14:55:18.840131224 +0000 UTC m=+151.492273635 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1f1e0355-7806-4025-88f6-992756ffbe86-config-volume") pod "dns-default-cszqz" (UID: "1f1e0355-7806-4025-88f6-992756ffbe86") : failed to sync configmap cache: timed out waiting for the condition
Nov 25 14:55:18 crc kubenswrapper[4806]: E1125 14:55:18.340506 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d58b6685-ca1a-4f73-a821-f5c4c37264ec-cert podName:d58b6685-ca1a-4f73-a821-f5c4c37264ec nodeName:}" failed. No retries permitted until 2025-11-25 14:55:18.840471294 +0000 UTC m=+151.492613855 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d58b6685-ca1a-4f73-a821-f5c4c37264ec-cert") pod "ingress-canary-8t729" (UID: "d58b6685-ca1a-4f73-a821-f5c4c37264ec") : failed to sync secret cache: timed out waiting for the condition
Nov 25 14:55:18 crc kubenswrapper[4806]: E1125 14:55:18.340543 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c14a961b-4eb5-4a10-abe7-bdd5ddff30bc-marketplace-operator-metrics podName:c14a961b-4eb5-4a10-abe7-bdd5ddff30bc nodeName:}" failed. No retries permitted until 2025-11-25 14:55:18.840530326 +0000 UTC m=+151.492672947 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/c14a961b-4eb5-4a10-abe7-bdd5ddff30bc-marketplace-operator-metrics") pod "marketplace-operator-79b997595-gm728" (UID: "c14a961b-4eb5-4a10-abe7-bdd5ddff30bc") : failed to sync secret cache: timed out waiting for the condition
Nov 25 14:55:18 crc kubenswrapper[4806]: E1125 14:55:18.340567 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f1e0355-7806-4025-88f6-992756ffbe86-metrics-tls podName:1f1e0355-7806-4025-88f6-992756ffbe86 nodeName:}" failed. No retries permitted until 2025-11-25 14:55:18.840556866 +0000 UTC m=+151.492699487 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/1f1e0355-7806-4025-88f6-992756ffbe86-metrics-tls") pod "dns-default-cszqz" (UID: "1f1e0355-7806-4025-88f6-992756ffbe86") : failed to sync secret cache: timed out waiting for the condition
Nov 25 14:55:18 crc kubenswrapper[4806]: E1125 14:55:18.340588 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce6c946f-c804-4b57-bc37-8169c677e231-package-server-manager-serving-cert podName:ce6c946f-c804-4b57-bc37-8169c677e231 nodeName:}" failed. No retries permitted until 2025-11-25 14:55:18.840577017 +0000 UTC m=+151.492719428 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/ce6c946f-c804-4b57-bc37-8169c677e231-package-server-manager-serving-cert") pod "package-server-manager-789f6589d5-4s68g" (UID: "ce6c946f-c804-4b57-bc37-8169c677e231") : failed to sync secret cache: timed out waiting for the condition
Nov 25 14:55:18 crc kubenswrapper[4806]: E1125 14:55:18.340608 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/72d314ec-8059-4f5b-b4b7-91372748623e-signing-cabundle podName:72d314ec-8059-4f5b-b4b7-91372748623e nodeName:}" failed. No retries permitted until 2025-11-25 14:55:18.840600218 +0000 UTC m=+151.492742629 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/72d314ec-8059-4f5b-b4b7-91372748623e-signing-cabundle") pod "service-ca-9c57cc56f-lgjgk" (UID: "72d314ec-8059-4f5b-b4b7-91372748623e") : failed to sync configmap cache: timed out waiting for the condition
Nov 25 14:55:18 crc kubenswrapper[4806]: E1125 14:55:18.340624 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f909c09-273f-48a4-8ef1-eb80eb473c5e-node-bootstrap-token podName:7f909c09-273f-48a4-8ef1-eb80eb473c5e nodeName:}" failed. No retries permitted until 2025-11-25 14:55:18.840616118 +0000 UTC m=+151.492758529 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/7f909c09-273f-48a4-8ef1-eb80eb473c5e-node-bootstrap-token") pod "machine-config-server-jw49k" (UID: "7f909c09-273f-48a4-8ef1-eb80eb473c5e") : failed to sync secret cache: timed out waiting for the condition
Nov 25 14:55:18 crc kubenswrapper[4806]: E1125 14:55:18.340640 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7f909c09-273f-48a4-8ef1-eb80eb473c5e-certs podName:7f909c09-273f-48a4-8ef1-eb80eb473c5e nodeName:}" failed. No retries permitted until 2025-11-25 14:55:18.840631939 +0000 UTC m=+151.492774350 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/7f909c09-273f-48a4-8ef1-eb80eb473c5e-certs") pod "machine-config-server-jw49k" (UID: "7f909c09-273f-48a4-8ef1-eb80eb473c5e") : failed to sync secret cache: timed out waiting for the condition
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.341996 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.342330 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.346848 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.349603 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 14:55:18 crc kubenswrapper[4806]: E1125 14:55:18.349836 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:18.849799838 +0000 UTC m=+151.501942239 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.350033 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp"
Nov 25 14:55:18 crc kubenswrapper[4806]: E1125 14:55:18.353568 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:18.853539427 +0000 UTC m=+151.505681838 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.374796 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.385426 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hcfmr"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.387530 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.412603 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.423939 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.430222 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.452952 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 14:55:18 crc kubenswrapper[4806]: E1125 14:55:18.454528 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:18.953894929 +0000 UTC m=+151.606037340 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.457388 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.469933 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.490855 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.492176 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trxgq"]
Nov 25 14:55:18 crc kubenswrapper[4806]: W1125 14:55:18.501508 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2be4e761_7ffb_42b6_8656_8f591d749624.slice/crio-f49a880d9c2e1dcf1573c29f7815be5515b3793535df174b11a987a9845eabf4 WatchSource:0}: Error finding container f49a880d9c2e1dcf1573c29f7815be5515b3793535df174b11a987a9845eabf4: Status 404 returned error can't find the container with id f49a880d9c2e1dcf1573c29f7815be5515b3793535df174b11a987a9845eabf4
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.507144 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.526726 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.545747 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-g6w68"]
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.546616 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.556633 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp"
Nov 25 14:55:18 crc kubenswrapper[4806]: E1125 14:55:18.557075 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:19.057058795 +0000 UTC m=+151.709201206 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.565707 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.578845 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zfhjl"]
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.587388 4806 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.601086 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gjhkx"]
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.607412 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.627474 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.647296 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.658068 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 14:55:18 crc kubenswrapper[4806]: E1125 14:55:18.658431 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:19.158398027 +0000 UTC m=+151.810540438 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.658615 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.659036 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-hcfmr"]
Nov 25 14:55:18 crc kubenswrapper[4806]: E1125 14:55:18.659404 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:19.159386056 +0000 UTC m=+151.811528467 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.666496 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Nov 25 14:55:18 crc kubenswrapper[4806]: W1125 14:55:18.670304 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a93da81_98cb_4a53_9c02_60cc144ebf9d.slice/crio-bc748a075c19b43fb96c07718011257484dfcf0032895f411d6e965df57cdefd WatchSource:0}: Error finding container bc748a075c19b43fb96c07718011257484dfcf0032895f411d6e965df57cdefd: Status 404 returned error can't find the container with id bc748a075c19b43fb96c07718011257484dfcf0032895f411d6e965df57cdefd
Nov 25 14:55:18 crc kubenswrapper[4806]: W1125 14:55:18.671050 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod307ebecf_190d_447f_ac14_28516ef87e6a.slice/crio-f85f8fbf098d46b6886dbce91c2280a4e0f4bf13bc79491aa51ed6be899d5ac9 WatchSource:0}: Error finding container f85f8fbf098d46b6886dbce91c2280a4e0f4bf13bc79491aa51ed6be899d5ac9: Status 404 returned error can't find the container with id f85f8fbf098d46b6886dbce91c2280a4e0f4bf13bc79491aa51ed6be899d5ac9
Nov 25 14:55:18 crc kubenswrapper[4806]: W1125 14:55:18.675066 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf49c7a82_aef3_47bf_a1bd_8b443b98be2d.slice/crio-e6a3f487821ac00fc7e69a23caf01b7f4b71eddea308d281efd61fcb0735fd8f WatchSource:0}: Error finding container e6a3f487821ac00fc7e69a23caf01b7f4b71eddea308d281efd61fcb0735fd8f: Status 404 returned error can't find the container with id e6a3f487821ac00fc7e69a23caf01b7f4b71eddea308d281efd61fcb0735fd8f
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.685662 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.697737 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-bn2sz"]
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.704607 4806 request.go:700] Waited for 1.870704124s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.706810 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.726944 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.746959 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.750983 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hcfmr" event={"ID":"be0fd1be-42ae-4954-99f6-14807b522398","Type":"ContainerStarted","Data":"3f9f59e16baf2e296ac3e2877944a3e648144afbda5ddf9cbd0faf79d72638fc"}
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.751981 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zfhjl" event={"ID":"307ebecf-190d-447f-ac14-28516ef87e6a","Type":"ContainerStarted","Data":"f85f8fbf098d46b6886dbce91c2280a4e0f4bf13bc79491aa51ed6be899d5ac9"}
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.752843 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-4c9r4" event={"ID":"4bb1d689-2d28-457a-9c48-0b21c3ac56b2","Type":"ContainerStarted","Data":"0e9d16718feaa4f8f669bb7887b5e404d5c8aa4a381a099443849f52b2b1f104"}
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.753951 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trxgq" event={"ID":"2be4e761-7ffb-42b6-8656-8f591d749624","Type":"ContainerStarted","Data":"f49a880d9c2e1dcf1573c29f7815be5515b3793535df174b11a987a9845eabf4"}
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.754844 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-g6w68" event={"ID":"3a93da81-98cb-4a53-9c02-60cc144ebf9d","Type":"ContainerStarted","Data":"bc748a075c19b43fb96c07718011257484dfcf0032895f411d6e965df57cdefd"}
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.755705 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gjhkx" event={"ID":"f49c7a82-aef3-47bf-a1bd-8b443b98be2d","Type":"ContainerStarted","Data":"e6a3f487821ac00fc7e69a23caf01b7f4b71eddea308d281efd61fcb0735fd8f"}
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.759465 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 14:55:18 crc kubenswrapper[4806]: E1125 14:55:18.760156 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:19.260139221 +0000 UTC m=+151.912281632 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.766057 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.803083 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcmmp\" (UniqueName: \"kubernetes.io/projected/923b096b-4da2-4e3e-8c86-b3715c249ac0-kube-api-access-qcmmp\") pod \"authentication-operator-69f744f599-xklng\" (UID: \"923b096b-4da2-4e3e-8c86-b3715c249ac0\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-xklng"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.828529 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v5pb\" (UniqueName: \"kubernetes.io/projected/f9b1a29e-c5b3-45fd-9082-b46293956184-kube-api-access-7v5pb\") pod \"downloads-7954f5f757-xx6dj\" (UID: \"f9b1a29e-c5b3-45fd-9082-b46293956184\") " pod="openshift-console/downloads-7954f5f757-xx6dj"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.841520 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l6n4\" (UniqueName: \"kubernetes.io/projected/e7c6a7c5-103e-4287-8e86-a7dbf2b48daf-kube-api-access-8l6n4\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.860934 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c14a961b-4eb5-4a10-abe7-bdd5ddff30bc-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gm728\" (UID: \"c14a961b-4eb5-4a10-abe7-bdd5ddff30bc\") " pod="openshift-marketplace/marketplace-operator-79b997595-gm728"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.860983 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.861016 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/72d314ec-8059-4f5b-b4b7-91372748623e-signing-key\") pod \"service-ca-9c57cc56f-lgjgk\" (UID: \"72d314ec-8059-4f5b-b4b7-91372748623e\") " pod="openshift-service-ca/service-ca-9c57cc56f-lgjgk"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.861055 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d58b6685-ca1a-4f73-a821-f5c4c37264ec-cert\") pod \"ingress-canary-8t729\" (UID: \"d58b6685-ca1a-4f73-a821-f5c4c37264ec\") " pod="openshift-ingress-canary/ingress-canary-8t729"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.861099 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7f909c09-273f-48a4-8ef1-eb80eb473c5e-certs\") pod \"machine-config-server-jw49k\" (UID: \"7f909c09-273f-48a4-8ef1-eb80eb473c5e\") " pod="openshift-machine-config-operator/machine-config-server-jw49k"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.861168 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1f1e0355-7806-4025-88f6-992756ffbe86-config-volume\") pod \"dns-default-cszqz\" (UID: \"1f1e0355-7806-4025-88f6-992756ffbe86\") " pod="openshift-dns/dns-default-cszqz"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.861214 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c14a961b-4eb5-4a10-abe7-bdd5ddff30bc-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gm728\" (UID: \"c14a961b-4eb5-4a10-abe7-bdd5ddff30bc\") " pod="openshift-marketplace/marketplace-operator-79b997595-gm728"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.861233 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/72d314ec-8059-4f5b-b4b7-91372748623e-signing-cabundle\") pod \"service-ca-9c57cc56f-lgjgk\" (UID: \"72d314ec-8059-4f5b-b4b7-91372748623e\") " pod="openshift-service-ca/service-ca-9c57cc56f-lgjgk"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.861284 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7f909c09-273f-48a4-8ef1-eb80eb473c5e-node-bootstrap-token\") pod \"machine-config-server-jw49k\" (UID: \"7f909c09-273f-48a4-8ef1-eb80eb473c5e\") " pod="openshift-machine-config-operator/machine-config-server-jw49k"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.861308 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ce6c946f-c804-4b57-bc37-8169c677e231-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-4s68g\" (UID: \"ce6c946f-c804-4b57-bc37-8169c677e231\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4s68g"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.861363 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1f1e0355-7806-4025-88f6-992756ffbe86-metrics-tls\") pod \"dns-default-cszqz\" (UID: \"1f1e0355-7806-4025-88f6-992756ffbe86\") " pod="openshift-dns/dns-default-cszqz"
Nov 25 14:55:18 crc kubenswrapper[4806]: E1125 14:55:18.862148 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:19.362130862 +0000 UTC m=+152.014273273 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.863061 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c14a961b-4eb5-4a10-abe7-bdd5ddff30bc-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gm728\" (UID: \"c14a961b-4eb5-4a10-abe7-bdd5ddff30bc\") " pod="openshift-marketplace/marketplace-operator-79b997595-gm728"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.863996 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/72d314ec-8059-4f5b-b4b7-91372748623e-signing-cabundle\") pod \"service-ca-9c57cc56f-lgjgk\" (UID: \"72d314ec-8059-4f5b-b4b7-91372748623e\") " pod="openshift-service-ca/service-ca-9c57cc56f-lgjgk"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.864447 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmcpp\" (UniqueName: \"kubernetes.io/projected/a81fbfae-81cd-4b3a-a2ef-771ca4884793-kube-api-access-fmcpp\") pod \"console-operator-58897d9998-p957m\" (UID: \"a81fbfae-81cd-4b3a-a2ef-771ca4884793\") " pod="openshift-console-operator/console-operator-58897d9998-p957m"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.864622 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/72d314ec-8059-4f5b-b4b7-91372748623e-signing-key\") pod \"service-ca-9c57cc56f-lgjgk\" (UID: \"72d314ec-8059-4f5b-b4b7-91372748623e\") " pod="openshift-service-ca/service-ca-9c57cc56f-lgjgk"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.865613 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7f909c09-273f-48a4-8ef1-eb80eb473c5e-node-bootstrap-token\") pod \"machine-config-server-jw49k\" (UID: \"7f909c09-273f-48a4-8ef1-eb80eb473c5e\") " pod="openshift-machine-config-operator/machine-config-server-jw49k"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.865643 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d58b6685-ca1a-4f73-a821-f5c4c37264ec-cert\") pod \"ingress-canary-8t729\" (UID: \"d58b6685-ca1a-4f73-a821-f5c4c37264ec\") " pod="openshift-ingress-canary/ingress-canary-8t729"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.868529 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ce6c946f-c804-4b57-bc37-8169c677e231-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-4s68g\" (UID: \"ce6c946f-c804-4b57-bc37-8169c677e231\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4s68g"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.868536 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7f909c09-273f-48a4-8ef1-eb80eb473c5e-certs\") pod \"machine-config-server-jw49k\" (UID: \"7f909c09-273f-48a4-8ef1-eb80eb473c5e\") " pod="openshift-machine-config-operator/machine-config-server-jw49k"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.868642 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c14a961b-4eb5-4a10-abe7-bdd5ddff30bc-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gm728\" (UID: \"c14a961b-4eb5-4a10-abe7-bdd5ddff30bc\") " pod="openshift-marketplace/marketplace-operator-79b997595-gm728"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.869774 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-p957m"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.878518 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-xx6dj"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.882640 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e7c6a7c5-103e-4287-8e86-a7dbf2b48daf-bound-sa-token\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.924776 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rfch\" (UniqueName: \"kubernetes.io/projected/3e22d0ac-ad84-41cc-9e33-de5c90e61f2c-kube-api-access-5rfch\") pod \"catalog-operator-68c6474976-9shgk\" (UID: \"3e22d0ac-ad84-41cc-9e33-de5c90e61f2c\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9shgk"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.934941 4806 patch_prober.go:28] interesting pod/machine-config-daemon-kclf8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.934993 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.943172 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt864\" (UniqueName: \"kubernetes.io/projected/7f5cd5de-2e48-4c15-9c5e-f20368bc172b-kube-api-access-jt864\") pod \"control-plane-machine-set-operator-78cbb6b69f-6hqx6\" (UID: \"7f5cd5de-2e48-4c15-9c5e-f20368bc172b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6hqx6"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.962042 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 14:55:18 crc kubenswrapper[4806]: E1125 14:55:18.962216 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:19.462182296 +0000 UTC m=+152.114324717 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.962397 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp"
Nov 25 14:55:18 crc kubenswrapper[4806]: E1125 14:55:18.962823 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:19.462812455 +0000 UTC m=+152.114954936 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.962907 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-277xv\" (UniqueName: \"kubernetes.io/projected/eeac792f-d07c-446b-8dee-00f726ea273c-kube-api-access-277xv\") pod \"collect-profiles-29401365-h6lh4\" (UID: \"eeac792f-d07c-446b-8dee-00f726ea273c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-h6lh4"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.975187 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1f1e0355-7806-4025-88f6-992756ffbe86-config-volume\") pod \"dns-default-cszqz\" (UID: \"1f1e0355-7806-4025-88f6-992756ffbe86\") " pod="openshift-dns/dns-default-cszqz"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.981975 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1f1e0355-7806-4025-88f6-992756ffbe86-metrics-tls\") pod \"dns-default-cszqz\" (UID: \"1f1e0355-7806-4025-88f6-992756ffbe86\") " pod="openshift-dns/dns-default-cszqz"
Nov 25 14:55:18 crc kubenswrapper[4806]: I1125 14:55:18.987804 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sl8r\" (UniqueName: \"kubernetes.io/projected/3f3d083b-5922-4da3-ad9e-e5f323836cba-kube-api-access-9sl8r\") pod \"ingress-operator-5b745b69d9-ptx4l\" (UID: \"3f3d083b-5922-4da3-ad9e-e5f323836cba\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ptx4l"
Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.002196 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-xklng"
Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.002941 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qq74x\" (UniqueName: \"kubernetes.io/projected/4e9e656c-2e2c-4ed4-b720-8fdb639a029d-kube-api-access-qq74x\") pod \"router-default-5444994796-kfst9\" (UID: \"4e9e656c-2e2c-4ed4-b720-8fdb639a029d\") " pod="openshift-ingress/router-default-5444994796-kfst9"
Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.025887 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx4mf\" (UniqueName: \"kubernetes.io/projected/7f909c09-273f-48a4-8ef1-eb80eb473c5e-kube-api-access-hx4mf\") pod \"machine-config-server-jw49k\" (UID: \"7f909c09-273f-48a4-8ef1-eb80eb473c5e\") " pod="openshift-machine-config-operator/machine-config-server-jw49k"
Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.040021 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-kfst9"
Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.043162 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79zpf\" (UniqueName: \"kubernetes.io/projected/b43d27a6-a9d7-484a-a8d4-f12e06bce31f-kube-api-access-79zpf\") pod \"multus-admission-controller-857f4d67dd-7jcqc\" (UID: \"b43d27a6-a9d7-484a-a8d4-f12e06bce31f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7jcqc"
Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.059605 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-7jcqc"
Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.063549 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 14:55:19 crc kubenswrapper[4806]: E1125 14:55:19.063695 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:19.563670413 +0000 UTC m=+152.215812824 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.064172 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kg5m\" (UniqueName: \"kubernetes.io/projected/2fe464df-b275-4f86-8750-6052a803b024-kube-api-access-2kg5m\") pod \"packageserver-d55dfcdfc-28dbr\" (UID: \"2fe464df-b275-4f86-8750-6052a803b024\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-28dbr"
Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.064741 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp"
Nov 25 14:55:19 crc kubenswrapper[4806]: E1125 14:55:19.065973 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:19.56594936 +0000 UTC m=+152.218091771 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.083792 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9shgk"
Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.084145 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4sh9\" (UniqueName: \"kubernetes.io/projected/3ad5dac9-54d3-4435-8f38-77e91d1965e0-kube-api-access-n4sh9\") pod \"cluster-samples-operator-665b6dd947-gfbwx\" (UID: \"3ad5dac9-54d3-4435-8f38-77e91d1965e0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfbwx"
Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.106139 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6hqx6"
Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.119972 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-h6lh4"
Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.120140 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kczsc\" (UniqueName: \"kubernetes.io/projected/b8400987-b2f7-44fe-b1b3-8689c2465cd3-kube-api-access-kczsc\") pod \"console-f9d7485db-6j244\" (UID: \"b8400987-b2f7-44fe-b1b3-8689c2465cd3\") " pod="openshift-console/console-f9d7485db-6j244"
Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.120669 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-28dbr"
Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.126329 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk8qz\" (UniqueName: \"kubernetes.io/projected/76a76f7a-7f38-4aac-8a57-a60f332306cb-kube-api-access-kk8qz\") pod \"etcd-operator-b45778765-zf4ph\" (UID: \"76a76f7a-7f38-4aac-8a57-a60f332306cb\") " pod="openshift-etcd-operator/etcd-operator-b45778765-zf4ph"
Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.146712 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwpf7\" (UniqueName: \"kubernetes.io/projected/83db970d-f5a9-4a8f-9c65-0cd2500331b1-kube-api-access-hwpf7\") pod \"controller-manager-879f6c89f-k8p4x\" (UID: \"83db970d-f5a9-4a8f-9c65-0cd2500331b1\") " pod="openshift-controller-manager/controller-manager-879f6c89f-k8p4x"
Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.166597 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.166770 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6zsz\" (UniqueName: \"kubernetes.io/projected/2e32043e-a11b-473b-b42a-ecc01450a942-kube-api-access-q6zsz\") pod \"machine-config-operator-74547568cd-j4l9j\" (UID: \"2e32043e-a11b-473b-b42a-ecc01450a942\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-j4l9j"
Nov 25 14:55:19 crc kubenswrapper[4806]: E1125 14:55:19.166933 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:19.666912251 +0000 UTC m=+152.319054672 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.167239 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp"
Nov 25 14:55:19 crc kubenswrapper[4806]: E1125 14:55:19.167894 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:19.667874139 +0000 UTC m=+152.320016550 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.191525 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68hf2\" (UniqueName: \"kubernetes.io/projected/72d314ec-8059-4f5b-b4b7-91372748623e-kube-api-access-68hf2\") pod \"service-ca-9c57cc56f-lgjgk\" (UID: \"72d314ec-8059-4f5b-b4b7-91372748623e\") " pod="openshift-service-ca/service-ca-9c57cc56f-lgjgk"
Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.202379 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-jw49k"
Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.229469 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-724h5\" (UniqueName: \"kubernetes.io/projected/ce6c946f-c804-4b57-bc37-8169c677e231-kube-api-access-724h5\") pod \"package-server-manager-789f6589d5-4s68g\" (UID: \"ce6c946f-c804-4b57-bc37-8169c677e231\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4s68g"
Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.230082 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-k8p4x"
Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.244107 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fz94\" (UniqueName: \"kubernetes.io/projected/7142eedd-c71b-4c92-97a8-def92a981529-kube-api-access-5fz94\") pod \"service-ca-operator-777779d784-h4m8m\" (UID: \"7142eedd-c71b-4c92-97a8-def92a981529\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-h4m8m"
Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.246872 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfbwx"
Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.256907 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpjrb\" (UniqueName: \"kubernetes.io/projected/9a86e8c4-3e5a-4fd1-bad8-d314f474c4ee-kube-api-access-fpjrb\") pod \"csi-hostpathplugin-x92cw\" (UID: \"9a86e8c4-3e5a-4fd1-bad8-d314f474c4ee\") " pod="hostpath-provisioner/csi-hostpathplugin-x92cw"
Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.269668 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 14:55:19 crc kubenswrapper[4806]: E1125 14:55:19.277540 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:19.777502494 +0000 UTC m=+152.429644905 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.277896 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp"
Nov 25 14:55:19 crc kubenswrapper[4806]: E1125 14:55:19.278483 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:19.778463942 +0000 UTC m=+152.430606353 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.279220 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4qnq\" (UniqueName: \"kubernetes.io/projected/40730d61-24e2-4810-89f7-0a34fe204440-kube-api-access-b4qnq\") pod \"apiserver-7bbb656c7d-mvkmg\" (UID: \"40730d61-24e2-4810-89f7-0a34fe204440\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mvkmg"
Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.282255 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-6j244"
Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.289970 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-zf4ph"
Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.291003 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/41fbdcab-7837-4273-8aaa-70b4e1667988-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-fhvbk\" (UID: \"41fbdcab-7837-4273-8aaa-70b4e1667988\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fhvbk"
Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.304756 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fhvbk"
Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.311493 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgjhv\" (UniqueName: \"kubernetes.io/projected/0aa34022-429c-4bba-91a8-229a7b634a50-kube-api-access-bgjhv\") pod \"machine-approver-56656f9798-gjw2g\" (UID: \"0aa34022-429c-4bba-91a8-229a7b634a50\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gjw2g"
Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.318742 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-xklng"]
Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.326484 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3f3d083b-5922-4da3-ad9e-e5f323836cba-bound-sa-token\") pod \"ingress-operator-5b745b69d9-ptx4l\" (UID: \"3f3d083b-5922-4da3-ad9e-e5f323836cba\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ptx4l"
Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.353443 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-675rv\" (UniqueName: \"kubernetes.io/projected/1531828a-4e80-4d77-92c0-99e9ae888fae-kube-api-access-675rv\") pod \"migrator-59844c95c7-2nrmh\" (UID: \"1531828a-4e80-4d77-92c0-99e9ae888fae\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2nrmh"
Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.369362 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-h4m8m"
Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.375068 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-j4l9j"
Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.378736 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 14:55:19 crc kubenswrapper[4806]: E1125 14:55:19.379266 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:19.879247828 +0000 UTC m=+152.531390239 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.387468 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5762\" (UniqueName: \"kubernetes.io/projected/17ede0a7-8694-488d-822c-47e76211a19f-kube-api-access-b5762\") pod \"olm-operator-6b444d44fb-tx5m5\" (UID: \"17ede0a7-8694-488d-822c-47e76211a19f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tx5m5" Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.394514 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4wjr\" (UniqueName: \"kubernetes.io/projected/f394b01a-b495-4acf-bca9-0b23347a3358-kube-api-access-k4wjr\") pod \"machine-api-operator-5694c8668f-9tjs2\" (UID: \"f394b01a-b495-4acf-bca9-0b23347a3358\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-9tjs2" Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.406704 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/024b2329-b8db-400c-bbaa-f77ba9a3bdae-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-vj65b\" (UID: \"024b2329-b8db-400c-bbaa-f77ba9a3bdae\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vj65b" Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.437202 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-lgjgk" Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.438714 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnlbk\" (UniqueName: \"kubernetes.io/projected/16a8fa04-87f4-46fa-a310-aa62275684c0-kube-api-access-xnlbk\") pod \"machine-config-controller-84d6567774-grv4v\" (UID: \"16a8fa04-87f4-46fa-a310-aa62275684c0\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-grv4v" Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.444293 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4s68g" Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.452836 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr2pl\" (UniqueName: \"kubernetes.io/projected/c14a961b-4eb5-4a10-abe7-bdd5ddff30bc-kube-api-access-tr2pl\") pod \"marketplace-operator-79b997595-gm728\" (UID: \"c14a961b-4eb5-4a10-abe7-bdd5ddff30bc\") " pod="openshift-marketplace/marketplace-operator-79b997595-gm728" Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.467211 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr7w6\" (UniqueName: \"kubernetes.io/projected/d3f9429a-5f3e-45bf-b7cc-dea3bee3e957-kube-api-access-cr7w6\") pod \"route-controller-manager-6576b87f9c-p5tx2\" (UID: \"d3f9429a-5f3e-45bf-b7cc-dea3bee3e957\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5tx2" Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.475689 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-xx6dj"] Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.478468 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-p957m"] Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.484333 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:19 crc kubenswrapper[4806]: E1125 14:55:19.484767 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:19.984750622 +0000 UTC m=+152.636893033 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.485243 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-x92cw" Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.501058 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0fcbcb3e-8a88-465d-9b1e-8e547844bd93-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-n727s\" (UID: \"0fcbcb3e-8a88-465d-9b1e-8e547844bd93\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n727s" Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.504169 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hd7x8\" (UniqueName: \"kubernetes.io/projected/d58b6685-ca1a-4f73-a821-f5c4c37264ec-kube-api-access-hd7x8\") pod \"ingress-canary-8t729\" (UID: \"d58b6685-ca1a-4f73-a821-f5c4c37264ec\") " pod="openshift-ingress-canary/ingress-canary-8t729" Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.521110 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5tx2" Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.527073 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld6b7\" (UniqueName: \"kubernetes.io/projected/1f1e0355-7806-4025-88f6-992756ffbe86-kube-api-access-ld6b7\") pod \"dns-default-cszqz\" (UID: \"1f1e0355-7806-4025-88f6-992756ffbe86\") " pod="openshift-dns/dns-default-cszqz" Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.538659 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-9tjs2" Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.541377 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrj8h\" (UniqueName: \"kubernetes.io/projected/97b5ca54-68e2-4db9-84fa-a77e3f61735e-kube-api-access-xrj8h\") pod \"kube-storage-version-migrator-operator-b67b599dd-5ppwt\" (UID: \"97b5ca54-68e2-4db9-84fa-a77e3f61735e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ppwt" Nov 25 14:55:19 crc kubenswrapper[4806]: W1125 14:55:19.544990 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9b1a29e_c5b3_45fd_9082_b46293956184.slice/crio-1acc31358c0d2faf843823fa164ee8670af7784e8e13aa36b9c8d525e3f64278 WatchSource:0}: Error finding container 1acc31358c0d2faf843823fa164ee8670af7784e8e13aa36b9c8d525e3f64278: Status 404 returned error can't find the container with id 1acc31358c0d2faf843823fa164ee8670af7784e8e13aa36b9c8d525e3f64278 Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.554460 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mvkmg" Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.565227 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gjw2g" Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.575534 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ptx4l" Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.585934 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:19 crc kubenswrapper[4806]: E1125 14:55:19.586514 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:20.086483566 +0000 UTC m=+152.738625977 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.596858 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n727s" Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.612671 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vj65b" Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.620846 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ppwt" Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.638651 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2nrmh" Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.648644 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tx5m5" Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.687946 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:19 crc kubenswrapper[4806]: E1125 14:55:19.688395 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:20.188383245 +0000 UTC m=+152.840525656 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.693188 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-grv4v" Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.729728 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gm728" Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.758103 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-cszqz" Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.788839 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:19 crc kubenswrapper[4806]: E1125 14:55:19.788984 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:20.288956264 +0000 UTC m=+152.941098675 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.789079 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.789246 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trxgq" event={"ID":"2be4e761-7ffb-42b6-8656-8f591d749624","Type":"ContainerStarted","Data":"de37b7b912793bd0ee43412f990a271ceb6ac4d3e9cfe4fb81a549dd7108e55d"} Nov 25 14:55:19 crc kubenswrapper[4806]: E1125 14:55:19.789426 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:20.289414158 +0000 UTC m=+152.941556569 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.791029 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-xx6dj" event={"ID":"f9b1a29e-c5b3-45fd-9082-b46293956184","Type":"ContainerStarted","Data":"1acc31358c0d2faf843823fa164ee8670af7784e8e13aa36b9c8d525e3f64278"} Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.793137 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-xklng" event={"ID":"923b096b-4da2-4e3e-8c86-b3715c249ac0","Type":"ContainerStarted","Data":"c5181da6422b92908936101972362febc8cfb6ce64c560822924bd1c8ec4a06c"} Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.797027 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-8t729" Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.799926 4806 generic.go:334] "Generic (PLEG): container finished" podID="3a93da81-98cb-4a53-9c02-60cc144ebf9d" containerID="abdf414a40ae76d0a02016e27c1d28b8bc586c31875e578f7933bc06360c1a14" exitCode=0 Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.799994 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-g6w68" event={"ID":"3a93da81-98cb-4a53-9c02-60cc144ebf9d","Type":"ContainerDied","Data":"abdf414a40ae76d0a02016e27c1d28b8bc586c31875e578f7933bc06360c1a14"} Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.802781 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-kfst9" event={"ID":"4e9e656c-2e2c-4ed4-b720-8fdb639a029d","Type":"ContainerStarted","Data":"54cf02f1f64d95142d9f94470d85b0c5ce4dd701bd3752d6f199c566845ad444"} Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.802859 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-kfst9" event={"ID":"4e9e656c-2e2c-4ed4-b720-8fdb639a029d","Type":"ContainerStarted","Data":"56cecb25b2597f5d4faf3e4485f3f9cfa5841214252bcb89a9f94fd844dd93ab"} Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.809278 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-p957m" event={"ID":"a81fbfae-81cd-4b3a-a2ef-771ca4884793","Type":"ContainerStarted","Data":"02b5c1f3eb55bc1b12e9b959dcbc48745b829af2f16d7376db5f3cf6a78d08fb"} Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.831554 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gjhkx" event={"ID":"f49c7a82-aef3-47bf-a1bd-8b443b98be2d","Type":"ContainerStarted","Data":"6ae756b1f56c2cff9d401df02a60d4da9fdfe80fd2d0de8c45c9fc7774ce94a6"} Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.845955 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9shgk"] Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.860443 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zfhjl" event={"ID":"307ebecf-190d-447f-ac14-28516ef87e6a","Type":"ContainerStarted","Data":"5429f5a863e68a9b88d162729433c1b25b79ef660a739b7666ed906d8d769dc8"} Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.867052 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-4c9r4" event={"ID":"4bb1d689-2d28-457a-9c48-0b21c3ac56b2","Type":"ContainerStarted","Data":"764e9b7857a9fa4ba24e2b2191781ed2d9ea0a816f2db596931d4d40a9fccb07"} Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.885418 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz" event={"ID":"ca7da513-6cf5-43fc-afbe-ab1c8e785130","Type":"ContainerStarted","Data":"2381a4dff84afcf0b68a5fa8c2b3deacc20b184290bc11612f5aa4588075a94b"} Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.885467 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz" event={"ID":"ca7da513-6cf5-43fc-afbe-ab1c8e785130","Type":"ContainerStarted","Data":"fcb05b9a4dcfee75c1c6e6cf53effecb6a44f613e0ebd64be2aaf216b3a8f44f"} Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.885922 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz" Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.886297 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-7jcqc"] Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.887521 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6hqx6"] Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.888400 4806 generic.go:334] "Generic (PLEG): container finished" podID="be0fd1be-42ae-4954-99f6-14807b522398" containerID="b430cf2d6f1bbaadee2a4f2fd0bb24a91cf5439c53e94fad60dc29cab156cee7" exitCode=0 Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.888753 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hcfmr" event={"ID":"be0fd1be-42ae-4954-99f6-14807b522398","Type":"ContainerDied","Data":"b430cf2d6f1bbaadee2a4f2fd0bb24a91cf5439c53e94fad60dc29cab156cee7"} Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.889676 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:19 crc kubenswrapper[4806]: E1125 14:55:19.890981 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:20.390958746 +0000 UTC m=+153.043101157 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.896289 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-jw49k" event={"ID":"7f909c09-273f-48a4-8ef1-eb80eb473c5e","Type":"ContainerStarted","Data":"01607da864c9d733b1ddf5cfb4bff661086cc4b338a4ef43a070dead493ba02f"} Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.896351 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-jw49k" event={"ID":"7f909c09-273f-48a4-8ef1-eb80eb473c5e","Type":"ContainerStarted","Data":"70700a5edafff1049c240166e71b2dd3519c5fc36078f7ba89d28cefad73b624"} Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.914118 4806 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-bn2sz container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.7:6443/healthz\": dial tcp 10.217.0.7:6443: connect: connection refused" start-of-body= Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.914195 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz" podUID="ca7da513-6cf5-43fc-afbe-ab1c8e785130" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.7:6443/healthz\": dial tcp 10.217.0.7:6443: connect: connection refused" Nov 25 14:55:19 crc kubenswrapper[4806]: W1125 14:55:19.917111 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0aa34022_429c_4bba_91a8_229a7b634a50.slice/crio-2220642efdca18dcde9accc5561f75793922a2bfceebfe05d8a6cbaa25485665 WatchSource:0}: Error finding container 2220642efdca18dcde9accc5561f75793922a2bfceebfe05d8a6cbaa25485665: Status 404 returned error can't find the container with id 2220642efdca18dcde9accc5561f75793922a2bfceebfe05d8a6cbaa25485665 Nov 25 14:55:19 crc kubenswrapper[4806]: W1125 14:55:19.926759 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e22d0ac_ad84_41cc_9e33_de5c90e61f2c.slice/crio-077a07d50fce4102bb7b607b0025590b5e775fb4f8d694bdf16662137c0fe64b WatchSource:0}: Error finding container 077a07d50fce4102bb7b607b0025590b5e775fb4f8d694bdf16662137c0fe64b: Status 404 returned error can't find the container with id 077a07d50fce4102bb7b607b0025590b5e775fb4f8d694bdf16662137c0fe64b Nov 25 14:55:19 crc kubenswrapper[4806]: W1125 14:55:19.936206 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f5cd5de_2e48_4c15_9c5e_f20368bc172b.slice/crio-92211f166bd397d946c0a314bf36799f9fe21a45b0a58cd75b703e06de976db2 WatchSource:0}: Error finding container 92211f166bd397d946c0a314bf36799f9fe21a45b0a58cd75b703e06de976db2: Status 404 returned error can't find the container with id 92211f166bd397d946c0a314bf36799f9fe21a45b0a58cd75b703e06de976db2 Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.979949 4806 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401365-h6lh4"] Nov 25 14:55:19 crc kubenswrapper[4806]: I1125 14:55:19.991458 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:19 crc kubenswrapper[4806]: E1125 14:55:19.993150 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:20.493138973 +0000 UTC m=+153.145281384 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:20 crc kubenswrapper[4806]: I1125 14:55:20.093137 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:20 crc kubenswrapper[4806]: E1125 14:55:20.093509 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:20.593488566 +0000 UTC m=+153.245630977 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:20 crc kubenswrapper[4806]: I1125 14:55:20.095410 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:20 crc kubenswrapper[4806]: E1125 14:55:20.095712 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:20.595703441 +0000 UTC m=+153.247845842 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:20 crc kubenswrapper[4806]: I1125 14:55:20.115076 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-zf4ph"] Nov 25 14:55:20 crc kubenswrapper[4806]: I1125 14:55:20.115115 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-28dbr"] Nov 25 14:55:20 crc kubenswrapper[4806]: I1125 14:55:20.115542 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-6j244"] Nov 25 14:55:20 crc kubenswrapper[4806]: I1125 14:55:20.123472 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-k8p4x"] Nov 25 14:55:20 crc kubenswrapper[4806]: I1125 14:55:20.124008 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-j4l9j"] Nov 25 14:55:20 crc kubenswrapper[4806]: W1125 14:55:20.176854 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeeac792f_d07c_446b_8dee_00f726ea273c.slice/crio-320da600ad7fe5a80dd6fd88bfc751e9c5c24ec0b9c46205a67fd40caadd2ef9 WatchSource:0}: Error finding container 320da600ad7fe5a80dd6fd88bfc751e9c5c24ec0b9c46205a67fd40caadd2ef9: Status 404 returned error can't find the container with id 320da600ad7fe5a80dd6fd88bfc751e9c5c24ec0b9c46205a67fd40caadd2ef9 Nov 25 14:55:20 crc kubenswrapper[4806]: I1125 14:55:20.197683 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:20 crc kubenswrapper[4806]: E1125 14:55:20.197966 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:20.69794744 +0000 UTC m=+153.350089851 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:20 crc kubenswrapper[4806]: I1125 14:55:20.277387 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4s68g"] Nov 25 14:55:20 crc kubenswrapper[4806]: I1125 14:55:20.283539 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-lgjgk"] Nov 25 14:55:20 crc kubenswrapper[4806]: I1125 14:55:20.293057 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfbwx"] Nov 25 14:55:20 crc kubenswrapper[4806]: I1125 14:55:20.298983 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:20 crc kubenswrapper[4806]: E1125 14:55:20.299413 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:20.799397825 +0000 UTC m=+153.451540236 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:20 crc kubenswrapper[4806]: I1125 14:55:20.400518 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:20 crc kubenswrapper[4806]: E1125 14:55:20.401112 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:20.901074777 +0000 UTC m=+153.553217188 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:20 crc kubenswrapper[4806]: I1125 14:55:20.448440 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-trxgq" podStartSLOduration=132.448415566 podStartE2EDuration="2m12.448415566s" podCreationTimestamp="2025-11-25 14:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:20.441591135 +0000 UTC m=+153.093733556" watchObservedRunningTime="2025-11-25 14:55:20.448415566 +0000 UTC m=+153.100557977" Nov 25 14:55:20 crc kubenswrapper[4806]: I1125 14:55:20.461694 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-gjhkx" podStartSLOduration=132.461673464 podStartE2EDuration="2m12.461673464s" podCreationTimestamp="2025-11-25 14:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:20.460041197 +0000 UTC m=+153.112183618" watchObservedRunningTime="2025-11-25 14:55:20.461673464 +0000 UTC m=+153.113815885" Nov 25 14:55:20 crc kubenswrapper[4806]: I1125 14:55:20.502929 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:20 crc kubenswrapper[4806]: E1125 14:55:20.503338 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:21.003296675 +0000 UTC m=+153.655439086 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:20 crc kubenswrapper[4806]: I1125 14:55:20.606553 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:20 crc kubenswrapper[4806]: E1125 14:55:20.606948 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:21.106924324 +0000 UTC m=+153.759066735 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:20 crc kubenswrapper[4806]: W1125 14:55:20.608559 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72d314ec_8059_4f5b_b4b7_91372748623e.slice/crio-13108d8796c229484839555c9ee2c4feff69c59b8b2d9dd5d309e20d9344b351 WatchSource:0}: Error finding container 13108d8796c229484839555c9ee2c4feff69c59b8b2d9dd5d309e20d9344b351: Status 404 returned error can't find the container with id 13108d8796c229484839555c9ee2c4feff69c59b8b2d9dd5d309e20d9344b351 Nov 25 14:55:20 crc kubenswrapper[4806]: I1125 14:55:20.662722 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-zfhjl" podStartSLOduration=132.66269454 podStartE2EDuration="2m12.66269454s" podCreationTimestamp="2025-11-25 14:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:20.661306979 +0000 UTC m=+153.313449390" watchObservedRunningTime="2025-11-25 14:55:20.66269454 +0000 UTC m=+153.314836951" Nov 25 14:55:20 crc kubenswrapper[4806]: I1125 14:55:20.694182 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-x92cw"] Nov 25 14:55:20 crc kubenswrapper[4806]: I1125 14:55:20.695812 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5tx2"] Nov 25 14:55:20 crc kubenswrapper[4806]: I1125 14:55:20.701050 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-ptx4l"] Nov 25 14:55:20 crc kubenswrapper[4806]: I1125 14:55:20.709787 4806 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:20 crc kubenswrapper[4806]: E1125 14:55:20.710469 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:21.210455211 +0000 UTC m=+153.862597622 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:20 crc kubenswrapper[4806]: I1125 14:55:20.718874 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fhvbk"] Nov 25 14:55:20 crc kubenswrapper[4806]: I1125 14:55:20.733149 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-h4m8m"] Nov 25 14:55:20 crc kubenswrapper[4806]: I1125 14:55:20.744099 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-9tjs2"] Nov 25 14:55:20 crc kubenswrapper[4806]: W1125 14:55:20.781644 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3f9429a_5f3e_45bf_b7cc_dea3bee3e957.slice/crio-8a2b25d91ae8e8578871bf34fc8a9d3c620bd78f0741a299d315043a9a10fa4b WatchSource:0}: Error finding container 8a2b25d91ae8e8578871bf34fc8a9d3c620bd78f0741a299d315043a9a10fa4b: Status 404 returned error can't find the container with id 8a2b25d91ae8e8578871bf34fc8a9d3c620bd78f0741a299d315043a9a10fa4b Nov 25 14:55:20 crc kubenswrapper[4806]: W1125 14:55:20.801032 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7142eedd_c71b_4c92_97a8_def92a981529.slice/crio-97df113234f638ae55fcec5fc955197c16615378aaa701ff18ed146681fd57bc WatchSource:0}: Error finding container 97df113234f638ae55fcec5fc955197c16615378aaa701ff18ed146681fd57bc: Status 404 returned error can't find the container with id 97df113234f638ae55fcec5fc955197c16615378aaa701ff18ed146681fd57bc Nov 25 14:55:20 crc kubenswrapper[4806]: I1125 14:55:20.810845 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:20 crc kubenswrapper[4806]: E1125 14:55:20.811221 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-25 14:55:21.311203616 +0000 UTC m=+153.963346027 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:20 crc kubenswrapper[4806]: I1125 14:55:20.837290 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-grv4v"] Nov 25 14:55:20 crc kubenswrapper[4806]: I1125 14:55:20.839457 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vj65b"] Nov 25 14:55:20 crc kubenswrapper[4806]: I1125 14:55:20.876511 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ppwt"] Nov 25 14:55:20 crc kubenswrapper[4806]: I1125 14:55:20.900064 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-mvkmg"] Nov 25 14:55:20 crc kubenswrapper[4806]: I1125 14:55:20.902340 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz" podStartSLOduration=132.902304298 podStartE2EDuration="2m12.902304298s" podCreationTimestamp="2025-11-25 14:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:20.897918189 +0000 UTC m=+153.550060610" watchObservedRunningTime="2025-11-25 14:55:20.902304298 +0000 UTC m=+153.554446709" Nov 25 14:55:20 crc kubenswrapper[4806]: I1125 14:55:20.913554 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:20 crc kubenswrapper[4806]: E1125 14:55:20.914098 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:21.414080323 +0000 UTC m=+154.066222734 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:55:20 crc kubenswrapper[4806]: I1125 14:55:20.954130 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gm728"]
Nov 25 14:55:20 crc kubenswrapper[4806]: I1125 14:55:20.979922 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-j4l9j" event={"ID":"2e32043e-a11b-473b-b42a-ecc01450a942","Type":"ContainerStarted","Data":"aad9a69a4f7947501f8cd9c8bf2c54117a30be77a4e1095dddfb93bfd87b878d"}
Nov 25 14:55:20 crc kubenswrapper[4806]: I1125 14:55:20.990136 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hcfmr" event={"ID":"be0fd1be-42ae-4954-99f6-14807b522398","Type":"ContainerStarted","Data":"6ed7080fe25291be4fde7841a08c1f55cf7901de1b44e422713f65bdfb6583da"}
Nov 25 14:55:20 crc kubenswrapper[4806]: I1125 14:55:20.992174 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-xklng" event={"ID":"923b096b-4da2-4e3e-8c86-b3715c249ac0","Type":"ContainerStarted","Data":"3a0e5f5a0590ab16807d72844dd87c594cadcbe4f19a1f9483d750fac5e177dc"}
Nov 25 14:55:20 crc kubenswrapper[4806]: I1125 14:55:20.995475 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9shgk" event={"ID":"3e22d0ac-ad84-41cc-9e33-de5c90e61f2c","Type":"ContainerStarted","Data":"23149d7c4b1c891c65312db61a4f0d4831b8277ad04f1404e711f45050c78257"}
Nov 25 14:55:20 crc kubenswrapper[4806]: I1125 14:55:20.995536 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9shgk" event={"ID":"3e22d0ac-ad84-41cc-9e33-de5c90e61f2c","Type":"ContainerStarted","Data":"077a07d50fce4102bb7b607b0025590b5e775fb4f8d694bdf16662137c0fe64b"}
Nov 25 14:55:20 crc kubenswrapper[4806]: I1125 14:55:20.996623 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-grv4v" event={"ID":"16a8fa04-87f4-46fa-a310-aa62275684c0","Type":"ContainerStarted","Data":"300f776c6da9ee6287b2d455de145e153ddf92bba28fc52ef900b59310d94d33"}
Nov 25 14:55:20 crc kubenswrapper[4806]: I1125 14:55:20.997494 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-h6lh4" event={"ID":"eeac792f-d07c-446b-8dee-00f726ea273c","Type":"ContainerStarted","Data":"320da600ad7fe5a80dd6fd88bfc751e9c5c24ec0b9c46205a67fd40caadd2ef9"}
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.003222 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-g6w68" event={"ID":"3a93da81-98cb-4a53-9c02-60cc144ebf9d","Type":"ContainerStarted","Data":"13297ce2df5964185469ca63aef054ec7ad83098dcb02285af92f8cf02ad311c"}
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.011708 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-7jcqc" event={"ID":"b43d27a6-a9d7-484a-a8d4-f12e06bce31f","Type":"ContainerStarted","Data":"b0726ff9f9d341d5f6c698ae9ba7e2236b89d9c07d225088c54933a724e1c045"}
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.014572 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.014686 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ptx4l" event={"ID":"3f3d083b-5922-4da3-ad9e-e5f323836cba","Type":"ContainerStarted","Data":"cf2ed9d851d55f91a16a13107eb5098b776f2c457cc29edae5114fd9e8db9a3c"}
Nov 25 14:55:21 crc kubenswrapper[4806]: E1125 14:55:21.016795 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:21.516122966 +0000 UTC m=+154.168265377 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.020151 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp"
Nov 25 14:55:21 crc kubenswrapper[4806]: E1125 14:55:21.022994 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:21.522973827 +0000 UTC m=+154.175116238 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.023751 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-jw49k" podStartSLOduration=5.023730839 podStartE2EDuration="5.023730839s" podCreationTimestamp="2025-11-25 14:55:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:21.016923889 +0000 UTC m=+153.669066300" watchObservedRunningTime="2025-11-25 14:55:21.023730839 +0000 UTC m=+153.675873250"
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.028775 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-2nrmh"]
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.030762 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6j244" event={"ID":"b8400987-b2f7-44fe-b1b3-8689c2465cd3","Type":"ContainerStarted","Data":"8f7f775dac024ec071ac39fbaac38bb03ffb868677b32fe3aaa6ba31e01f8405"}
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.031886 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n727s"]
Nov 25 14:55:21 crc kubenswrapper[4806]: W1125 14:55:21.032561 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc14a961b_4eb5_4a10_abe7_bdd5ddff30bc.slice/crio-9d3f05fce218e60204e82981da82c6aad5de6ff37630480238a4caf975fafc5a WatchSource:0}: Error finding container 9d3f05fce218e60204e82981da82c6aad5de6ff37630480238a4caf975fafc5a: Status 404 returned error can't find the container with id 9d3f05fce218e60204e82981da82c6aad5de6ff37630480238a4caf975fafc5a
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.038131 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-p957m" event={"ID":"a81fbfae-81cd-4b3a-a2ef-771ca4884793","Type":"ContainerStarted","Data":"6b28af5eb684bdcb6eac09f7c33dee55f73b8644a5856ba6c008f5e837503c6b"}
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.038793 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-p957m"
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.040119 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-zf4ph" event={"ID":"76a76f7a-7f38-4aac-8a57-a60f332306cb","Type":"ContainerStarted","Data":"bdb8e2c1e0ed657f16c752c4dc9bc138105f380d70369896278cbac413fcc13a"}
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.042931 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-k8p4x" event={"ID":"83db970d-f5a9-4a8f-9c65-0cd2500331b1","Type":"ContainerStarted","Data":"a61f11959e3a547f5786697f7734844b2d197305e20dcbc491f04c7528612074"}
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.047933 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gjw2g" event={"ID":"0aa34022-429c-4bba-91a8-229a7b634a50","Type":"ContainerStarted","Data":"2220642efdca18dcde9accc5561f75793922a2bfceebfe05d8a6cbaa25485665"}
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.053196 4806 patch_prober.go:28] interesting pod/console-operator-58897d9998-p957m container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/readyz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body=
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.053269 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-p957m" podUID="a81fbfae-81cd-4b3a-a2ef-771ca4884793" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.17:8443/readyz\": dial tcp 10.217.0.17:8443: connect: connection refused"
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.053753 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-28dbr" event={"ID":"2fe464df-b275-4f86-8750-6052a803b024","Type":"ContainerStarted","Data":"288ee1e15c4f7adc227c149b5c863fe4249e90ac06c920585227f69fc0f05282"}
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.062163 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-x92cw" event={"ID":"9a86e8c4-3e5a-4fd1-bad8-d314f474c4ee","Type":"ContainerStarted","Data":"4a89a3ecd068712a9b22a4cf5080c830c001a8c9d6dff07ee2b10fb74f74fcde"}
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.063171 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4s68g" event={"ID":"ce6c946f-c804-4b57-bc37-8169c677e231","Type":"ContainerStarted","Data":"94fa2dbac5afd633f921e4ee37c857e842e638846a28de9418495c121a19c89e"}
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.071733 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fhvbk" event={"ID":"41fbdcab-7837-4273-8aaa-70b4e1667988","Type":"ContainerStarted","Data":"4ab45a16e14d8447a5642591d2a304deef4ebcd1abfb7373684946e9f50c5acd"}
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.074981 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-cszqz"]
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.078353 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-lgjgk" event={"ID":"72d314ec-8059-4f5b-b4b7-91372748623e","Type":"ContainerStarted","Data":"13108d8796c229484839555c9ee2c4feff69c59b8b2d9dd5d309e20d9344b351"}
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.082635 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-8t729"]
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.084741 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-xx6dj" event={"ID":"f9b1a29e-c5b3-45fd-9082-b46293956184","Type":"ContainerStarted","Data":"1317cc9662172dd91ce8ec60bdaf6b67dfd54aeb20377694855c5de89dfa08ba"}
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.084787 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-xx6dj"
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.087072 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vj65b" event={"ID":"024b2329-b8db-400c-bbaa-f77ba9a3bdae","Type":"ContainerStarted","Data":"4840a1ec9fcaf00e00459e07c90224924290cf47af6a0750564156bf7d32afab"}
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.087950 4806 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx6dj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body=
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.087999 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xx6dj" podUID="f9b1a29e-c5b3-45fd-9082-b46293956184" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused"
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.090073 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-4c9r4" event={"ID":"4bb1d689-2d28-457a-9c48-0b21c3ac56b2","Type":"ContainerStarted","Data":"bb3f10223a3a8366d24b1395aad2bfc1d126db09d74eabf3e4a571ef2bfcd9c0"}
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.092110 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5tx2" event={"ID":"d3f9429a-5f3e-45bf-b7cc-dea3bee3e957","Type":"ContainerStarted","Data":"8a2b25d91ae8e8578871bf34fc8a9d3c620bd78f0741a299d315043a9a10fa4b"}
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.096538 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-h4m8m" event={"ID":"7142eedd-c71b-4c92-97a8-def92a981529","Type":"ContainerStarted","Data":"97df113234f638ae55fcec5fc955197c16615378aaa701ff18ed146681fd57bc"}
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.101679 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tx5m5"]
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.103895 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6hqx6" event={"ID":"7f5cd5de-2e48-4c15-9c5e-f20368bc172b","Type":"ContainerStarted","Data":"92211f166bd397d946c0a314bf36799f9fe21a45b0a58cd75b703e06de976db2"}
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.107106 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-9tjs2" event={"ID":"f394b01a-b495-4acf-bca9-0b23347a3358","Type":"ContainerStarted","Data":"47e4e71b27d81581e341ce5f9ef67aff95a0925b9c2fcb61fb9fa36f9842fd95"}
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.108498 4806 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-bn2sz container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.7:6443/healthz\": dial tcp 10.217.0.7:6443: connect: connection refused" start-of-body=
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.108577 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz" podUID="ca7da513-6cf5-43fc-afbe-ab1c8e785130" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.7:6443/healthz\": dial tcp 10.217.0.7:6443: connect: connection refused"
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.123030 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 14:55:21 crc kubenswrapper[4806]: E1125 14:55:21.123229 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:21.623204356 +0000 UTC m=+154.275346777 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.123481 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp"
Nov 25 14:55:21 crc kubenswrapper[4806]: E1125 14:55:21.125798 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:21.625782402 +0000 UTC m=+154.277924813 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:55:21 crc kubenswrapper[4806]: W1125 14:55:21.199533 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f1e0355_7806_4025_88f6_992756ffbe86.slice/crio-a919384a314dbdf60b44eb3941c30d3bc23b23455a205e71d9186214a3ca1443 WatchSource:0}: Error finding container a919384a314dbdf60b44eb3941c30d3bc23b23455a205e71d9186214a3ca1443: Status 404 returned error can't find the container with id a919384a314dbdf60b44eb3941c30d3bc23b23455a205e71d9186214a3ca1443
Nov 25 14:55:21 crc kubenswrapper[4806]: W1125 14:55:21.200232 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd58b6685_ca1a_4f73_a821_f5c4c37264ec.slice/crio-c5508e929bc7287db8fc521e085ec143f7cd47ba8fc108121d974e73ae526069 WatchSource:0}: Error finding container c5508e929bc7287db8fc521e085ec143f7cd47ba8fc108121d974e73ae526069: Status 404 returned error can't find the container with id c5508e929bc7287db8fc521e085ec143f7cd47ba8fc108121d974e73ae526069
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.233984 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 14:55:21 crc kubenswrapper[4806]: E1125 14:55:21.244152 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:21.744108592 +0000 UTC m=+154.396251213 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.337702 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp"
Nov 25 14:55:21 crc kubenswrapper[4806]: E1125 14:55:21.338175 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:21.838154381 +0000 UTC m=+154.490296852 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.421865 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-4c9r4" podStartSLOduration=132.421829845 podStartE2EDuration="2m12.421829845s" podCreationTimestamp="2025-11-25 14:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:21.417685073 +0000 UTC m=+154.069827504" watchObservedRunningTime="2025-11-25 14:55:21.421829845 +0000 UTC m=+154.073972256"
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.440286 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 14:55:21 crc kubenswrapper[4806]: E1125 14:55:21.443153 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:21.943125029 +0000 UTC m=+154.595267430 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.542823 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp"
Nov 25 14:55:21 crc kubenswrapper[4806]: E1125 14:55:21.543166 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:22.043149763 +0000 UTC m=+154.695292174 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.578063 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-xx6dj" podStartSLOduration=133.578042176 podStartE2EDuration="2m13.578042176s" podCreationTimestamp="2025-11-25 14:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:21.575954595 +0000 UTC m=+154.228097036" watchObservedRunningTime="2025-11-25 14:55:21.578042176 +0000 UTC m=+154.230184587"
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.643756 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 14:55:21 crc kubenswrapper[4806]: E1125 14:55:21.644289 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:22.144269099 +0000 UTC m=+154.796411500 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.704483 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-p957m" podStartSLOduration=133.704463334 podStartE2EDuration="2m13.704463334s" podCreationTimestamp="2025-11-25 14:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:21.657173277 +0000 UTC m=+154.309315778" watchObservedRunningTime="2025-11-25 14:55:21.704463334 +0000 UTC m=+154.356605755"
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.745504 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp"
Nov 25 14:55:21 crc kubenswrapper[4806]: E1125 14:55:21.746017 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:22.246004992 +0000 UTC m=+154.898147403 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.847114 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 14:55:21 crc kubenswrapper[4806]: E1125 14:55:21.847653 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:22.347627783 +0000 UTC m=+154.999770204 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.848092 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp"
Nov 25 14:55:21 crc kubenswrapper[4806]: E1125 14:55:21.848466 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:22.348455827 +0000 UTC m=+155.000598238 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.910559 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-xklng" podStartSLOduration=133.910539728 podStartE2EDuration="2m13.910539728s" podCreationTimestamp="2025-11-25 14:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:21.907848359 +0000 UTC m=+154.559990770" watchObservedRunningTime="2025-11-25 14:55:21.910539728 +0000 UTC m=+154.562682139"
Nov 25 14:55:21 crc kubenswrapper[4806]: I1125 14:55:21.949352 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 14:55:21 crc kubenswrapper[4806]: E1125 14:55:21.949743 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:22.449722396 +0000 UTC m=+155.101864817 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.040482 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-kfst9"
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.050948 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp"
Nov 25 14:55:22 crc kubenswrapper[4806]: E1125 14:55:22.051369 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:22.551351037 +0000 UTC m=+155.203493438 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.111625 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-9tjs2" event={"ID":"f394b01a-b495-4acf-bca9-0b23347a3358","Type":"ContainerStarted","Data":"8baa2c5cd9ece3361ac7c234e90bf8b607f8bcea763de8c7d080c2ac08fbd8e1"}
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.112916 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-j4l9j" event={"ID":"2e32043e-a11b-473b-b42a-ecc01450a942","Type":"ContainerStarted","Data":"decae85b9f5ca437da82e81a05c7654e7515090166fae60176e8e2af502431e5"}
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.113870 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gm728" event={"ID":"c14a961b-4eb5-4a10-abe7-bdd5ddff30bc","Type":"ContainerStarted","Data":"a030d09224de7e9aaed2a591502fd2985ae1deb018a66db0460128b7bf2fc34e"}
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.113896 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gm728" event={"ID":"c14a961b-4eb5-4a10-abe7-bdd5ddff30bc","Type":"ContainerStarted","Data":"9d3f05fce218e60204e82981da82c6aad5de6ff37630480238a4caf975fafc5a"}
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.114646 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-8t729" event={"ID":"d58b6685-ca1a-4f73-a821-f5c4c37264ec","Type":"ContainerStarted","Data":"c5508e929bc7287db8fc521e085ec143f7cd47ba8fc108121d974e73ae526069"}
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.115365 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tx5m5" event={"ID":"17ede0a7-8694-488d-822c-47e76211a19f","Type":"ContainerStarted","Data":"d01b7f73e262bb81842da5a4e170837d8e920dbbcf2b1e40c3d8e6b80ca39601"}
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.116046 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2nrmh" event={"ID":"1531828a-4e80-4d77-92c0-99e9ae888fae","Type":"ContainerStarted","Data":"ba056f883813c5b1d7bd41a6209448214c22afc12172d45c4bd200f4f79dc2e8"}
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.116950 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-grv4v" event={"ID":"16a8fa04-87f4-46fa-a310-aa62275684c0","Type":"ContainerStarted","Data":"723fedc623a3df78dc886a39f2c7531ea5a82c7dbb7af81b4010e7bcf3326a3f"}
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.118074 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6j244" event={"ID":"b8400987-b2f7-44fe-b1b3-8689c2465cd3","Type":"ContainerStarted","Data":"6935c418c4e925e08ba3ae221b529a56c5e0c24d3e122dff7dceedb3b8f8876f"}
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.119701 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gjw2g" event={"ID":"0aa34022-429c-4bba-91a8-229a7b634a50","Type":"ContainerStarted","Data":"5ed14e522ef5156dc3e1b203e4474e13228794f69c5af9afb6f0b330d48f0c46"}
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.120898 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-7jcqc" event={"ID":"b43d27a6-a9d7-484a-a8d4-f12e06bce31f","Type":"ContainerStarted","Data":"3204beb6c1acad8a2bf95ac47a0b98d92e46a227848581f981145e79e4b540c5"}
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.121999 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ppwt" event={"ID":"97b5ca54-68e2-4db9-84fa-a77e3f61735e","Type":"ContainerStarted","Data":"fbb067e8f1ac57f50a174d3123222f59b54279b7e42ce8a2ce295e7820d94738"}
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.122056 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ppwt" event={"ID":"97b5ca54-68e2-4db9-84fa-a77e3f61735e","Type":"ContainerStarted","Data":"9f969a4927591771f20898b1b647537f2646b5594b66a93f4c40716602038654"}
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.123327 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-28dbr" event={"ID":"2fe464df-b275-4f86-8750-6052a803b024","Type":"ContainerStarted","Data":"ba7ae66870ea2981d5296e7f207e32738c921c406e798f0171334e8d3707ec7c"}
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.124287 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-28dbr"
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.128001 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fhvbk" event={"ID":"41fbdcab-7837-4273-8aaa-70b4e1667988","Type":"ContainerStarted","Data":"05c95608301d683507759802ec8a639fc417f2cde368db3c8d9998d496812d3a"}
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.129614 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-zf4ph" event={"ID":"76a76f7a-7f38-4aac-8a57-a60f332306cb","Type":"ContainerStarted","Data":"8cc609a8039c233d4453741d584cec2965c72302318c556e61f7601718ebe762"}
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.130989 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4s68g" event={"ID":"ce6c946f-c804-4b57-bc37-8169c677e231","Type":"ContainerStarted","Data":"b6f4d3a21fa0dbf9ecb372ad3c6adfb7c07c8c935bf8e048964b3d7bad12e941"}
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.132646 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-h4m8m" event={"ID":"7142eedd-c71b-4c92-97a8-def92a981529","Type":"ContainerStarted","Data":"8c8812a2391d3b087d877bccfa6f2b4600b48c7eb91dc60dad82cb083d242d41"}
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.137139 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6hqx6" event={"ID":"7f5cd5de-2e48-4c15-9c5e-f20368bc172b","Type":"ContainerStarted","Data":"68243687caa9d5eb7fb8578aad7bd71460e316f4bace5dd5e4abaaaf4328149a"}
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.139458 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-k8p4x" event={"ID":"83db970d-f5a9-4a8f-9c65-0cd2500331b1","Type":"ContainerStarted","Data":"9f4fd580320462d018db3240e9a6edd085e31d563210f879cf20efec6530fdb2"}
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.140233 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-k8p4x"
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.141611 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-cszqz" event={"ID":"1f1e0355-7806-4025-88f6-992756ffbe86","Type":"ContainerStarted","Data":"a919384a314dbdf60b44eb3941c30d3bc23b23455a205e71d9186214a3ca1443"}
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.142760 4806 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-k8p4x container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body=
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.142816 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-k8p4x" podUID="83db970d-f5a9-4a8f-9c65-0cd2500331b1" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused"
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.142774 4806 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-28dbr container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" start-of-body=
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.143148 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-28dbr" podUID="2fe464df-b275-4f86-8750-6052a803b024" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused"
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.143599 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5tx2" event={"ID":"d3f9429a-5f3e-45bf-b7cc-dea3bee3e957","Type":"ContainerStarted","Data":"40ac7d0dd7d3664c2d446ec66c67d7070625e6ed6d410c2ec87b8e0ed44617d1"}
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.143960 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5tx2"
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.145123 4806 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-p5tx2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body=
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.145161 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5tx2" podUID="d3f9429a-5f3e-45bf-b7cc-dea3bee3e957" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused"
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.147748 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-h6lh4" event={"ID":"eeac792f-d07c-446b-8dee-00f726ea273c","Type":"ContainerStarted","Data":"634d1250bfff81468d7902be16ec50a49c8d117c5155faaca6bf158cdb440fdf"}
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.153866 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 14:55:22 crc kubenswrapper[4806]: E1125 14:55:22.155555 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:22.655523432 +0000 UTC m=+155.307665843 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.156870 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfbwx" event={"ID":"3ad5dac9-54d3-4435-8f38-77e91d1965e0","Type":"ContainerStarted","Data":"27e493caa61c5486984173d36a3d09dca4043ef6cb0822cac08fc0bdc2544f34"}
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.156950 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfbwx" event={"ID":"3ad5dac9-54d3-4435-8f38-77e91d1965e0","Type":"ContainerStarted","Data":"19c0024bf57ad2b2a3918ab23b438e76fd4843c0500d70c3d4149458d0c796a8"}
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.159732 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mvkmg" event={"ID":"40730d61-24e2-4810-89f7-0a34fe204440","Type":"ContainerStarted","Data":"33c2f4890e75c7e557a57d0d51cd5f868e1dd2a166df7f47de7e029926cbcb51"}
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.161344 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ptx4l" event={"ID":"3f3d083b-5922-4da3-ad9e-e5f323836cba","Type":"ContainerStarted","Data":"874aed21e8452ebc9e63e977d0ba05637753260decc99bf7c2e13618dbb29897"}
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.168324 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-lgjgk" event={"ID":"72d314ec-8059-4f5b-b4b7-91372748623e","Type":"ContainerStarted","Data":"59c7ecd940285e6ef25616597ef1893328e0f22f3b0f4db13fedde480e634c99"}
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.169541 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n727s" event={"ID":"0fcbcb3e-8a88-465d-9b1e-8e547844bd93","Type":"ContainerStarted","Data":"9548c75422ec63e0077cc05823f5bc6619c5d59fd16379440e892208927cc1ce"}
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.170084 4806 patch_prober.go:28] interesting pod/console-operator-58897d9998-p957m container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/readyz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body=
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.170127 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-p957m" podUID="a81fbfae-81cd-4b3a-a2ef-771ca4884793" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.17:8443/readyz\": dial tcp 10.217.0.17:8443: connect: connection refused"
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.170154 4806 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx6dj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body=
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.170193 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xx6dj" podUID="f9b1a29e-c5b3-45fd-9082-b46293956184" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused"
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.184879 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-kfst9" podStartSLOduration=133.184857542 podStartE2EDuration="2m13.184857542s" podCreationTimestamp="2025-11-25 14:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:22.184686367 +0000 UTC m=+154.836828778" watchObservedRunningTime="2025-11-25 14:55:22.184857542 +0000 UTC m=+154.836999953"
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.228091 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5tx2" podStartSLOduration=133.22806889 podStartE2EDuration="2m13.22806889s" podCreationTimestamp="2025-11-25 14:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:22.220866288 +0000 UTC m=+154.873008719" watchObservedRunningTime="2025-11-25 14:55:22.22806889 +0000 UTC m=+154.880211301"
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.262391 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp"
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.263412 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-6j244" podStartSLOduration=134.263391976 podStartE2EDuration="2m14.263391976s" podCreationTimestamp="2025-11-25 14:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:22.26286166 +0000 UTC m=+154.915004141" watchObservedRunningTime="2025-11-25 14:55:22.263391976 +0000 UTC m=+154.915534387"
Nov 25 14:55:22 crc kubenswrapper[4806]: E1125 14:55:22.265549 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:22.765531518 +0000 UTC m=+155.417674019 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.266296 4806 patch_prober.go:28] interesting pod/router-default-5444994796-kfst9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 25 14:55:22 crc kubenswrapper[4806]: [-]has-synced failed: reason withheld
Nov 25 14:55:22 crc kubenswrapper[4806]: [+]process-running ok
Nov 25 14:55:22 crc kubenswrapper[4806]: healthz check failed
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.266350 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kfst9" podUID="4e9e656c-2e2c-4ed4-b720-8fdb639a029d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.338088 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9shgk" podStartSLOduration=133.338070526 podStartE2EDuration="2m13.338070526s" podCreationTimestamp="2025-11-25 14:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:22.302435771 +0000 UTC m=+154.954578192" watchObservedRunningTime="2025-11-25 14:55:22.338070526 +0000 UTC m=+154.990212937"
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.364113 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 14:55:22 crc kubenswrapper[4806]: E1125 14:55:22.364234 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:22.864206872 +0000 UTC m=+155.516349313 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.364500 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp"
Nov 25 14:55:22 crc kubenswrapper[4806]: E1125 14:55:22.364784 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:22.864775109 +0000 UTC m=+155.516917520 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.381974 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-k8p4x" podStartSLOduration=133.381950063 podStartE2EDuration="2m13.381950063s" podCreationTimestamp="2025-11-25 14:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:22.341502637 +0000 UTC m=+154.993645048" watchObservedRunningTime="2025-11-25 14:55:22.381950063 +0000 UTC m=+155.034092494"
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.383971 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-zf4ph" podStartSLOduration=133.383958362 podStartE2EDuration="2m13.383958362s" podCreationTimestamp="2025-11-25 14:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:22.379586794 +0000 UTC m=+155.031729205" watchObservedRunningTime="2025-11-25 14:55:22.383958362 +0000 UTC m=+155.036100773"
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.425692 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-lgjgk" podStartSLOduration=133.425673335 podStartE2EDuration="2m13.425673335s" podCreationTimestamp="2025-11-25 14:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:22.424933213 +0000 UTC m=+155.077075634" watchObservedRunningTime="2025-11-25 14:55:22.425673335 +0000 UTC m=+155.077815746"
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.461249 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-h6lh4" podStartSLOduration=134.461232998 podStartE2EDuration="2m14.461232998s" podCreationTimestamp="2025-11-25 14:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:22.457905381 +0000 UTC m=+155.110047792" watchObservedRunningTime="2025-11-25 14:55:22.461232998 +0000 UTC m=+155.113375409"
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.465465 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 14:55:22 crc kubenswrapper[4806]: E1125 14:55:22.465646 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:22.965616427 +0000 UTC m=+155.617758838 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.467467 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp"
Nov 25 14:55:22 crc kubenswrapper[4806]: E1125 14:55:22.467532 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:22.967512542 +0000 UTC m=+155.619654953 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.500962 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-28dbr" podStartSLOduration=133.500936413 podStartE2EDuration="2m13.500936413s" podCreationTimestamp="2025-11-25 14:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:22.497932934 +0000 UTC m=+155.150075345" watchObservedRunningTime="2025-11-25 14:55:22.500936413 +0000 UTC m=+155.153078814"
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.540353 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hcfmr" podStartSLOduration=134.540334248 podStartE2EDuration="2m14.540334248s" podCreationTimestamp="2025-11-25 14:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:22.53973624 +0000 UTC m=+155.191878651" watchObservedRunningTime="2025-11-25 14:55:22.540334248 +0000 UTC m=+155.192476659"
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.568833 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 14:55:22 crc kubenswrapper[4806]: E1125 14:55:22.569250 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:23.069234456 +0000 UTC m=+155.721376867 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.580746 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-6hqx6" podStartSLOduration=133.580729753 podStartE2EDuration="2m13.580729753s" podCreationTimestamp="2025-11-25 14:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:22.577609121 +0000 UTC m=+155.229751532" watchObservedRunningTime="2025-11-25 14:55:22.580729753 +0000 UTC m=+155.232872164"
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.623529 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-fhvbk" podStartSLOduration=133.623509687 podStartE2EDuration="2m13.623509687s" podCreationTimestamp="2025-11-25 14:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:22.621428286 +0000 UTC m=+155.273570697" watchObservedRunningTime="2025-11-25 14:55:22.623509687 +0000 UTC m=+155.275652098"
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.657081 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-h4m8m" podStartSLOduration=133.657056641 podStartE2EDuration="2m13.657056641s" podCreationTimestamp="2025-11-25 14:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:22.655358692 +0000 UTC m=+155.307501113" watchObservedRunningTime="2025-11-25 14:55:22.657056641 +0000 UTC m=+155.309199052"
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.673178 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp"
Nov 25 14:55:22 crc kubenswrapper[4806]: E1125 14:55:22.673487 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:23.173474063 +0000 UTC m=+155.825616474 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.774146 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 14:55:22 crc kubenswrapper[4806]: E1125 14:55:22.774323 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:23.274278469 +0000 UTC m=+155.926420880 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.774849 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp"
Nov 25 14:55:22 crc kubenswrapper[4806]: E1125 14:55:22.775213 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:23.275201286 +0000 UTC m=+155.927343697 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.876015 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:22 crc kubenswrapper[4806]: E1125 14:55:22.876270 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:23.376215369 +0000 UTC m=+156.028357790 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:22 crc kubenswrapper[4806]: I1125 14:55:22.977583 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:22 crc kubenswrapper[4806]: E1125 14:55:22.977995 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:23.477976904 +0000 UTC m=+156.130119315 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.045737 4806 patch_prober.go:28] interesting pod/router-default-5444994796-kfst9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 14:55:23 crc kubenswrapper[4806]: [-]has-synced failed: reason withheld Nov 25 14:55:23 crc kubenswrapper[4806]: [+]process-running ok Nov 25 14:55:23 crc kubenswrapper[4806]: healthz check failed Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.045804 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kfst9" podUID="4e9e656c-2e2c-4ed4-b720-8fdb639a029d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.079502 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:23 crc kubenswrapper[4806]: E1125 14:55:23.079695 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:23.579649946 +0000 UTC m=+156.231792367 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.079932 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:23 crc kubenswrapper[4806]: E1125 14:55:23.080282 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:23.580265204 +0000 UTC m=+156.232407615 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.181153 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:23 crc kubenswrapper[4806]: E1125 14:55:23.181371 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:23.681337628 +0000 UTC m=+156.333480049 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.181462 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:23 crc kubenswrapper[4806]: E1125 14:55:23.181886 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:23.681865743 +0000 UTC m=+156.334008144 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.188787 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tx5m5" event={"ID":"17ede0a7-8694-488d-822c-47e76211a19f","Type":"ContainerStarted","Data":"814b56b74e0edd9e997aa03dd99fb621d2ee669085801fcc6b601e763f9f5802"} Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.189627 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tx5m5" Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.191715 4806 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-tx5m5 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.191759 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tx5m5" podUID="17ede0a7-8694-488d-822c-47e76211a19f" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.192680 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gjw2g" event={"ID":"0aa34022-429c-4bba-91a8-229a7b634a50","Type":"ContainerStarted","Data":"27057626a8d204b5f051c396369b4c414c56fab092c0132bf38b7f195ff010fa"} Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.195491 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n727s" event={"ID":"0fcbcb3e-8a88-465d-9b1e-8e547844bd93","Type":"ContainerStarted","Data":"308b3e331b834100017fd64311b1ff79e188cc34a19642e59c04311c645ac160"} Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.199983 4806 generic.go:334] "Generic (PLEG): container finished" podID="40730d61-24e2-4810-89f7-0a34fe204440" containerID="9a8f6cd589f205862a7e49cc81e63180e58505c7b10ffc1a35372ff45df7f585" exitCode=0 Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.200030 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mvkmg" event={"ID":"40730d61-24e2-4810-89f7-0a34fe204440","Type":"ContainerDied","Data":"9a8f6cd589f205862a7e49cc81e63180e58505c7b10ffc1a35372ff45df7f585"} Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.201930 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-cszqz" event={"ID":"1f1e0355-7806-4025-88f6-992756ffbe86","Type":"ContainerStarted","Data":"2e52620915b7441aa3826fb4b21acf2a45e4c9e6437e7950e658b5ee1e3ff20d"} Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.201968 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-cszqz" 
event={"ID":"1f1e0355-7806-4025-88f6-992756ffbe86","Type":"ContainerStarted","Data":"5a601faee8f2ffb18b143dcb03e49abd62e7f3cf809ec89cb27c75225b36f1a8"} Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.204813 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-j4l9j" event={"ID":"2e32043e-a11b-473b-b42a-ecc01450a942","Type":"ContainerStarted","Data":"feee3ff15fa951978babac7991c003f7072944e84ae95309995281037f935771"} Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.206719 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfbwx" event={"ID":"3ad5dac9-54d3-4435-8f38-77e91d1965e0","Type":"ContainerStarted","Data":"c862a74e164ddf3797af8adce4e871c31fb625dfbc6acbab4bb8dbe562181c13"} Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.219770 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tx5m5" podStartSLOduration=134.219746174 podStartE2EDuration="2m14.219746174s" podCreationTimestamp="2025-11-25 14:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:23.211508553 +0000 UTC m=+155.863650964" watchObservedRunningTime="2025-11-25 14:55:23.219746174 +0000 UTC m=+155.871888595" Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.222181 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vj65b" event={"ID":"024b2329-b8db-400c-bbaa-f77ba9a3bdae","Type":"ContainerStarted","Data":"84cdd285cf2d9efbee9d3dd5bca2cdb2b9e196b709216db8f19e2772486cfb3b"} Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.228600 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ptx4l" event={"ID":"3f3d083b-5922-4da3-ad9e-e5f323836cba","Type":"ContainerStarted","Data":"00d8042ae81d8432f5e349b545826779e716a2c44efb92e61e032725e656d0fb"} Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.235661 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4s68g" event={"ID":"ce6c946f-c804-4b57-bc37-8169c677e231","Type":"ContainerStarted","Data":"42818231156cc85af7b7a68055141e15e7f1f3827b68f663c0df2bc60c98ae37"} Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.236622 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4s68g" Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.259553 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-g6w68" event={"ID":"3a93da81-98cb-4a53-9c02-60cc144ebf9d","Type":"ContainerStarted","Data":"0ef14f20e326ca0b1f06c773a511af1ba97b4e183e61f8a6781cb57c8bae525f"} Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.285926 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.290932 4806 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfbwx" podStartSLOduration=135.290919282 podStartE2EDuration="2m15.290919282s" podCreationTimestamp="2025-11-25 14:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:23.290492439 +0000 UTC m=+155.942634850" watchObservedRunningTime="2025-11-25 14:55:23.290919282 +0000 UTC m=+155.943061683" Nov 25 14:55:23 crc kubenswrapper[4806]: E1125 14:55:23.291163 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:23.791146178 +0000 UTC m=+156.443288599 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.291751 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-j4l9j" podStartSLOduration=134.291744406 podStartE2EDuration="2m14.291744406s" podCreationTimestamp="2025-11-25 14:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:23.248095476 +0000 UTC m=+155.900237897" watchObservedRunningTime="2025-11-25 14:55:23.291744406 +0000 UTC m=+155.943886817" Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.293004 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-grv4v" event={"ID":"16a8fa04-87f4-46fa-a310-aa62275684c0","Type":"ContainerStarted","Data":"9d74770c46e73b7322a76a112ab761e712461dc61d7e03c597006e29fe8a932d"} Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.298969 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-g6w68" Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.298990 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-g6w68" Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.305178 4806 patch_prober.go:28] interesting pod/apiserver-76f77b778f-g6w68 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.30:8443/livez\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.305281 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-g6w68" podUID="3a93da81-98cb-4a53-9c02-60cc144ebf9d" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.30:8443/livez\": dial tcp 10.217.0.30:8443: connect: connection refused" Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.311275 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/machine-api-operator-5694c8668f-9tjs2" event={"ID":"f394b01a-b495-4acf-bca9-0b23347a3358","Type":"ContainerStarted","Data":"ea18443a678a134a383575fc81fce5d70dd45b8537ce2ab65b3da3bd64bb8902"} Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.321527 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-7jcqc" event={"ID":"b43d27a6-a9d7-484a-a8d4-f12e06bce31f","Type":"ContainerStarted","Data":"f9bf6bb3d992116897ff4726bd455802d37d57ee90214a7cda683f607249a002"} Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.341618 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-8t729" event={"ID":"d58b6685-ca1a-4f73-a821-f5c4c37264ec","Type":"ContainerStarted","Data":"3f790fa2c6417a14d6f591bca931c48bd0ff054860e1ae569ca060608bc6c097"} Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.356273 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-n727s" podStartSLOduration=134.356237198 podStartE2EDuration="2m14.356237198s" podCreationTimestamp="2025-11-25 14:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:23.354015172 +0000 UTC m=+156.006157593" watchObservedRunningTime="2025-11-25 14:55:23.356237198 +0000 UTC m=+156.008379609" Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.379088 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2nrmh" event={"ID":"1531828a-4e80-4d77-92c0-99e9ae888fae","Type":"ContainerStarted","Data":"3e6d82a8996ef98fd5adbd16151873c9488d25d2b9bbe6f33479a30a78efe122"} Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.379137 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2nrmh" event={"ID":"1531828a-4e80-4d77-92c0-99e9ae888fae","Type":"ContainerStarted","Data":"b491ace7e8be11a92c2a40f6647aa881f10d3452d70c728e6829f9bde3fbf8bf"} Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.381105 4806 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-28dbr container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" start-of-body= Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.381153 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-28dbr" podUID="2fe464df-b275-4f86-8750-6052a803b024" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.384224 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-gm728" Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.388585 4806 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-k8p4x container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 
14:55:23.388649 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-k8p4x" podUID="83db970d-f5a9-4a8f-9c65-0cd2500331b1" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.399478 4806 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-gm728 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.399533 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-gm728" podUID="c14a961b-4eb5-4a10-abe7-bdd5ddff30bc" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.399611 4806 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-p5tx2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.399627 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5tx2" podUID="d3f9429a-5f3e-45bf-b7cc-dea3bee3e957" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.400227 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:23 crc kubenswrapper[4806]: E1125 14:55:23.402606 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:23.902593637 +0000 UTC m=+156.554736048 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.421187 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-grv4v" podStartSLOduration=134.421166812 podStartE2EDuration="2m14.421166812s" podCreationTimestamp="2025-11-25 14:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:23.42076825 +0000 UTC m=+156.072910671" watchObservedRunningTime="2025-11-25 14:55:23.421166812 +0000 UTC m=+156.073309223" Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.421961 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gjw2g" podStartSLOduration=135.421954475 podStartE2EDuration="2m15.421954475s" podCreationTimestamp="2025-11-25 14:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:23.392052658 +0000 UTC m=+156.044195069" watchObservedRunningTime="2025-11-25 14:55:23.421954475 +0000 UTC m=+156.074096886" Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.449648 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-8t729" podStartSLOduration=7.449633337 podStartE2EDuration="7.449633337s" podCreationTimestamp="2025-11-25 14:55:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:23.448219175 +0000 UTC m=+156.100361586" watchObservedRunningTime="2025-11-25 14:55:23.449633337 +0000 UTC m=+156.101775748" Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.481730 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5ppwt" podStartSLOduration=134.481713488 podStartE2EDuration="2m14.481713488s" podCreationTimestamp="2025-11-25 14:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:23.48110417 +0000 UTC m=+156.133246591" watchObservedRunningTime="2025-11-25 14:55:23.481713488 +0000 UTC m=+156.133855889" Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.502523 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:23 crc kubenswrapper[4806]: E1125 14:55:23.506659 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2025-11-25 14:55:24.006630198 +0000 UTC m=+156.658772609 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.530042 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-9tjs2" podStartSLOduration=134.530024104 podStartE2EDuration="2m14.530024104s" podCreationTimestamp="2025-11-25 14:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:23.528194301 +0000 UTC m=+156.180336712" watchObservedRunningTime="2025-11-25 14:55:23.530024104 +0000 UTC m=+156.182166515" Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.584157 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-gm728" podStartSLOduration=134.584140162 podStartE2EDuration="2m14.584140162s" podCreationTimestamp="2025-11-25 14:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:23.581674669 +0000 UTC m=+156.233817080" watchObservedRunningTime="2025-11-25 14:55:23.584140162 +0000 UTC m=+156.236282563" Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.604024 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:23 crc kubenswrapper[4806]: E1125 14:55:23.604391 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:24.104379765 +0000 UTC m=+156.756522176 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.633467 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-g6w68" podStartSLOduration=135.633446388 podStartE2EDuration="2m15.633446388s" podCreationTimestamp="2025-11-25 14:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:23.626629058 +0000 UTC m=+156.278771479" watchObservedRunningTime="2025-11-25 14:55:23.633446388 +0000 UTC m=+156.285588809" Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.658448 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2nrmh" podStartSLOduration=134.65842975 podStartE2EDuration="2m14.65842975s" podCreationTimestamp="2025-11-25 14:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:23.654008861 +0000 UTC m=+156.306151282" watchObservedRunningTime="2025-11-25 14:55:23.65842975 +0000 UTC m=+156.310572161" Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.705071 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:23 crc kubenswrapper[4806]: E1125 14:55:23.705341 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:24.205282585 +0000 UTC m=+156.857425006 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.705741 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:23 crc kubenswrapper[4806]: E1125 14:55:23.706059 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:24.206046937 +0000 UTC m=+156.858189348 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.720745 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4s68g" podStartSLOduration=134.720728538 podStartE2EDuration="2m14.720728538s" podCreationTimestamp="2025-11-25 14:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:23.719613825 +0000 UTC m=+156.371756256" watchObservedRunningTime="2025-11-25 14:55:23.720728538 +0000 UTC m=+156.372870949" Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.747305 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-vj65b" podStartSLOduration=134.747284506 podStartE2EDuration="2m14.747284506s" podCreationTimestamp="2025-11-25 14:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:23.744765893 +0000 UTC m=+156.396908324" watchObservedRunningTime="2025-11-25 14:55:23.747284506 +0000 UTC m=+156.399426917" Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.769901 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-7jcqc" podStartSLOduration=134.769885009 podStartE2EDuration="2m14.769885009s" podCreationTimestamp="2025-11-25 14:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:23.769139447 +0000 UTC m=+156.421281858" watchObservedRunningTime="2025-11-25 14:55:23.769885009 +0000 UTC m=+156.422027420" Nov 25 14:55:23 crc 
kubenswrapper[4806]: I1125 14:55:23.807758 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:23 crc kubenswrapper[4806]: E1125 14:55:23.808194 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:24.308173722 +0000 UTC m=+156.960316133 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:23 crc kubenswrapper[4806]: I1125 14:55:23.909857 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:23 crc kubenswrapper[4806]: E1125 14:55:23.910398 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:24.41038196 +0000 UTC m=+157.062524371 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:24 crc kubenswrapper[4806]: I1125 14:55:24.011277 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:24 crc kubenswrapper[4806]: E1125 14:55:24.011484 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:24.511448474 +0000 UTC m=+157.163590885 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:24 crc kubenswrapper[4806]: I1125 14:55:24.011782 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:24 crc kubenswrapper[4806]: E1125 14:55:24.012133 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:24.512124844 +0000 UTC m=+157.164267255 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:24 crc kubenswrapper[4806]: I1125 14:55:24.045212 4806 patch_prober.go:28] interesting pod/router-default-5444994796-kfst9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 14:55:24 crc kubenswrapper[4806]: [-]has-synced failed: reason withheld Nov 25 14:55:24 crc kubenswrapper[4806]: [+]process-running ok Nov 25 14:55:24 crc kubenswrapper[4806]: healthz check failed Nov 25 14:55:24 crc kubenswrapper[4806]: I1125 14:55:24.045270 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kfst9" podUID="4e9e656c-2e2c-4ed4-b720-8fdb639a029d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 14:55:24 crc kubenswrapper[4806]: I1125 14:55:24.115496 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:24 crc kubenswrapper[4806]: E1125 14:55:24.116009 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:24.61598649 +0000 UTC m=+157.268128901 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:24 crc kubenswrapper[4806]: I1125 14:55:24.217079 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:24 crc kubenswrapper[4806]: E1125 14:55:24.217611 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:24.71758775 +0000 UTC m=+157.369730231 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:24 crc kubenswrapper[4806]: I1125 14:55:24.318326 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:24 crc kubenswrapper[4806]: E1125 14:55:24.318709 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:24.818678105 +0000 UTC m=+157.470820516 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:24 crc kubenswrapper[4806]: I1125 14:55:24.386187 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hcfmr" Nov 25 14:55:24 crc kubenswrapper[4806]: I1125 14:55:24.387740 4806 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-hcfmr container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Nov 25 14:55:24 crc kubenswrapper[4806]: I1125 14:55:24.387803 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hcfmr" podUID="be0fd1be-42ae-4954-99f6-14807b522398" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Nov 25 14:55:24 crc kubenswrapper[4806]: I1125 14:55:24.388045 4806 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-hcfmr container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Nov 25 14:55:24 crc kubenswrapper[4806]: I1125 14:55:24.388070 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hcfmr" podUID="be0fd1be-42ae-4954-99f6-14807b522398" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Nov 25 14:55:24 crc kubenswrapper[4806]: I1125 14:55:24.388574 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mvkmg" event={"ID":"40730d61-24e2-4810-89f7-0a34fe204440","Type":"ContainerStarted","Data":"c2d7e5ddd591d64b50408b992ec930fa9d0a0010e537249d26fda6384afcc533"} Nov 25 14:55:24 crc kubenswrapper[4806]: I1125 14:55:24.389270 4806 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-gm728 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body= Nov 25 14:55:24 crc kubenswrapper[4806]: I1125 14:55:24.389301 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-gm728" podUID="c14a961b-4eb5-4a10-abe7-bdd5ddff30bc" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" Nov 25 14:55:24 crc kubenswrapper[4806]: I1125 14:55:24.389538 4806 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-k8p4x container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Nov 25 14:55:24 crc kubenswrapper[4806]: I1125 14:55:24.389560 4806 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-tx5m5 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Nov 25 14:55:24 crc kubenswrapper[4806]: I1125 14:55:24.389588 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-k8p4x" podUID="83db970d-f5a9-4a8f-9c65-0cd2500331b1" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.13:8443/healthz\": dial tcp 10.217.0.13:8443: connect: connection refused" Nov 25 14:55:24 crc kubenswrapper[4806]: I1125 14:55:24.389644 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tx5m5" podUID="17ede0a7-8694-488d-822c-47e76211a19f" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Nov 25 14:55:24 crc kubenswrapper[4806]: I1125 14:55:24.389892 4806 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-28dbr container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" start-of-body= Nov 25 14:55:24 crc kubenswrapper[4806]: I1125 14:55:24.389953 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-28dbr" podUID="2fe464df-b275-4f86-8750-6052a803b024" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" Nov 25 14:55:24 crc kubenswrapper[4806]: I1125 14:55:24.390067 4806 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-hcfmr container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Nov 25 14:55:24 crc kubenswrapper[4806]: I1125 14:55:24.390134 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hcfmr" podUID="be0fd1be-42ae-4954-99f6-14807b522398" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Nov 25 14:55:24 crc kubenswrapper[4806]: I1125 14:55:24.391178 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-cszqz" Nov 25 14:55:24 crc kubenswrapper[4806]: I1125 14:55:24.417279 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ptx4l" podStartSLOduration=135.417263626 podStartE2EDuration="2m15.417263626s" podCreationTimestamp="2025-11-25 14:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:23.800184198 +0000 UTC m=+156.452326629" watchObservedRunningTime="2025-11-25 14:55:24.417263626 +0000 UTC 
m=+157.069406037" Nov 25 14:55:24 crc kubenswrapper[4806]: I1125 14:55:24.418838 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-cszqz" podStartSLOduration=8.418830232 podStartE2EDuration="8.418830232s" podCreationTimestamp="2025-11-25 14:55:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:24.414785643 +0000 UTC m=+157.066928054" watchObservedRunningTime="2025-11-25 14:55:24.418830232 +0000 UTC m=+157.070972643" Nov 25 14:55:24 crc kubenswrapper[4806]: I1125 14:55:24.421542 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:24 crc kubenswrapper[4806]: E1125 14:55:24.422056 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:24.922038726 +0000 UTC m=+157.574181137 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:24 crc kubenswrapper[4806]: I1125 14:55:24.442878 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mvkmg" podStartSLOduration=135.442857237 podStartE2EDuration="2m15.442857237s" podCreationTimestamp="2025-11-25 14:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:24.441176987 +0000 UTC m=+157.093319418" watchObservedRunningTime="2025-11-25 14:55:24.442857237 +0000 UTC m=+157.094999648" Nov 25 14:55:24 crc kubenswrapper[4806]: I1125 14:55:24.522651 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:24 crc kubenswrapper[4806]: E1125 14:55:24.522891 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:25.022856603 +0000 UTC m=+157.674999014 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:24 crc kubenswrapper[4806]: I1125 14:55:24.523224 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:24 crc kubenswrapper[4806]: E1125 14:55:24.528748 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:25.028729195 +0000 UTC m=+157.680871596 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:24 crc kubenswrapper[4806]: I1125 14:55:24.555660 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mvkmg" Nov 25 14:55:24 crc kubenswrapper[4806]: I1125 14:55:24.555996 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mvkmg" Nov 25 14:55:24 crc kubenswrapper[4806]: I1125 14:55:24.560462 4806 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-mvkmg container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.32:8443/livez\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Nov 25 14:55:24 crc kubenswrapper[4806]: I1125 14:55:24.560521 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mvkmg" podUID="40730d61-24e2-4810-89f7-0a34fe204440" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.32:8443/livez\": dial tcp 10.217.0.32:8443: connect: connection refused" Nov 25 14:55:24 crc kubenswrapper[4806]: I1125 14:55:24.624649 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:24 crc kubenswrapper[4806]: E1125 14:55:24.624842 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
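Both the MountDevice and TearDown failures above trace to one precondition: before issuing any CSI RPC, the kubelet looks the volume's driver up in its registry of node plugins, and kubevirt.io.hostpath-provisioner has not registered on this node yet, so every operation on pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 is rejected before anything is attempted. Registered drivers are also mirrored into the node's CSINode object, which makes the state easy to confirm from outside. A hedged client-go sketch (the node name "crc" is taken from the log; the kubeconfig handling is an assumption):

```go
// csinode_check.go - sketch: print the CSI drivers registered on a node.
// Assumes KUBECONFIG points at the cluster.
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	csiNode, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// kubevirt.io.hostpath-provisioner should appear here once its
	// node plugin has registered with the kubelet.
	for _, d := range csiNode.Spec.Drivers {
		fmt.Println("registered CSI driver:", d.Name)
	}
}
```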
[...]
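Each failure arms a fresh backoff window ("No retries permitted until ... durationBeforeRetry 500ms"), and the volume reconciler re-attempts the operation once that window has expired, which is why the same error pair recurs several times per second for as long as the driver stays unregistered. A generic sketch of that retry-after-delay pattern (mountDevice is a hypothetical stand-in, not kubelet source):

```go
// retry_sketch.go - generic sketch of the retry pattern behind the
// "No retries permitted until ... (durationBeforeRetry 500ms)" messages.
package main

import (
	"errors"
	"fmt"
	"time"
)

var errDriverNotRegistered = errors.New(
	"driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")

// mountDevice is hypothetical; it fails while the driver is unregistered.
func mountDevice(driverRegistered bool) error {
	if !driverRegistered {
		return errDriverNotRegistered
	}
	return nil
}

func main() {
	const durationBeforeRetry = 500 * time.Millisecond
	var notBefore time.Time // zero value: first attempt allowed immediately

	for attempt := 1; attempt <= 5; attempt++ {
		if wait := time.Until(notBefore); wait > 0 {
			time.Sleep(wait) // sit out the backoff window before retrying
		}
		if err := mountDevice(false); err != nil {
			notBefore = time.Now().Add(durationBeforeRetry)
			fmt.Printf("attempt %d failed, no retries permitted until %s: %v\n",
				attempt, notBefore.Format(time.RFC3339Nano), err)
			continue
		}
		fmt.Println("mounted")
		return
	}
}
```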
Nov 25 14:55:25 crc kubenswrapper[4806]: I1125 14:55:25.044993 4806 patch_prober.go:28] interesting pod/router-default-5444994796-kfst9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 25 14:55:25 crc kubenswrapper[4806]: [-]has-synced failed: reason withheld
Nov 25 14:55:25 crc kubenswrapper[4806]: [+]process-running ok
Nov 25 14:55:25 crc kubenswrapper[4806]: healthz check failed
Nov 25 14:55:25 crc kubenswrapper[4806]: I1125 14:55:25.045084 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kfst9" podUID="4e9e656c-2e2c-4ed4-b720-8fdb639a029d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
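The router's startup probe fails differently from the dial errors elsewhere in this log: the endpoint answers, but with HTTP 500 and an aggregated body listing each sub-check, [-] for failing and [+] for passing. A minimal sketch of that aggregation style (the check names come from the output above; the stub checks, handler, and port are assumptions, not the OpenShift router's actual code):

```go
// healthz_sketch.go - sketch of an aggregated healthz handler in the style
// of the router output above: one [+]/[-] line per check, HTTP 500 while
// any check fails.
package main

import (
	"fmt"
	"net/http"
)

type check struct {
	name string
	ok   func() bool
}

func main() {
	checks := []check{
		{"backend-http", func() bool { return false }},   // hypothetical stub
		{"has-synced", func() bool { return false }},     // hypothetical stub
		{"process-running", func() bool { return true }}, // hypothetical stub
	}
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		healthy := true
		body := ""
		for _, c := range checks {
			if c.ok() {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			} else {
				healthy = false
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
			}
		}
		if !healthy {
			body += "healthz check failed\n"
			w.WriteHeader(http.StatusInternalServerError)
		}
		fmt.Fprint(w, body)
	})
	http.ListenAndServe(":8080", nil) // the port is an arbitrary choice for the sketch
}
```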
[...]
Nov 25 14:55:25 crc kubenswrapper[4806]: I1125 14:55:25.394393 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-x92cw" event={"ID":"9a86e8c4-3e5a-4fd1-bad8-d314f474c4ee","Type":"ContainerStarted","Data":"88d0a649f588466ef141cc01b9604a13741dc0c45182628e1e69d5a658ea47f6"}
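This PLEG event is the turning point of the retry saga: a container of the hostpath provisioner's csi-hostpathplugin pod has started. Once the pod's registration sidecar (typically node-driver-registrar in CSI deployments) announces the driver over the kubelet's plugin-registration socket, the driver lands in the kubelet's registry and the failing mount/unmount operations can succeed on a later retry. A hedged helper sketch that polls the CSINode object until a driver appears (waitForCSIDriver is hypothetical; client construction as in the earlier sketch):

```go
// driverwait.go - sketch: poll until a CSI driver shows up in a node's
// CSINode object. Uses apimachinery's wait helpers (recent versions).
package driverwait

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForCSIDriver is a hypothetical helper, not part of the kubelet.
func waitForCSIDriver(cs kubernetes.Interface, node, driver string) error {
	return wait.PollUntilContextTimeout(context.Background(),
		2*time.Second, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			csiNode, err := cs.StorageV1().CSINodes().Get(ctx, node, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as "not yet" and keep polling
			}
			for _, d := range csiNode.Spec.Drivers {
				if d.Name == driver {
					return true, nil
				}
			}
			return false, nil
		})
}
```

For this log the call would look like waitForCSIDriver(cs, "crc", "kubevirt.io.hostpath-provisioner").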
[...]
Nov 25 14:55:26 crc kubenswrapper[4806]: I1125 14:55:26.051246 4806 patch_prober.go:28] interesting pod/router-default-5444994796-kfst9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 25 14:55:26 crc kubenswrapper[4806]: [-]has-synced failed: reason withheld
Nov 25 14:55:26 crc kubenswrapper[4806]: [+]process-running ok
Nov 25 14:55:26 crc kubenswrapper[4806]: healthz check failed
Nov 25 14:55:26 crc kubenswrapper[4806]: I1125 14:55:26.051296 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kfst9" podUID="4e9e656c-2e2c-4ed4-b720-8fdb639a029d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
[...]
Nov 25 14:55:26 crc kubenswrapper[4806]: I1125 14:55:26.545696 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sxhr5"]
Nov 25 14:55:26 crc kubenswrapper[4806]: I1125 14:55:26.546745 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sxhr5"
Need to start a new one" pod="openshift-marketplace/certified-operators-sxhr5" Nov 25 14:55:26 crc kubenswrapper[4806]: I1125 14:55:26.549112 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 25 14:55:26 crc kubenswrapper[4806]: I1125 14:55:26.557094 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:26 crc kubenswrapper[4806]: E1125 14:55:26.557584 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:27.057562897 +0000 UTC m=+159.709705318 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:26 crc kubenswrapper[4806]: I1125 14:55:26.569234 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sxhr5"] Nov 25 14:55:26 crc kubenswrapper[4806]: I1125 14:55:26.658606 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:26 crc kubenswrapper[4806]: I1125 14:55:26.658798 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-257pp\" (UniqueName: \"kubernetes.io/projected/87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27-kube-api-access-257pp\") pod \"certified-operators-sxhr5\" (UID: \"87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27\") " pod="openshift-marketplace/certified-operators-sxhr5" Nov 25 14:55:26 crc kubenswrapper[4806]: E1125 14:55:26.658874 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:27.158823317 +0000 UTC m=+159.810965748 (durationBeforeRetry 500ms). 
Nov 25 14:55:26 crc kubenswrapper[4806]: I1125 14:55:26.659589 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27-catalog-content\") pod \"certified-operators-sxhr5\" (UID: \"87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27\") " pod="openshift-marketplace/certified-operators-sxhr5"
Nov 25 14:55:26 crc kubenswrapper[4806]: I1125 14:55:26.659639 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27-utilities\") pod \"certified-operators-sxhr5\" (UID: \"87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27\") " pod="openshift-marketplace/certified-operators-sxhr5"
Nov 25 14:55:26 crc kubenswrapper[4806]: I1125 14:55:26.744883 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-g5jl6"]
Nov 25 14:55:26 crc kubenswrapper[4806]: I1125 14:55:26.746058 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g5jl6"
Need to start a new one" pod="openshift-marketplace/community-operators-g5jl6" Nov 25 14:55:26 crc kubenswrapper[4806]: W1125 14:55:26.747469 4806 reflector.go:561] object-"openshift-marketplace"/"community-operators-dockercfg-dmngl": failed to list *v1.Secret: secrets "community-operators-dockercfg-dmngl" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object Nov 25 14:55:26 crc kubenswrapper[4806]: E1125 14:55:26.747522 4806 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"community-operators-dockercfg-dmngl\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"community-operators-dockercfg-dmngl\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 14:55:26 crc kubenswrapper[4806]: I1125 14:55:26.755485 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g5jl6"] Nov 25 14:55:26 crc kubenswrapper[4806]: I1125 14:55:26.760341 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:26 crc kubenswrapper[4806]: E1125 14:55:26.760535 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:27.260503269 +0000 UTC m=+159.912645680 (durationBeforeRetry 500ms). 
Nov 25 14:55:26 crc kubenswrapper[4806]: I1125 14:55:26.760685 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27-catalog-content\") pod \"certified-operators-sxhr5\" (UID: \"87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27\") " pod="openshift-marketplace/certified-operators-sxhr5"
Nov 25 14:55:26 crc kubenswrapper[4806]: I1125 14:55:26.760731 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27-utilities\") pod \"certified-operators-sxhr5\" (UID: \"87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27\") " pod="openshift-marketplace/certified-operators-sxhr5"
Nov 25 14:55:26 crc kubenswrapper[4806]: I1125 14:55:26.760803 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-257pp\" (UniqueName: \"kubernetes.io/projected/87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27-kube-api-access-257pp\") pod \"certified-operators-sxhr5\" (UID: \"87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27\") " pod="openshift-marketplace/certified-operators-sxhr5"
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:26 crc kubenswrapper[4806]: I1125 14:55:26.761079 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27-catalog-content\") pod \"certified-operators-sxhr5\" (UID: \"87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27\") " pod="openshift-marketplace/certified-operators-sxhr5" Nov 25 14:55:26 crc kubenswrapper[4806]: I1125 14:55:26.761161 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27-utilities\") pod \"certified-operators-sxhr5\" (UID: \"87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27\") " pod="openshift-marketplace/certified-operators-sxhr5" Nov 25 14:55:26 crc kubenswrapper[4806]: I1125 14:55:26.798375 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-257pp\" (UniqueName: \"kubernetes.io/projected/87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27-kube-api-access-257pp\") pod \"certified-operators-sxhr5\" (UID: \"87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27\") " pod="openshift-marketplace/certified-operators-sxhr5" Nov 25 14:55:26 crc kubenswrapper[4806]: I1125 14:55:26.861457 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:26 crc kubenswrapper[4806]: I1125 14:55:26.861785 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8eb172a-99cc-46c1-9bd2-827dcb3da2c3-catalog-content\") pod \"community-operators-g5jl6\" (UID: \"a8eb172a-99cc-46c1-9bd2-827dcb3da2c3\") " pod="openshift-marketplace/community-operators-g5jl6" Nov 25 14:55:26 crc kubenswrapper[4806]: I1125 14:55:26.861854 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thqrn\" (UniqueName: \"kubernetes.io/projected/a8eb172a-99cc-46c1-9bd2-827dcb3da2c3-kube-api-access-thqrn\") pod \"community-operators-g5jl6\" (UID: \"a8eb172a-99cc-46c1-9bd2-827dcb3da2c3\") " pod="openshift-marketplace/community-operators-g5jl6" Nov 25 14:55:26 crc kubenswrapper[4806]: I1125 14:55:26.861884 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8eb172a-99cc-46c1-9bd2-827dcb3da2c3-utilities\") pod \"community-operators-g5jl6\" (UID: \"a8eb172a-99cc-46c1-9bd2-827dcb3da2c3\") " pod="openshift-marketplace/community-operators-g5jl6" Nov 25 14:55:26 crc kubenswrapper[4806]: E1125 14:55:26.862017 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-25 14:55:27.361996036 +0000 UTC m=+160.014138457 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:26 crc kubenswrapper[4806]: I1125 14:55:26.865475 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sxhr5" Nov 25 14:55:26 crc kubenswrapper[4806]: I1125 14:55:26.953457 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tlvq8"] Nov 25 14:55:26 crc kubenswrapper[4806]: I1125 14:55:26.954821 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tlvq8" Nov 25 14:55:26 crc kubenswrapper[4806]: I1125 14:55:26.962990 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:26 crc kubenswrapper[4806]: I1125 14:55:26.963130 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8eb172a-99cc-46c1-9bd2-827dcb3da2c3-catalog-content\") pod \"community-operators-g5jl6\" (UID: \"a8eb172a-99cc-46c1-9bd2-827dcb3da2c3\") " pod="openshift-marketplace/community-operators-g5jl6" Nov 25 14:55:26 crc kubenswrapper[4806]: I1125 14:55:26.963191 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thqrn\" (UniqueName: \"kubernetes.io/projected/a8eb172a-99cc-46c1-9bd2-827dcb3da2c3-kube-api-access-thqrn\") pod \"community-operators-g5jl6\" (UID: \"a8eb172a-99cc-46c1-9bd2-827dcb3da2c3\") " pod="openshift-marketplace/community-operators-g5jl6" Nov 25 14:55:26 crc kubenswrapper[4806]: I1125 14:55:26.963220 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8eb172a-99cc-46c1-9bd2-827dcb3da2c3-utilities\") pod \"community-operators-g5jl6\" (UID: \"a8eb172a-99cc-46c1-9bd2-827dcb3da2c3\") " pod="openshift-marketplace/community-operators-g5jl6" Nov 25 14:55:26 crc kubenswrapper[4806]: I1125 14:55:26.963631 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8eb172a-99cc-46c1-9bd2-827dcb3da2c3-catalog-content\") pod \"community-operators-g5jl6\" (UID: \"a8eb172a-99cc-46c1-9bd2-827dcb3da2c3\") " pod="openshift-marketplace/community-operators-g5jl6" Nov 25 14:55:26 crc kubenswrapper[4806]: I1125 14:55:26.963738 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8eb172a-99cc-46c1-9bd2-827dcb3da2c3-utilities\") pod \"community-operators-g5jl6\" (UID: \"a8eb172a-99cc-46c1-9bd2-827dcb3da2c3\") " pod="openshift-marketplace/community-operators-g5jl6" Nov 25 14:55:26 crc 
kubenswrapper[4806]: E1125 14:55:26.963927 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:27.463912965 +0000 UTC m=+160.116055376 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:26 crc kubenswrapper[4806]: I1125 14:55:26.971641 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tlvq8"] Nov 25 14:55:27 crc kubenswrapper[4806]: I1125 14:55:27.015303 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thqrn\" (UniqueName: \"kubernetes.io/projected/a8eb172a-99cc-46c1-9bd2-827dcb3da2c3-kube-api-access-thqrn\") pod \"community-operators-g5jl6\" (UID: \"a8eb172a-99cc-46c1-9bd2-827dcb3da2c3\") " pod="openshift-marketplace/community-operators-g5jl6" Nov 25 14:55:27 crc kubenswrapper[4806]: I1125 14:55:27.050654 4806 patch_prober.go:28] interesting pod/router-default-5444994796-kfst9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 14:55:27 crc kubenswrapper[4806]: [-]has-synced failed: reason withheld Nov 25 14:55:27 crc kubenswrapper[4806]: [+]process-running ok Nov 25 14:55:27 crc kubenswrapper[4806]: healthz check failed Nov 25 14:55:27 crc kubenswrapper[4806]: I1125 14:55:27.050746 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kfst9" podUID="4e9e656c-2e2c-4ed4-b720-8fdb639a029d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 14:55:27 crc kubenswrapper[4806]: I1125 14:55:27.093976 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:27 crc kubenswrapper[4806]: I1125 14:55:27.094300 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c38c71a-804c-42db-a65a-70b5fbe67b87-catalog-content\") pod \"certified-operators-tlvq8\" (UID: \"3c38c71a-804c-42db-a65a-70b5fbe67b87\") " pod="openshift-marketplace/certified-operators-tlvq8" Nov 25 14:55:27 crc kubenswrapper[4806]: I1125 14:55:27.094377 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c38c71a-804c-42db-a65a-70b5fbe67b87-utilities\") pod \"certified-operators-tlvq8\" (UID: \"3c38c71a-804c-42db-a65a-70b5fbe67b87\") " pod="openshift-marketplace/certified-operators-tlvq8" Nov 25 14:55:27 crc kubenswrapper[4806]: I1125 14:55:27.094475 4806 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkrxc\" (UniqueName: \"kubernetes.io/projected/3c38c71a-804c-42db-a65a-70b5fbe67b87-kube-api-access-tkrxc\") pod \"certified-operators-tlvq8\" (UID: \"3c38c71a-804c-42db-a65a-70b5fbe67b87\") " pod="openshift-marketplace/certified-operators-tlvq8" Nov 25 14:55:27 crc kubenswrapper[4806]: E1125 14:55:27.094605 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:27.594583688 +0000 UTC m=+160.246726109 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:27 crc kubenswrapper[4806]: I1125 14:55:27.153521 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jw8vn"] Nov 25 14:55:27 crc kubenswrapper[4806]: I1125 14:55:27.154872 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jw8vn" Nov 25 14:55:27 crc kubenswrapper[4806]: I1125 14:55:27.184626 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jw8vn"] Nov 25 14:55:27 crc kubenswrapper[4806]: I1125 14:55:27.195710 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkrxc\" (UniqueName: \"kubernetes.io/projected/3c38c71a-804c-42db-a65a-70b5fbe67b87-kube-api-access-tkrxc\") pod \"certified-operators-tlvq8\" (UID: \"3c38c71a-804c-42db-a65a-70b5fbe67b87\") " pod="openshift-marketplace/certified-operators-tlvq8" Nov 25 14:55:27 crc kubenswrapper[4806]: I1125 14:55:27.195766 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c38c71a-804c-42db-a65a-70b5fbe67b87-catalog-content\") pod \"certified-operators-tlvq8\" (UID: \"3c38c71a-804c-42db-a65a-70b5fbe67b87\") " pod="openshift-marketplace/certified-operators-tlvq8" Nov 25 14:55:27 crc kubenswrapper[4806]: I1125 14:55:27.195798 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c38c71a-804c-42db-a65a-70b5fbe67b87-utilities\") pod \"certified-operators-tlvq8\" (UID: \"3c38c71a-804c-42db-a65a-70b5fbe67b87\") " pod="openshift-marketplace/certified-operators-tlvq8" Nov 25 14:55:27 crc kubenswrapper[4806]: I1125 14:55:27.195818 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:27 crc kubenswrapper[4806]: E1125 14:55:27.196143 4806 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:27.696118225 +0000 UTC m=+160.348260636 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:27 crc kubenswrapper[4806]: I1125 14:55:27.197203 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c38c71a-804c-42db-a65a-70b5fbe67b87-catalog-content\") pod \"certified-operators-tlvq8\" (UID: \"3c38c71a-804c-42db-a65a-70b5fbe67b87\") " pod="openshift-marketplace/certified-operators-tlvq8" Nov 25 14:55:27 crc kubenswrapper[4806]: I1125 14:55:27.197545 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c38c71a-804c-42db-a65a-70b5fbe67b87-utilities\") pod \"certified-operators-tlvq8\" (UID: \"3c38c71a-804c-42db-a65a-70b5fbe67b87\") " pod="openshift-marketplace/certified-operators-tlvq8" Nov 25 14:55:27 crc kubenswrapper[4806]: I1125 14:55:27.228491 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkrxc\" (UniqueName: \"kubernetes.io/projected/3c38c71a-804c-42db-a65a-70b5fbe67b87-kube-api-access-tkrxc\") pod \"certified-operators-tlvq8\" (UID: \"3c38c71a-804c-42db-a65a-70b5fbe67b87\") " pod="openshift-marketplace/certified-operators-tlvq8" Nov 25 14:55:27 crc kubenswrapper[4806]: I1125 14:55:27.272673 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tlvq8" Nov 25 14:55:27 crc kubenswrapper[4806]: I1125 14:55:27.297305 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:27 crc kubenswrapper[4806]: I1125 14:55:27.297582 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz6vg\" (UniqueName: \"kubernetes.io/projected/9b92c54b-a219-4ef0-998a-e5a2bac20e0b-kube-api-access-wz6vg\") pod \"community-operators-jw8vn\" (UID: \"9b92c54b-a219-4ef0-998a-e5a2bac20e0b\") " pod="openshift-marketplace/community-operators-jw8vn" Nov 25 14:55:27 crc kubenswrapper[4806]: I1125 14:55:27.297682 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b92c54b-a219-4ef0-998a-e5a2bac20e0b-utilities\") pod \"community-operators-jw8vn\" (UID: \"9b92c54b-a219-4ef0-998a-e5a2bac20e0b\") " pod="openshift-marketplace/community-operators-jw8vn" Nov 25 14:55:27 crc kubenswrapper[4806]: I1125 14:55:27.297726 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b92c54b-a219-4ef0-998a-e5a2bac20e0b-catalog-content\") pod \"community-operators-jw8vn\" (UID: \"9b92c54b-a219-4ef0-998a-e5a2bac20e0b\") " pod="openshift-marketplace/community-operators-jw8vn" Nov 25 14:55:27 crc kubenswrapper[4806]: E1125 14:55:27.297883 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:27.797845749 +0000 UTC m=+160.449988160 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:27 crc kubenswrapper[4806]: I1125 14:55:27.350360 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sxhr5"] Nov 25 14:55:27 crc kubenswrapper[4806]: W1125 14:55:27.369913 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87e6bf46_e6fe_4b9f_abbc_d6cb7c682b27.slice/crio-05376d784e0fe097057cd7d1950158740a1053ff72b88cf11401c13960a2f395 WatchSource:0}: Error finding container 05376d784e0fe097057cd7d1950158740a1053ff72b88cf11401c13960a2f395: Status 404 returned error can't find the container with id 05376d784e0fe097057cd7d1950158740a1053ff72b88cf11401c13960a2f395 Nov 25 14:55:27 crc kubenswrapper[4806]: I1125 14:55:27.393741 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-hcfmr" Nov 25 14:55:27 crc kubenswrapper[4806]: I1125 14:55:27.402051 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:27 crc kubenswrapper[4806]: I1125 14:55:27.402097 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b92c54b-a219-4ef0-998a-e5a2bac20e0b-utilities\") pod \"community-operators-jw8vn\" (UID: \"9b92c54b-a219-4ef0-998a-e5a2bac20e0b\") " pod="openshift-marketplace/community-operators-jw8vn" Nov 25 14:55:27 crc kubenswrapper[4806]: I1125 14:55:27.402122 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b92c54b-a219-4ef0-998a-e5a2bac20e0b-catalog-content\") pod \"community-operators-jw8vn\" (UID: \"9b92c54b-a219-4ef0-998a-e5a2bac20e0b\") " pod="openshift-marketplace/community-operators-jw8vn" Nov 25 14:55:27 crc kubenswrapper[4806]: I1125 14:55:27.402165 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wz6vg\" (UniqueName: \"kubernetes.io/projected/9b92c54b-a219-4ef0-998a-e5a2bac20e0b-kube-api-access-wz6vg\") pod \"community-operators-jw8vn\" (UID: \"9b92c54b-a219-4ef0-998a-e5a2bac20e0b\") " pod="openshift-marketplace/community-operators-jw8vn" Nov 25 14:55:27 crc kubenswrapper[4806]: E1125 14:55:27.402680 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:27.902669703 +0000 UTC m=+160.554812114 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:27 crc kubenswrapper[4806]: I1125 14:55:27.403159 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b92c54b-a219-4ef0-998a-e5a2bac20e0b-utilities\") pod \"community-operators-jw8vn\" (UID: \"9b92c54b-a219-4ef0-998a-e5a2bac20e0b\") " pod="openshift-marketplace/community-operators-jw8vn" Nov 25 14:55:27 crc kubenswrapper[4806]: I1125 14:55:27.403394 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b92c54b-a219-4ef0-998a-e5a2bac20e0b-catalog-content\") pod \"community-operators-jw8vn\" (UID: \"9b92c54b-a219-4ef0-998a-e5a2bac20e0b\") " pod="openshift-marketplace/community-operators-jw8vn" Nov 25 14:55:27 crc kubenswrapper[4806]: I1125 14:55:27.410339 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sxhr5" event={"ID":"87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27","Type":"ContainerStarted","Data":"05376d784e0fe097057cd7d1950158740a1053ff72b88cf11401c13960a2f395"} Nov 25 14:55:27 crc kubenswrapper[4806]: I1125 14:55:27.478711 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wz6vg\" (UniqueName: \"kubernetes.io/projected/9b92c54b-a219-4ef0-998a-e5a2bac20e0b-kube-api-access-wz6vg\") pod \"community-operators-jw8vn\" (UID: \"9b92c54b-a219-4ef0-998a-e5a2bac20e0b\") " pod="openshift-marketplace/community-operators-jw8vn" Nov 25 14:55:27 crc kubenswrapper[4806]: I1125 14:55:27.504247 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:27 crc kubenswrapper[4806]: E1125 14:55:27.505753 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:28.005708855 +0000 UTC m=+160.657851276 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:27 crc kubenswrapper[4806]: I1125 14:55:27.610383 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:27 crc kubenswrapper[4806]: E1125 14:55:27.610858 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:28.110838609 +0000 UTC m=+160.762981020 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:27 crc kubenswrapper[4806]: I1125 14:55:27.711590 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:27 crc kubenswrapper[4806]: E1125 14:55:27.712022 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:28.211999566 +0000 UTC m=+160.864141987 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:27 crc kubenswrapper[4806]: I1125 14:55:27.761653 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tlvq8"] Nov 25 14:55:27 crc kubenswrapper[4806]: W1125 14:55:27.784565 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c38c71a_804c_42db_a65a_70b5fbe67b87.slice/crio-74ca13dd7d6443ccc68fb6e18590d694eca512cd5c4c86eea2ce3e0dc988f3ea WatchSource:0}: Error finding container 74ca13dd7d6443ccc68fb6e18590d694eca512cd5c4c86eea2ce3e0dc988f3ea: Status 404 returned error can't find the container with id 74ca13dd7d6443ccc68fb6e18590d694eca512cd5c4c86eea2ce3e0dc988f3ea Nov 25 14:55:27 crc kubenswrapper[4806]: I1125 14:55:27.813684 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:27 crc kubenswrapper[4806]: E1125 14:55:27.814066 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:28.314051259 +0000 UTC m=+160.966193670 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:27 crc kubenswrapper[4806]: I1125 14:55:27.914779 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:27 crc kubenswrapper[4806]: E1125 14:55:27.915004 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:28.414971059 +0000 UTC m=+161.067113480 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:27 crc kubenswrapper[4806]: I1125 14:55:27.915369 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:27 crc kubenswrapper[4806]: E1125 14:55:27.915713 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:28.41568666 +0000 UTC m=+161.067829071 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.016223 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:28 crc kubenswrapper[4806]: E1125 14:55:28.016426 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:28.516401363 +0000 UTC m=+161.168543774 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.016692 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:28 crc kubenswrapper[4806]: E1125 14:55:28.017040 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:28.517031022 +0000 UTC m=+161.169173433 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.046051 4806 patch_prober.go:28] interesting pod/router-default-5444994796-kfst9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 14:55:28 crc kubenswrapper[4806]: [-]has-synced failed: reason withheld Nov 25 14:55:28 crc kubenswrapper[4806]: [+]process-running ok Nov 25 14:55:28 crc kubenswrapper[4806]: healthz check failed Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.046136 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kfst9" podUID="4e9e656c-2e2c-4ed4-b720-8fdb639a029d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.067307 4806 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-marketplace/community-operators-g5jl6" secret="" err="failed to sync secret cache: timed out waiting for the condition" Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.067496 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g5jl6" Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.119248 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:28 crc kubenswrapper[4806]: E1125 14:55:28.119667 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:28.619648961 +0000 UTC m=+161.271791372 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.220863 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:28 crc kubenswrapper[4806]: E1125 14:55:28.222013 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:28.721990913 +0000 UTC m=+161.374133324 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.263716 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.272304 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jw8vn" Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.322107 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:28 crc kubenswrapper[4806]: E1125 14:55:28.322512 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:28.82248165 +0000 UTC m=+161.474624061 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.338155 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.342148 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.346740 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.347536 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.347708 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.423905 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.423959 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5d1ff40-c1d0-4be1-95ec-7da15553481f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c5d1ff40-c1d0-4be1-95ec-7da15553481f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.424000 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c5d1ff40-c1d0-4be1-95ec-7da15553481f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c5d1ff40-c1d0-4be1-95ec-7da15553481f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 
25 14:55:28 crc kubenswrapper[4806]: E1125 14:55:28.424347 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:28.924303647 +0000 UTC m=+161.576446058 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.427086 4806 generic.go:334] "Generic (PLEG): container finished" podID="eeac792f-d07c-446b-8dee-00f726ea273c" containerID="634d1250bfff81468d7902be16ec50a49c8d117c5155faaca6bf158cdb440fdf" exitCode=0 Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.427190 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-h6lh4" event={"ID":"eeac792f-d07c-446b-8dee-00f726ea273c","Type":"ContainerDied","Data":"634d1250bfff81468d7902be16ec50a49c8d117c5155faaca6bf158cdb440fdf"} Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.435925 4806 patch_prober.go:28] interesting pod/apiserver-76f77b778f-g6w68 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 25 14:55:28 crc kubenswrapper[4806]: [+]log ok Nov 25 14:55:28 crc kubenswrapper[4806]: [+]etcd ok Nov 25 14:55:28 crc kubenswrapper[4806]: [+]poststarthook/start-apiserver-admission-initializer ok Nov 25 14:55:28 crc kubenswrapper[4806]: [+]poststarthook/generic-apiserver-start-informers ok Nov 25 14:55:28 crc kubenswrapper[4806]: [+]poststarthook/max-in-flight-filter ok Nov 25 14:55:28 crc kubenswrapper[4806]: [+]poststarthook/storage-object-count-tracker-hook ok Nov 25 14:55:28 crc kubenswrapper[4806]: [+]poststarthook/image.openshift.io-apiserver-caches ok Nov 25 14:55:28 crc kubenswrapper[4806]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Nov 25 14:55:28 crc kubenswrapper[4806]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Nov 25 14:55:28 crc kubenswrapper[4806]: [+]poststarthook/project.openshift.io-projectcache ok Nov 25 14:55:28 crc kubenswrapper[4806]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Nov 25 14:55:28 crc kubenswrapper[4806]: [+]poststarthook/openshift.io-startinformers ok Nov 25 14:55:28 crc kubenswrapper[4806]: [+]poststarthook/openshift.io-restmapperupdater ok Nov 25 14:55:28 crc kubenswrapper[4806]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Nov 25 14:55:28 crc kubenswrapper[4806]: livez check failed Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.435980 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-g6w68" podUID="3a93da81-98cb-4a53-9c02-60cc144ebf9d" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.442960 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz" Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.447737 4806 generic.go:334] "Generic (PLEG): container finished" podID="3c38c71a-804c-42db-a65a-70b5fbe67b87" containerID="392aff047572a2ca4e6c5918d1c574e433adfeb2d3504f8312731dc9433c3276" exitCode=0 Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.447863 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlvq8" event={"ID":"3c38c71a-804c-42db-a65a-70b5fbe67b87","Type":"ContainerDied","Data":"392aff047572a2ca4e6c5918d1c574e433adfeb2d3504f8312731dc9433c3276"} Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.447900 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlvq8" event={"ID":"3c38c71a-804c-42db-a65a-70b5fbe67b87","Type":"ContainerStarted","Data":"74ca13dd7d6443ccc68fb6e18590d694eca512cd5c4c86eea2ce3e0dc988f3ea"} Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.452585 4806 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.457895 4806 generic.go:334] "Generic (PLEG): container finished" podID="87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27" containerID="560f68c4f7fcc1317956cf1927f99da275a4c7bb1c6e28a4b01325f756fcdbfc" exitCode=0 Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.457961 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sxhr5" event={"ID":"87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27","Type":"ContainerDied","Data":"560f68c4f7fcc1317956cf1927f99da275a4c7bb1c6e28a4b01325f756fcdbfc"} Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.491087 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g5jl6"] Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.547664 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.548675 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5d1ff40-c1d0-4be1-95ec-7da15553481f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c5d1ff40-c1d0-4be1-95ec-7da15553481f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.548800 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c5d1ff40-c1d0-4be1-95ec-7da15553481f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c5d1ff40-c1d0-4be1-95ec-7da15553481f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 14:55:28 crc kubenswrapper[4806]: E1125 14:55:28.551160 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:29.051134756 +0000 UTC m=+161.703277167 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.552734 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c5d1ff40-c1d0-4be1-95ec-7da15553481f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c5d1ff40-c1d0-4be1-95ec-7da15553481f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.601081 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5d1ff40-c1d0-4be1-95ec-7da15553481f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c5d1ff40-c1d0-4be1-95ec-7da15553481f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.660005 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:28 crc kubenswrapper[4806]: E1125 14:55:28.660625 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:29.160603027 +0000 UTC m=+161.812745488 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.670402 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.761118 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:28 crc kubenswrapper[4806]: E1125 14:55:28.761574 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:29.261551808 +0000 UTC m=+161.913694219 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.775837 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7jdkl"] Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.777397 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7jdkl" Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.782468 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.794233 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7jdkl"] Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.849064 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jw8vn"] Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.864424 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:28 crc kubenswrapper[4806]: E1125 14:55:28.864804 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:29.364791016 +0000 UTC m=+162.016933427 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.879974 4806 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx6dj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.880025 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xx6dj" podUID="f9b1a29e-c5b3-45fd-9082-b46293956184" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.880418 4806 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx6dj container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.880436 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-xx6dj" podUID="f9b1a29e-c5b3-45fd-9082-b46293956184" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.896950 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-p957m" Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.966031 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.966285 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28fa29ec-8177-41d4-bd11-9398fd0f2aa3-catalog-content\") pod \"redhat-marketplace-7jdkl\" (UID: \"28fa29ec-8177-41d4-bd11-9398fd0f2aa3\") " pod="openshift-marketplace/redhat-marketplace-7jdkl" Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.966347 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f47j\" (UniqueName: \"kubernetes.io/projected/28fa29ec-8177-41d4-bd11-9398fd0f2aa3-kube-api-access-6f47j\") pod \"redhat-marketplace-7jdkl\" (UID: \"28fa29ec-8177-41d4-bd11-9398fd0f2aa3\") " pod="openshift-marketplace/redhat-marketplace-7jdkl" Nov 25 14:55:28 crc kubenswrapper[4806]: I1125 14:55:28.966394 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/28fa29ec-8177-41d4-bd11-9398fd0f2aa3-utilities\") pod \"redhat-marketplace-7jdkl\" (UID: \"28fa29ec-8177-41d4-bd11-9398fd0f2aa3\") " pod="openshift-marketplace/redhat-marketplace-7jdkl" Nov 25 14:55:28 crc kubenswrapper[4806]: E1125 14:55:28.966578 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:29.46655321 +0000 UTC m=+162.118695621 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.041595 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-kfst9" Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.044728 4806 patch_prober.go:28] interesting pod/router-default-5444994796-kfst9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 14:55:29 crc kubenswrapper[4806]: [-]has-synced failed: reason withheld Nov 25 14:55:29 crc kubenswrapper[4806]: [+]process-running ok Nov 25 14:55:29 crc kubenswrapper[4806]: healthz check failed Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.044790 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kfst9" podUID="4e9e656c-2e2c-4ed4-b720-8fdb639a029d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.067824 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28fa29ec-8177-41d4-bd11-9398fd0f2aa3-catalog-content\") pod \"redhat-marketplace-7jdkl\" (UID: \"28fa29ec-8177-41d4-bd11-9398fd0f2aa3\") " pod="openshift-marketplace/redhat-marketplace-7jdkl" Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.067881 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f47j\" (UniqueName: \"kubernetes.io/projected/28fa29ec-8177-41d4-bd11-9398fd0f2aa3-kube-api-access-6f47j\") pod \"redhat-marketplace-7jdkl\" (UID: \"28fa29ec-8177-41d4-bd11-9398fd0f2aa3\") " pod="openshift-marketplace/redhat-marketplace-7jdkl" Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.067915 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.067945 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/28fa29ec-8177-41d4-bd11-9398fd0f2aa3-utilities\") pod \"redhat-marketplace-7jdkl\" (UID: \"28fa29ec-8177-41d4-bd11-9398fd0f2aa3\") " pod="openshift-marketplace/redhat-marketplace-7jdkl" Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.069284 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28fa29ec-8177-41d4-bd11-9398fd0f2aa3-utilities\") pod \"redhat-marketplace-7jdkl\" (UID: \"28fa29ec-8177-41d4-bd11-9398fd0f2aa3\") " pod="openshift-marketplace/redhat-marketplace-7jdkl" Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.070024 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28fa29ec-8177-41d4-bd11-9398fd0f2aa3-catalog-content\") pod \"redhat-marketplace-7jdkl\" (UID: \"28fa29ec-8177-41d4-bd11-9398fd0f2aa3\") " pod="openshift-marketplace/redhat-marketplace-7jdkl" Nov 25 14:55:29 crc kubenswrapper[4806]: E1125 14:55:29.072190 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:29.572168538 +0000 UTC m=+162.224310949 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.094471 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9shgk" Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.122540 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-9shgk" Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.132022 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-28dbr" Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.151365 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f47j\" (UniqueName: \"kubernetes.io/projected/28fa29ec-8177-41d4-bd11-9398fd0f2aa3-kube-api-access-6f47j\") pod \"redhat-marketplace-7jdkl\" (UID: \"28fa29ec-8177-41d4-bd11-9398fd0f2aa3\") " pod="openshift-marketplace/redhat-marketplace-7jdkl" Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.169835 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:29 crc kubenswrapper[4806]: E1125 14:55:29.171385 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-25 14:55:29.671360516 +0000 UTC m=+162.323502927 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.177423 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qb75t"] Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.179071 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qb75t" Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.191212 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qb75t"] Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.242255 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-k8p4x" Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.278840 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:29 crc kubenswrapper[4806]: E1125 14:55:29.279225 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:29.779212329 +0000 UTC m=+162.431354740 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.283416 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-6j244" Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.283462 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-6j244" Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.303211 4806 patch_prober.go:28] interesting pod/console-f9d7485db-6j244 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.303328 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-6j244" podUID="b8400987-b2f7-44fe-b1b3-8689c2465cd3" containerName="console" probeResult="failure" output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.346089 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.384008 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:29 crc kubenswrapper[4806]: E1125 14:55:29.384104 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:29.884083795 +0000 UTC m=+162.536226206 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.384643 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.384721 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtfsn\" (UniqueName: \"kubernetes.io/projected/7f706e15-3a27-484b-a558-c04a6897571b-kube-api-access-wtfsn\") pod \"redhat-marketplace-qb75t\" (UID: \"7f706e15-3a27-484b-a558-c04a6897571b\") " pod="openshift-marketplace/redhat-marketplace-qb75t" Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.384749 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f706e15-3a27-484b-a558-c04a6897571b-utilities\") pod \"redhat-marketplace-qb75t\" (UID: \"7f706e15-3a27-484b-a558-c04a6897571b\") " pod="openshift-marketplace/redhat-marketplace-qb75t" Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.384839 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f706e15-3a27-484b-a558-c04a6897571b-catalog-content\") pod \"redhat-marketplace-qb75t\" (UID: \"7f706e15-3a27-484b-a558-c04a6897571b\") " pod="openshift-marketplace/redhat-marketplace-qb75t" Nov 25 14:55:29 crc kubenswrapper[4806]: E1125 14:55:29.386453 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:29.886437564 +0000 UTC m=+162.538579975 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.435675 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7jdkl" Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.477100 4806 generic.go:334] "Generic (PLEG): container finished" podID="9b92c54b-a219-4ef0-998a-e5a2bac20e0b" containerID="ed09a9670684f7d57a0ac5a70399ce2ef87dcd52abe0de33a33a43b5dd2b9c0f" exitCode=0 Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.477550 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jw8vn" event={"ID":"9b92c54b-a219-4ef0-998a-e5a2bac20e0b","Type":"ContainerDied","Data":"ed09a9670684f7d57a0ac5a70399ce2ef87dcd52abe0de33a33a43b5dd2b9c0f"} Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.477774 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jw8vn" event={"ID":"9b92c54b-a219-4ef0-998a-e5a2bac20e0b","Type":"ContainerStarted","Data":"7cf83a056d87601e5871af69595576a503778e812dae73829db0db246326b07a"} Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.484082 4806 generic.go:334] "Generic (PLEG): container finished" podID="a8eb172a-99cc-46c1-9bd2-827dcb3da2c3" containerID="3dad9624be3468a34d67cf6ba51229e75daa856f8609db46e2a23188feb26338" exitCode=0 Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.484183 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g5jl6" event={"ID":"a8eb172a-99cc-46c1-9bd2-827dcb3da2c3","Type":"ContainerDied","Data":"3dad9624be3468a34d67cf6ba51229e75daa856f8609db46e2a23188feb26338"} Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.484224 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g5jl6" event={"ID":"a8eb172a-99cc-46c1-9bd2-827dcb3da2c3","Type":"ContainerStarted","Data":"28961cc4e4b1043cabfdff98a3a51ad04c973e03f000dc31513af3ea628fd506"} Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.485629 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:29 crc kubenswrapper[4806]: E1125 14:55:29.486139 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:29.986121507 +0000 UTC m=+162.638263918 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.486236 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.486264 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtfsn\" (UniqueName: \"kubernetes.io/projected/7f706e15-3a27-484b-a558-c04a6897571b-kube-api-access-wtfsn\") pod \"redhat-marketplace-qb75t\" (UID: \"7f706e15-3a27-484b-a558-c04a6897571b\") " pod="openshift-marketplace/redhat-marketplace-qb75t" Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.486284 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f706e15-3a27-484b-a558-c04a6897571b-utilities\") pod \"redhat-marketplace-qb75t\" (UID: \"7f706e15-3a27-484b-a558-c04a6897571b\") " pod="openshift-marketplace/redhat-marketplace-qb75t" Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.486334 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f706e15-3a27-484b-a558-c04a6897571b-catalog-content\") pod \"redhat-marketplace-qb75t\" (UID: \"7f706e15-3a27-484b-a558-c04a6897571b\") " pod="openshift-marketplace/redhat-marketplace-qb75t" Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.486741 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f706e15-3a27-484b-a558-c04a6897571b-catalog-content\") pod \"redhat-marketplace-qb75t\" (UID: \"7f706e15-3a27-484b-a558-c04a6897571b\") " pod="openshift-marketplace/redhat-marketplace-qb75t" Nov 25 14:55:29 crc kubenswrapper[4806]: E1125 14:55:29.487355 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:29.987338003 +0000 UTC m=+162.639480484 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.487567 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f706e15-3a27-484b-a558-c04a6897571b-utilities\") pod \"redhat-marketplace-qb75t\" (UID: \"7f706e15-3a27-484b-a558-c04a6897571b\") " pod="openshift-marketplace/redhat-marketplace-qb75t" Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.491702 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-x92cw" event={"ID":"9a86e8c4-3e5a-4fd1-bad8-d314f474c4ee","Type":"ContainerStarted","Data":"f33e574bbb7bb064e0c4f51fe5759a4ba03587b93c2415f966b4d4d14c9a0dc4"} Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.493983 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c5d1ff40-c1d0-4be1-95ec-7da15553481f","Type":"ContainerStarted","Data":"43897a5eb3389b9616c32442303042b8a5a7dd9f4289117606fdceeb431313e6"} Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.538452 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5tx2" Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.540883 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtfsn\" (UniqueName: \"kubernetes.io/projected/7f706e15-3a27-484b-a558-c04a6897571b-kube-api-access-wtfsn\") pod \"redhat-marketplace-qb75t\" (UID: \"7f706e15-3a27-484b-a558-c04a6897571b\") " pod="openshift-marketplace/redhat-marketplace-qb75t" Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.579821 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mvkmg" Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.593042 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mvkmg" Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.596958 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:29 crc kubenswrapper[4806]: E1125 14:55:29.597873 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:30.097844974 +0000 UTC m=+162.749987385 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.674650 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tx5m5" Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.698990 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:29 crc kubenswrapper[4806]: E1125 14:55:29.701517 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:30.201495154 +0000 UTC m=+162.853637575 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.796178 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-n942l"] Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.802027 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:29 crc kubenswrapper[4806]: E1125 14:55:29.802703 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:30.302680092 +0000 UTC m=+162.954822503 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.808108 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n942l" Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.811784 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.816732 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qb75t" Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.845735 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n942l"] Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.856873 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-gm728" Nov 25 14:55:29 crc kubenswrapper[4806]: I1125 14:55:29.909291 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:29 crc kubenswrapper[4806]: E1125 14:55:29.909640 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:30.409629178 +0000 UTC m=+163.061771579 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.021385 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.021668 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csg5g\" (UniqueName: \"kubernetes.io/projected/24692166-ec81-42ad-9887-f07eb242a4bc-kube-api-access-csg5g\") pod \"redhat-operators-n942l\" (UID: \"24692166-ec81-42ad-9887-f07eb242a4bc\") " pod="openshift-marketplace/redhat-operators-n942l" Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.021724 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24692166-ec81-42ad-9887-f07eb242a4bc-utilities\") pod \"redhat-operators-n942l\" (UID: \"24692166-ec81-42ad-9887-f07eb242a4bc\") " pod="openshift-marketplace/redhat-operators-n942l" Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.021805 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/24692166-ec81-42ad-9887-f07eb242a4bc-catalog-content\") pod \"redhat-operators-n942l\" (UID: \"24692166-ec81-42ad-9887-f07eb242a4bc\") " pod="openshift-marketplace/redhat-operators-n942l" Nov 25 14:55:30 crc kubenswrapper[4806]: E1125 14:55:30.023410 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:30.523362474 +0000 UTC m=+163.175504885 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.048831 4806 patch_prober.go:28] interesting pod/router-default-5444994796-kfst9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 14:55:30 crc kubenswrapper[4806]: [-]has-synced failed: reason withheld Nov 25 14:55:30 crc kubenswrapper[4806]: [+]process-running ok Nov 25 14:55:30 crc kubenswrapper[4806]: healthz check failed Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.048921 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kfst9" podUID="4e9e656c-2e2c-4ed4-b720-8fdb639a029d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.125712 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24692166-ec81-42ad-9887-f07eb242a4bc-catalog-content\") pod \"redhat-operators-n942l\" (UID: \"24692166-ec81-42ad-9887-f07eb242a4bc\") " pod="openshift-marketplace/redhat-operators-n942l" Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.125811 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csg5g\" (UniqueName: \"kubernetes.io/projected/24692166-ec81-42ad-9887-f07eb242a4bc-kube-api-access-csg5g\") pod \"redhat-operators-n942l\" (UID: \"24692166-ec81-42ad-9887-f07eb242a4bc\") " pod="openshift-marketplace/redhat-operators-n942l" Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.125842 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.125872 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24692166-ec81-42ad-9887-f07eb242a4bc-utilities\") pod \"redhat-operators-n942l\" (UID: \"24692166-ec81-42ad-9887-f07eb242a4bc\") " pod="openshift-marketplace/redhat-operators-n942l" Nov 25 14:55:30 crc 
kubenswrapper[4806]: I1125 14:55:30.128478 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24692166-ec81-42ad-9887-f07eb242a4bc-utilities\") pod \"redhat-operators-n942l\" (UID: \"24692166-ec81-42ad-9887-f07eb242a4bc\") " pod="openshift-marketplace/redhat-operators-n942l" Nov 25 14:55:30 crc kubenswrapper[4806]: E1125 14:55:30.128857 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:30.628840248 +0000 UTC m=+163.280982669 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.129697 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24692166-ec81-42ad-9887-f07eb242a4bc-catalog-content\") pod \"redhat-operators-n942l\" (UID: \"24692166-ec81-42ad-9887-f07eb242a4bc\") " pod="openshift-marketplace/redhat-operators-n942l" Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.155618 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kfmmb"] Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.158030 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kfmmb" Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.192356 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csg5g\" (UniqueName: \"kubernetes.io/projected/24692166-ec81-42ad-9887-f07eb242a4bc-kube-api-access-csg5g\") pod \"redhat-operators-n942l\" (UID: \"24692166-ec81-42ad-9887-f07eb242a4bc\") " pod="openshift-marketplace/redhat-operators-n942l" Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.198639 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kfmmb"] Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.228853 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:30 crc kubenswrapper[4806]: E1125 14:55:30.228975 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:30.728956454 +0000 UTC m=+163.381098865 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.228997 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e-catalog-content\") pod \"redhat-operators-kfmmb\" (UID: \"6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e\") " pod="openshift-marketplace/redhat-operators-kfmmb" Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.229021 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e-utilities\") pod \"redhat-operators-kfmmb\" (UID: \"6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e\") " pod="openshift-marketplace/redhat-operators-kfmmb" Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.229056 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.229087 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkrdr\" (UniqueName: \"kubernetes.io/projected/6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e-kube-api-access-pkrdr\") pod \"redhat-operators-kfmmb\" (UID: \"6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e\") " pod="openshift-marketplace/redhat-operators-kfmmb" Nov 25 14:55:30 crc kubenswrapper[4806]: E1125 14:55:30.229492 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:30.729481999 +0000 UTC m=+163.381624410 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.244171 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-h6lh4" Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.269699 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n942l" Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.270533 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7jdkl"] Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.291733 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qb75t"] Nov 25 14:55:30 crc kubenswrapper[4806]: W1125 14:55:30.328419 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f706e15_3a27_484b_a558_c04a6897571b.slice/crio-49e1c0bd94494b8fcedfc3709dc29a452ed4d30045a25ed3709d91ce72a6490e WatchSource:0}: Error finding container 49e1c0bd94494b8fcedfc3709dc29a452ed4d30045a25ed3709d91ce72a6490e: Status 404 returned error can't find the container with id 49e1c0bd94494b8fcedfc3709dc29a452ed4d30045a25ed3709d91ce72a6490e Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.330178 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:30 crc kubenswrapper[4806]: E1125 14:55:30.330408 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:30.830375518 +0000 UTC m=+163.482517939 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.330502 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eeac792f-d07c-446b-8dee-00f726ea273c-secret-volume\") pod \"eeac792f-d07c-446b-8dee-00f726ea273c\" (UID: \"eeac792f-d07c-446b-8dee-00f726ea273c\") " Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.330547 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eeac792f-d07c-446b-8dee-00f726ea273c-config-volume\") pod \"eeac792f-d07c-446b-8dee-00f726ea273c\" (UID: \"eeac792f-d07c-446b-8dee-00f726ea273c\") " Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.330616 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-277xv\" (UniqueName: \"kubernetes.io/projected/eeac792f-d07c-446b-8dee-00f726ea273c-kube-api-access-277xv\") pod \"eeac792f-d07c-446b-8dee-00f726ea273c\" (UID: \"eeac792f-d07c-446b-8dee-00f726ea273c\") " Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.330933 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e-catalog-content\") pod 
\"redhat-operators-kfmmb\" (UID: \"6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e\") " pod="openshift-marketplace/redhat-operators-kfmmb" Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.330980 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e-utilities\") pod \"redhat-operators-kfmmb\" (UID: \"6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e\") " pod="openshift-marketplace/redhat-operators-kfmmb" Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.331046 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.331090 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkrdr\" (UniqueName: \"kubernetes.io/projected/6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e-kube-api-access-pkrdr\") pod \"redhat-operators-kfmmb\" (UID: \"6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e\") " pod="openshift-marketplace/redhat-operators-kfmmb" Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.331451 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eeac792f-d07c-446b-8dee-00f726ea273c-config-volume" (OuterVolumeSpecName: "config-volume") pod "eeac792f-d07c-446b-8dee-00f726ea273c" (UID: "eeac792f-d07c-446b-8dee-00f726ea273c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.331906 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e-utilities\") pod \"redhat-operators-kfmmb\" (UID: \"6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e\") " pod="openshift-marketplace/redhat-operators-kfmmb" Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.332003 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e-catalog-content\") pod \"redhat-operators-kfmmb\" (UID: \"6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e\") " pod="openshift-marketplace/redhat-operators-kfmmb" Nov 25 14:55:30 crc kubenswrapper[4806]: E1125 14:55:30.332382 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:30.832310215 +0000 UTC m=+163.484452706 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.337790 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eeac792f-d07c-446b-8dee-00f726ea273c-kube-api-access-277xv" (OuterVolumeSpecName: "kube-api-access-277xv") pod "eeac792f-d07c-446b-8dee-00f726ea273c" (UID: "eeac792f-d07c-446b-8dee-00f726ea273c"). InnerVolumeSpecName "kube-api-access-277xv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.341589 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eeac792f-d07c-446b-8dee-00f726ea273c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "eeac792f-d07c-446b-8dee-00f726ea273c" (UID: "eeac792f-d07c-446b-8dee-00f726ea273c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.341922 4806 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.352970 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkrdr\" (UniqueName: \"kubernetes.io/projected/6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e-kube-api-access-pkrdr\") pod \"redhat-operators-kfmmb\" (UID: \"6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e\") " pod="openshift-marketplace/redhat-operators-kfmmb" Nov 25 14:55:30 crc kubenswrapper[4806]: E1125 14:55:30.431961 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:30.931941317 +0000 UTC m=+163.584083728 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.431884 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.432203 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.432447 4806 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eeac792f-d07c-446b-8dee-00f726ea273c-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.432459 4806 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eeac792f-d07c-446b-8dee-00f726ea273c-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.432469 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-277xv\" (UniqueName: \"kubernetes.io/projected/eeac792f-d07c-446b-8dee-00f726ea273c-kube-api-access-277xv\") on node \"crc\" DevicePath \"\"" Nov 25 14:55:30 crc kubenswrapper[4806]: E1125 14:55:30.432538 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:55:30.932531864 +0000 UTC m=+163.584674275 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-576cp" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.436143 4806 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-25T14:55:30.341942468Z","Handler":null,"Name":""} Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.441403 4806 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.441443 4806 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.519858 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kfmmb" Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.533365 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.541287 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.549248 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-x92cw" event={"ID":"9a86e8c4-3e5a-4fd1-bad8-d314f474c4ee","Type":"ContainerStarted","Data":"8e7f53f04b899937b7c8a9645485c7408906e9524db0545224b7487d51152994"} Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.554090 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7jdkl" event={"ID":"28fa29ec-8177-41d4-bd11-9398fd0f2aa3","Type":"ContainerStarted","Data":"e0b1ee3d49239619203271499da2179148ad1925ed654ea149f6affa68e88fbd"} Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.574135 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qb75t" event={"ID":"7f706e15-3a27-484b-a558-c04a6897571b","Type":"ContainerStarted","Data":"49e1c0bd94494b8fcedfc3709dc29a452ed4d30045a25ed3709d91ce72a6490e"} Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.585040 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 25 14:55:30 crc kubenswrapper[4806]: E1125 14:55:30.585881 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeac792f-d07c-446b-8dee-00f726ea273c" containerName="collect-profiles" Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.585902 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeac792f-d07c-446b-8dee-00f726ea273c" containerName="collect-profiles" Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.586066 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="eeac792f-d07c-446b-8dee-00f726ea273c" containerName="collect-profiles" Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.586688 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.588892 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.589213 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.625890 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-h6lh4" Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.631498 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.631583 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n942l"] Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.631608 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c5d1ff40-c1d0-4be1-95ec-7da15553481f","Type":"ContainerStarted","Data":"3b9840a56fcf48b54008ddbada5a5c0acaee93599ac4741c24afd6b3aea49382"} Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.631635 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-h6lh4" event={"ID":"eeac792f-d07c-446b-8dee-00f726ea273c","Type":"ContainerDied","Data":"320da600ad7fe5a80dd6fd88bfc751e9c5c24ec0b9c46205a67fd40caadd2ef9"} Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.631652 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="320da600ad7fe5a80dd6fd88bfc751e9c5c24ec0b9c46205a67fd40caadd2ef9" Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.635959 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.646496 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.6464778989999997 podStartE2EDuration="2.646477899s" podCreationTimestamp="2025-11-25 14:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:30.643310376 +0000 UTC m=+163.295452797" watchObservedRunningTime="2025-11-25 14:55:30.646477899 +0000 UTC m=+163.298620310" Nov 25 14:55:30 crc kubenswrapper[4806]: W1125 14:55:30.652597 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24692166_ec81_42ad_9887_f07eb242a4bc.slice/crio-493af9e254ca40e661cf0720bcb4bb7f15d6e418895a360d4aba1a72951d1186 WatchSource:0}: Error finding container 493af9e254ca40e661cf0720bcb4bb7f15d6e418895a360d4aba1a72951d1186: Status 404 returned error can't find the container with id 493af9e254ca40e661cf0720bcb4bb7f15d6e418895a360d4aba1a72951d1186 Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.654820 4806 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.654885 4806 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-576cp"
Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.735165 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-576cp\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") " pod="openshift-image-registry/image-registry-697d97f7c8-576cp"
Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.749074 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-576cp"
Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.755877 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2a0fb925-8b3c-493a-9f05-a35a9f7be868-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"2a0fb925-8b3c-493a-9f05-a35a9f7be868\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.755974 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a0fb925-8b3c-493a-9f05-a35a9f7be868-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"2a0fb925-8b3c-493a-9f05-a35a9f7be868\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.857557 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2a0fb925-8b3c-493a-9f05-a35a9f7be868-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"2a0fb925-8b3c-493a-9f05-a35a9f7be868\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.857612 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a0fb925-8b3c-493a-9f05-a35a9f7be868-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"2a0fb925-8b3c-493a-9f05-a35a9f7be868\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.858052 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2a0fb925-8b3c-493a-9f05-a35a9f7be868-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"2a0fb925-8b3c-493a-9f05-a35a9f7be868\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.881265 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kfmmb"]
Nov 25 14:55:30 crc kubenswrapper[4806]: W1125 14:55:30.891297 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a4ea0eb_5662_4e5b_a20b_7528dcbe5c7e.slice/crio-f876e8204f13c003190c35cbe0b9fe3bdc2d93db3ca91de5067a7dbe720f0b72 WatchSource:0}: Error finding container f876e8204f13c003190c35cbe0b9fe3bdc2d93db3ca91de5067a7dbe720f0b72: Status 404 returned error can't find the container with id f876e8204f13c003190c35cbe0b9fe3bdc2d93db3ca91de5067a7dbe720f0b72
Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.897811 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a0fb925-8b3c-493a-9f05-a35a9f7be868-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"2a0fb925-8b3c-493a-9f05-a35a9f7be868\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 25 14:55:30 crc kubenswrapper[4806]: I1125 14:55:30.922938 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Nov 25 14:55:31 crc kubenswrapper[4806]: I1125 14:55:31.046104 4806 patch_prober.go:28] interesting pod/router-default-5444994796-kfst9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 25 14:55:31 crc kubenswrapper[4806]: [-]has-synced failed: reason withheld
Nov 25 14:55:31 crc kubenswrapper[4806]: [+]process-running ok
Nov 25 14:55:31 crc kubenswrapper[4806]: healthz check failed
Nov 25 14:55:31 crc kubenswrapper[4806]: I1125 14:55:31.046361 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kfst9" podUID="4e9e656c-2e2c-4ed4-b720-8fdb639a029d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 25 14:55:31 crc kubenswrapper[4806]: I1125 14:55:31.064302 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/49e22ad0-2903-4ed0-94ad-40d713f99c9f-metrics-certs\") pod \"network-metrics-daemon-lsrxh\" (UID: \"49e22ad0-2903-4ed0-94ad-40d713f99c9f\") " pod="openshift-multus/network-metrics-daemon-lsrxh"
Nov 25 14:55:31 crc kubenswrapper[4806]: I1125 14:55:31.072219 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-576cp"]
Nov 25 14:55:31 crc kubenswrapper[4806]: I1125 14:55:31.073826 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/49e22ad0-2903-4ed0-94ad-40d713f99c9f-metrics-certs\") pod \"network-metrics-daemon-lsrxh\" (UID: \"49e22ad0-2903-4ed0-94ad-40d713f99c9f\") " pod="openshift-multus/network-metrics-daemon-lsrxh"
Nov 25 14:55:31 crc kubenswrapper[4806]: I1125 14:55:31.117708 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh"
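The router probe output above shows the aggregated healthz format: one [+]/[-] line per named check, reasons withheld from the probe client, and an overall 500 while any check fails. A rough stdlib-only Go sketch in the same spirit; this is not the router's actual handler, and the check names and port are copied from the log purely for illustration:

    package main

    import (
        "fmt"
        "net/http"
    )

    type check struct {
        name string
        run  func() error
    }

    // healthzHandler aggregates named checks into the [+]/[-] body format
    // seen in the probe output, returning 500 if any check fails.
    func healthzHandler(checks []check) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            failed := false
            body := ""
            for _, c := range checks {
                if err := c.run(); err != nil {
                    failed = true
                    // The underlying reason is withheld from the caller.
                    body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
                } else {
                    body += fmt.Sprintf("[+]%s ok\n", c.name)
                }
            }
            if failed {
                body += "healthz check failed\n"
                w.WriteHeader(http.StatusInternalServerError) // the 500 the probe reports
            }
            fmt.Fprint(w, body)
        }
    }

    func main() {
        checks := []check{
            {"backend-http", func() error { return fmt.Errorf("no backends ready") }},
            {"has-synced", func() error { return fmt.Errorf("initial sync pending") }},
            {"process-running", func() error { return nil }},
        }
        http.HandleFunc("/healthz", healthzHandler(checks))
        http.ListenAndServe(":1936", nil)
    }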
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lsrxh" Nov 25 14:55:31 crc kubenswrapper[4806]: W1125 14:55:31.141056 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7c6a7c5_103e_4287_8e86_a7dbf2b48daf.slice/crio-5c80833af39e7e12665256aab8990df950e3fa811e931f6fa5009e2ee1a19097 WatchSource:0}: Error finding container 5c80833af39e7e12665256aab8990df950e3fa811e931f6fa5009e2ee1a19097: Status 404 returned error can't find the container with id 5c80833af39e7e12665256aab8990df950e3fa811e931f6fa5009e2ee1a19097 Nov 25 14:55:31 crc kubenswrapper[4806]: I1125 14:55:31.198645 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 25 14:55:31 crc kubenswrapper[4806]: I1125 14:55:31.391981 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-lsrxh"] Nov 25 14:55:31 crc kubenswrapper[4806]: W1125 14:55:31.417095 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49e22ad0_2903_4ed0_94ad_40d713f99c9f.slice/crio-67859cd120ed7ea42678a94b8dfc6fba9cc0ea67a2450b3bf26a01dca33de47a WatchSource:0}: Error finding container 67859cd120ed7ea42678a94b8dfc6fba9cc0ea67a2450b3bf26a01dca33de47a: Status 404 returned error can't find the container with id 67859cd120ed7ea42678a94b8dfc6fba9cc0ea67a2450b3bf26a01dca33de47a Nov 25 14:55:31 crc kubenswrapper[4806]: I1125 14:55:31.643267 4806 generic.go:334] "Generic (PLEG): container finished" podID="28fa29ec-8177-41d4-bd11-9398fd0f2aa3" containerID="89f2b75a0b5d013e7677635f260429cba076c65ee450c83625ddfa39d9719e5a" exitCode=0 Nov 25 14:55:31 crc kubenswrapper[4806]: I1125 14:55:31.643470 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7jdkl" event={"ID":"28fa29ec-8177-41d4-bd11-9398fd0f2aa3","Type":"ContainerDied","Data":"89f2b75a0b5d013e7677635f260429cba076c65ee450c83625ddfa39d9719e5a"} Nov 25 14:55:31 crc kubenswrapper[4806]: I1125 14:55:31.670962 4806 generic.go:334] "Generic (PLEG): container finished" podID="c5d1ff40-c1d0-4be1-95ec-7da15553481f" containerID="3b9840a56fcf48b54008ddbada5a5c0acaee93599ac4741c24afd6b3aea49382" exitCode=0 Nov 25 14:55:31 crc kubenswrapper[4806]: I1125 14:55:31.671066 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c5d1ff40-c1d0-4be1-95ec-7da15553481f","Type":"ContainerDied","Data":"3b9840a56fcf48b54008ddbada5a5c0acaee93599ac4741c24afd6b3aea49382"} Nov 25 14:55:31 crc kubenswrapper[4806]: I1125 14:55:31.675868 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2a0fb925-8b3c-493a-9f05-a35a9f7be868","Type":"ContainerStarted","Data":"039057a867d1a4db7e9a58fe81281ee15aa9b894ece519af8767713c0e315612"} Nov 25 14:55:31 crc kubenswrapper[4806]: I1125 14:55:31.677265 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lsrxh" event={"ID":"49e22ad0-2903-4ed0-94ad-40d713f99c9f","Type":"ContainerStarted","Data":"67859cd120ed7ea42678a94b8dfc6fba9cc0ea67a2450b3bf26a01dca33de47a"} Nov 25 14:55:31 crc kubenswrapper[4806]: I1125 14:55:31.683609 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-576cp" 
event={"ID":"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf","Type":"ContainerStarted","Data":"5c80833af39e7e12665256aab8990df950e3fa811e931f6fa5009e2ee1a19097"} Nov 25 14:55:31 crc kubenswrapper[4806]: I1125 14:55:31.698518 4806 generic.go:334] "Generic (PLEG): container finished" podID="24692166-ec81-42ad-9887-f07eb242a4bc" containerID="c01395d597f8f6098a83debfac21ea5ab750f3bd886fe7156c06b0c5a08879d1" exitCode=0 Nov 25 14:55:31 crc kubenswrapper[4806]: I1125 14:55:31.698647 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n942l" event={"ID":"24692166-ec81-42ad-9887-f07eb242a4bc","Type":"ContainerDied","Data":"c01395d597f8f6098a83debfac21ea5ab750f3bd886fe7156c06b0c5a08879d1"} Nov 25 14:55:31 crc kubenswrapper[4806]: I1125 14:55:31.698679 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n942l" event={"ID":"24692166-ec81-42ad-9887-f07eb242a4bc","Type":"ContainerStarted","Data":"493af9e254ca40e661cf0720bcb4bb7f15d6e418895a360d4aba1a72951d1186"} Nov 25 14:55:31 crc kubenswrapper[4806]: I1125 14:55:31.706534 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-x92cw" event={"ID":"9a86e8c4-3e5a-4fd1-bad8-d314f474c4ee","Type":"ContainerStarted","Data":"3bc8f0ded0a448c949a7622e58094e3cc90eb74b3ec8b7f7f820f566701b4148"} Nov 25 14:55:31 crc kubenswrapper[4806]: I1125 14:55:31.709234 4806 generic.go:334] "Generic (PLEG): container finished" podID="7f706e15-3a27-484b-a558-c04a6897571b" containerID="ddedbb4922eef5deee7bcd91c4f05063dee053c84379c1a0c6349a168ef658ae" exitCode=0 Nov 25 14:55:31 crc kubenswrapper[4806]: I1125 14:55:31.709273 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qb75t" event={"ID":"7f706e15-3a27-484b-a558-c04a6897571b","Type":"ContainerDied","Data":"ddedbb4922eef5deee7bcd91c4f05063dee053c84379c1a0c6349a168ef658ae"} Nov 25 14:55:31 crc kubenswrapper[4806]: I1125 14:55:31.711118 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kfmmb" event={"ID":"6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e","Type":"ContainerStarted","Data":"f876e8204f13c003190c35cbe0b9fe3bdc2d93db3ca91de5067a7dbe720f0b72"} Nov 25 14:55:32 crc kubenswrapper[4806]: I1125 14:55:32.045279 4806 patch_prober.go:28] interesting pod/router-default-5444994796-kfst9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 14:55:32 crc kubenswrapper[4806]: [-]has-synced failed: reason withheld Nov 25 14:55:32 crc kubenswrapper[4806]: [+]process-running ok Nov 25 14:55:32 crc kubenswrapper[4806]: healthz check failed Nov 25 14:55:32 crc kubenswrapper[4806]: I1125 14:55:32.045381 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kfst9" podUID="4e9e656c-2e2c-4ed4-b720-8fdb639a029d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 14:55:32 crc kubenswrapper[4806]: I1125 14:55:32.101524 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 25 14:55:32 crc kubenswrapper[4806]: I1125 14:55:32.720722 4806 generic.go:334] "Generic (PLEG): container finished" podID="6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e" 
containerID="51bb7e0026bf2504fc166c7414e205aef1eafb8c21ee2adb5286ffdfa4a304b8" exitCode=0 Nov 25 14:55:32 crc kubenswrapper[4806]: I1125 14:55:32.720799 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kfmmb" event={"ID":"6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e","Type":"ContainerDied","Data":"51bb7e0026bf2504fc166c7414e205aef1eafb8c21ee2adb5286ffdfa4a304b8"} Nov 25 14:55:32 crc kubenswrapper[4806]: I1125 14:55:32.965177 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 14:55:33 crc kubenswrapper[4806]: I1125 14:55:33.020617 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5d1ff40-c1d0-4be1-95ec-7da15553481f-kube-api-access\") pod \"c5d1ff40-c1d0-4be1-95ec-7da15553481f\" (UID: \"c5d1ff40-c1d0-4be1-95ec-7da15553481f\") " Nov 25 14:55:33 crc kubenswrapper[4806]: I1125 14:55:33.020754 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c5d1ff40-c1d0-4be1-95ec-7da15553481f-kubelet-dir\") pod \"c5d1ff40-c1d0-4be1-95ec-7da15553481f\" (UID: \"c5d1ff40-c1d0-4be1-95ec-7da15553481f\") " Nov 25 14:55:33 crc kubenswrapper[4806]: I1125 14:55:33.021212 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5d1ff40-c1d0-4be1-95ec-7da15553481f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c5d1ff40-c1d0-4be1-95ec-7da15553481f" (UID: "c5d1ff40-c1d0-4be1-95ec-7da15553481f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:55:33 crc kubenswrapper[4806]: I1125 14:55:33.023951 4806 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c5d1ff40-c1d0-4be1-95ec-7da15553481f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 25 14:55:33 crc kubenswrapper[4806]: I1125 14:55:33.027258 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5d1ff40-c1d0-4be1-95ec-7da15553481f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5d1ff40-c1d0-4be1-95ec-7da15553481f" (UID: "c5d1ff40-c1d0-4be1-95ec-7da15553481f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:55:33 crc kubenswrapper[4806]: I1125 14:55:33.043323 4806 patch_prober.go:28] interesting pod/router-default-5444994796-kfst9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 14:55:33 crc kubenswrapper[4806]: [-]has-synced failed: reason withheld Nov 25 14:55:33 crc kubenswrapper[4806]: [+]process-running ok Nov 25 14:55:33 crc kubenswrapper[4806]: healthz check failed Nov 25 14:55:33 crc kubenswrapper[4806]: I1125 14:55:33.043402 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kfst9" podUID="4e9e656c-2e2c-4ed4-b720-8fdb639a029d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 14:55:33 crc kubenswrapper[4806]: I1125 14:55:33.125713 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5d1ff40-c1d0-4be1-95ec-7da15553481f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 14:55:33 crc kubenswrapper[4806]: I1125 14:55:33.311230 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-g6w68" Nov 25 14:55:33 crc kubenswrapper[4806]: I1125 14:55:33.317200 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-g6w68" Nov 25 14:55:33 crc kubenswrapper[4806]: I1125 14:55:33.730810 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"c5d1ff40-c1d0-4be1-95ec-7da15553481f","Type":"ContainerDied","Data":"43897a5eb3389b9616c32442303042b8a5a7dd9f4289117606fdceeb431313e6"} Nov 25 14:55:33 crc kubenswrapper[4806]: I1125 14:55:33.730848 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 14:55:33 crc kubenswrapper[4806]: I1125 14:55:33.730861 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43897a5eb3389b9616c32442303042b8a5a7dd9f4289117606fdceeb431313e6" Nov 25 14:55:33 crc kubenswrapper[4806]: I1125 14:55:33.732471 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2a0fb925-8b3c-493a-9f05-a35a9f7be868","Type":"ContainerStarted","Data":"1de2bf73834acdb23b078f12f8fde69acc356ce67eaa089b7281a8f4f74c6fdd"} Nov 25 14:55:33 crc kubenswrapper[4806]: I1125 14:55:33.733751 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-576cp" event={"ID":"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf","Type":"ContainerStarted","Data":"461abb528b6fea8b43ea03ea42cad59b45549d5570014393b372fab679cb1901"} Nov 25 14:55:33 crc kubenswrapper[4806]: I1125 14:55:33.752529 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-x92cw" podStartSLOduration=17.752509414 podStartE2EDuration="17.752509414s" podCreationTimestamp="2025-11-25 14:55:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:33.752197445 +0000 UTC m=+166.404339866" watchObservedRunningTime="2025-11-25 14:55:33.752509414 +0000 UTC m=+166.404651825" Nov 25 14:55:34 crc kubenswrapper[4806]: I1125 14:55:34.043719 4806 patch_prober.go:28] interesting pod/router-default-5444994796-kfst9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 14:55:34 crc kubenswrapper[4806]: [-]has-synced failed: reason withheld Nov 25 14:55:34 crc kubenswrapper[4806]: [+]process-running ok Nov 25 14:55:34 crc kubenswrapper[4806]: healthz check failed Nov 25 14:55:34 crc kubenswrapper[4806]: I1125 14:55:34.043786 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kfst9" podUID="4e9e656c-2e2c-4ed4-b720-8fdb639a029d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 14:55:34 crc kubenswrapper[4806]: I1125 14:55:34.740927 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lsrxh" event={"ID":"49e22ad0-2903-4ed0-94ad-40d713f99c9f","Type":"ContainerStarted","Data":"c9f0c2add46a9c43b9760028ec38b03eed328db4311800a59996c8aab3600c71"} Nov 25 14:55:34 crc kubenswrapper[4806]: I1125 14:55:34.761297 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-cszqz" Nov 25 14:55:35 crc kubenswrapper[4806]: I1125 14:55:35.043942 4806 patch_prober.go:28] interesting pod/router-default-5444994796-kfst9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 14:55:35 crc kubenswrapper[4806]: [-]has-synced failed: reason withheld Nov 25 14:55:35 crc kubenswrapper[4806]: [+]process-running ok Nov 25 14:55:35 crc kubenswrapper[4806]: healthz check failed Nov 25 14:55:35 crc kubenswrapper[4806]: I1125 14:55:35.044039 4806 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-kfst9" podUID="4e9e656c-2e2c-4ed4-b720-8fdb639a029d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 14:55:35 crc kubenswrapper[4806]: I1125 14:55:35.748984 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:35 crc kubenswrapper[4806]: I1125 14:55:35.767129 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=5.76711056 podStartE2EDuration="5.76711056s" podCreationTimestamp="2025-11-25 14:55:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:35.764204545 +0000 UTC m=+168.416346986" watchObservedRunningTime="2025-11-25 14:55:35.76711056 +0000 UTC m=+168.419252971" Nov 25 14:55:35 crc kubenswrapper[4806]: I1125 14:55:35.791394 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-576cp" podStartSLOduration=147.791366831 podStartE2EDuration="2m27.791366831s" podCreationTimestamp="2025-11-25 14:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:35.786354254 +0000 UTC m=+168.438496745" watchObservedRunningTime="2025-11-25 14:55:35.791366831 +0000 UTC m=+168.443509242" Nov 25 14:55:36 crc kubenswrapper[4806]: I1125 14:55:36.044853 4806 patch_prober.go:28] interesting pod/router-default-5444994796-kfst9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 14:55:36 crc kubenswrapper[4806]: [-]has-synced failed: reason withheld Nov 25 14:55:36 crc kubenswrapper[4806]: [+]process-running ok Nov 25 14:55:36 crc kubenswrapper[4806]: healthz check failed Nov 25 14:55:36 crc kubenswrapper[4806]: I1125 14:55:36.044943 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kfst9" podUID="4e9e656c-2e2c-4ed4-b720-8fdb639a029d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 14:55:36 crc kubenswrapper[4806]: I1125 14:55:36.764556 4806 generic.go:334] "Generic (PLEG): container finished" podID="2a0fb925-8b3c-493a-9f05-a35a9f7be868" containerID="1de2bf73834acdb23b078f12f8fde69acc356ce67eaa089b7281a8f4f74c6fdd" exitCode=0 Nov 25 14:55:36 crc kubenswrapper[4806]: I1125 14:55:36.764641 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2a0fb925-8b3c-493a-9f05-a35a9f7be868","Type":"ContainerDied","Data":"1de2bf73834acdb23b078f12f8fde69acc356ce67eaa089b7281a8f4f74c6fdd"} Nov 25 14:55:37 crc kubenswrapper[4806]: I1125 14:55:37.043647 4806 patch_prober.go:28] interesting pod/router-default-5444994796-kfst9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 14:55:37 crc kubenswrapper[4806]: [-]has-synced failed: reason withheld Nov 25 14:55:37 crc kubenswrapper[4806]: [+]process-running ok Nov 25 14:55:37 crc kubenswrapper[4806]: healthz check failed Nov 25 14:55:37 crc kubenswrapper[4806]: I1125 14:55:37.043703 4806 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kfst9" podUID="4e9e656c-2e2c-4ed4-b720-8fdb639a029d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 14:55:38 crc kubenswrapper[4806]: I1125 14:55:38.043709 4806 patch_prober.go:28] interesting pod/router-default-5444994796-kfst9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 14:55:38 crc kubenswrapper[4806]: [+]has-synced ok Nov 25 14:55:38 crc kubenswrapper[4806]: [+]process-running ok Nov 25 14:55:38 crc kubenswrapper[4806]: healthz check failed Nov 25 14:55:38 crc kubenswrapper[4806]: I1125 14:55:38.044288 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kfst9" podUID="4e9e656c-2e2c-4ed4-b720-8fdb639a029d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 14:55:38 crc kubenswrapper[4806]: I1125 14:55:38.784610 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-gfbwx_3ad5dac9-54d3-4435-8f38-77e91d1965e0/cluster-samples-operator/0.log" Nov 25 14:55:38 crc kubenswrapper[4806]: I1125 14:55:38.784699 4806 generic.go:334] "Generic (PLEG): container finished" podID="3ad5dac9-54d3-4435-8f38-77e91d1965e0" containerID="27e493caa61c5486984173d36a3d09dca4043ef6cb0822cac08fc0bdc2544f34" exitCode=2 Nov 25 14:55:38 crc kubenswrapper[4806]: I1125 14:55:38.784798 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfbwx" event={"ID":"3ad5dac9-54d3-4435-8f38-77e91d1965e0","Type":"ContainerDied","Data":"27e493caa61c5486984173d36a3d09dca4043ef6cb0822cac08fc0bdc2544f34"} Nov 25 14:55:38 crc kubenswrapper[4806]: I1125 14:55:38.785403 4806 scope.go:117] "RemoveContainer" containerID="27e493caa61c5486984173d36a3d09dca4043ef6cb0822cac08fc0bdc2544f34" Nov 25 14:55:38 crc kubenswrapper[4806]: I1125 14:55:38.880135 4806 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx6dj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Nov 25 14:55:38 crc kubenswrapper[4806]: I1125 14:55:38.880220 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xx6dj" podUID="f9b1a29e-c5b3-45fd-9082-b46293956184" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Nov 25 14:55:38 crc kubenswrapper[4806]: I1125 14:55:38.880253 4806 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx6dj container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Nov 25 14:55:38 crc kubenswrapper[4806]: I1125 14:55:38.880366 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-xx6dj" podUID="f9b1a29e-c5b3-45fd-9082-b46293956184" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Nov 25 14:55:39 crc kubenswrapper[4806]: I1125 
14:55:39.043642 4806 patch_prober.go:28] interesting pod/router-default-5444994796-kfst9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 14:55:39 crc kubenswrapper[4806]: [+]has-synced ok Nov 25 14:55:39 crc kubenswrapper[4806]: [+]process-running ok Nov 25 14:55:39 crc kubenswrapper[4806]: healthz check failed Nov 25 14:55:39 crc kubenswrapper[4806]: I1125 14:55:39.043709 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kfst9" podUID="4e9e656c-2e2c-4ed4-b720-8fdb639a029d" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 14:55:39 crc kubenswrapper[4806]: I1125 14:55:39.283887 4806 patch_prober.go:28] interesting pod/console-f9d7485db-6j244 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Nov 25 14:55:39 crc kubenswrapper[4806]: I1125 14:55:39.284428 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-6j244" podUID="b8400987-b2f7-44fe-b1b3-8689c2465cd3" containerName="console" probeResult="failure" output="Get \"https://10.217.0.23:8443/health\": dial tcp 10.217.0.23:8443: connect: connection refused" Nov 25 14:55:40 crc kubenswrapper[4806]: I1125 14:55:40.044741 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-kfst9" Nov 25 14:55:40 crc kubenswrapper[4806]: I1125 14:55:40.047015 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-kfst9" Nov 25 14:55:45 crc kubenswrapper[4806]: I1125 14:55:45.470829 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 14:55:45 crc kubenswrapper[4806]: I1125 14:55:45.612234 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a0fb925-8b3c-493a-9f05-a35a9f7be868-kube-api-access\") pod \"2a0fb925-8b3c-493a-9f05-a35a9f7be868\" (UID: \"2a0fb925-8b3c-493a-9f05-a35a9f7be868\") " Nov 25 14:55:45 crc kubenswrapper[4806]: I1125 14:55:45.612431 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2a0fb925-8b3c-493a-9f05-a35a9f7be868-kubelet-dir\") pod \"2a0fb925-8b3c-493a-9f05-a35a9f7be868\" (UID: \"2a0fb925-8b3c-493a-9f05-a35a9f7be868\") " Nov 25 14:55:45 crc kubenswrapper[4806]: I1125 14:55:45.612546 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a0fb925-8b3c-493a-9f05-a35a9f7be868-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2a0fb925-8b3c-493a-9f05-a35a9f7be868" (UID: "2a0fb925-8b3c-493a-9f05-a35a9f7be868"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:55:45 crc kubenswrapper[4806]: I1125 14:55:45.612826 4806 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2a0fb925-8b3c-493a-9f05-a35a9f7be868-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 25 14:55:45 crc kubenswrapper[4806]: I1125 14:55:45.620184 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a0fb925-8b3c-493a-9f05-a35a9f7be868-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2a0fb925-8b3c-493a-9f05-a35a9f7be868" (UID: "2a0fb925-8b3c-493a-9f05-a35a9f7be868"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:55:45 crc kubenswrapper[4806]: I1125 14:55:45.714513 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a0fb925-8b3c-493a-9f05-a35a9f7be868-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 14:55:45 crc kubenswrapper[4806]: I1125 14:55:45.826046 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2a0fb925-8b3c-493a-9f05-a35a9f7be868","Type":"ContainerDied","Data":"039057a867d1a4db7e9a58fe81281ee15aa9b894ece519af8767713c0e315612"} Nov 25 14:55:45 crc kubenswrapper[4806]: I1125 14:55:45.826095 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="039057a867d1a4db7e9a58fe81281ee15aa9b894ece519af8767713c0e315612" Nov 25 14:55:45 crc kubenswrapper[4806]: I1125 14:55:45.826141 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 14:55:48 crc kubenswrapper[4806]: I1125 14:55:48.879865 4806 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx6dj container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Nov 25 14:55:48 crc kubenswrapper[4806]: I1125 14:55:48.880464 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-xx6dj" podUID="f9b1a29e-c5b3-45fd-9082-b46293956184" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Nov 25 14:55:48 crc kubenswrapper[4806]: I1125 14:55:48.879865 4806 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx6dj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Nov 25 14:55:48 crc kubenswrapper[4806]: I1125 14:55:48.880544 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-xx6dj" Nov 25 14:55:48 crc kubenswrapper[4806]: I1125 14:55:48.880561 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xx6dj" podUID="f9b1a29e-c5b3-45fd-9082-b46293956184" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Nov 25 14:55:48 crc kubenswrapper[4806]: I1125 14:55:48.881010 4806 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx6dj container/download-server namespace/openshift-console: Readiness 
probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Nov 25 14:55:48 crc kubenswrapper[4806]: I1125 14:55:48.881074 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xx6dj" podUID="f9b1a29e-c5b3-45fd-9082-b46293956184" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Nov 25 14:55:48 crc kubenswrapper[4806]: I1125 14:55:48.881098 4806 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"1317cc9662172dd91ce8ec60bdaf6b67dfd54aeb20377694855c5de89dfa08ba"} pod="openshift-console/downloads-7954f5f757-xx6dj" containerMessage="Container download-server failed liveness probe, will be restarted" Nov 25 14:55:48 crc kubenswrapper[4806]: I1125 14:55:48.881175 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-xx6dj" podUID="f9b1a29e-c5b3-45fd-9082-b46293956184" containerName="download-server" containerID="cri-o://1317cc9662172dd91ce8ec60bdaf6b67dfd54aeb20377694855c5de89dfa08ba" gracePeriod=2 Nov 25 14:55:48 crc kubenswrapper[4806]: I1125 14:55:48.934381 4806 patch_prober.go:28] interesting pod/machine-config-daemon-kclf8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 14:55:48 crc kubenswrapper[4806]: I1125 14:55:48.934446 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 14:55:49 crc kubenswrapper[4806]: I1125 14:55:49.287531 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-6j244" Nov 25 14:55:49 crc kubenswrapper[4806]: I1125 14:55:49.291296 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-6j244" Nov 25 14:55:50 crc kubenswrapper[4806]: I1125 14:55:50.757070 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-576cp" Nov 25 14:55:51 crc kubenswrapper[4806]: I1125 14:55:51.863925 4806 generic.go:334] "Generic (PLEG): container finished" podID="f9b1a29e-c5b3-45fd-9082-b46293956184" containerID="1317cc9662172dd91ce8ec60bdaf6b67dfd54aeb20377694855c5de89dfa08ba" exitCode=0 Nov 25 14:55:51 crc kubenswrapper[4806]: I1125 14:55:51.863973 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-xx6dj" event={"ID":"f9b1a29e-c5b3-45fd-9082-b46293956184","Type":"ContainerDied","Data":"1317cc9662172dd91ce8ec60bdaf6b67dfd54aeb20377694855c5de89dfa08ba"} Nov 25 14:55:54 crc kubenswrapper[4806]: I1125 14:55:54.936543 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:55:58 crc kubenswrapper[4806]: I1125 14:55:58.880577 4806 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx6dj container/download-server namespace/openshift-console: 
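The kuberuntime entries above trace the liveness path end to end: failed probes accumulate toward the probe's failure threshold, after which the container is marked unhealthy, killed with its configured grace period (2s for download-server here), and restarted by the sync loop per restartPolicy. A compact Go sketch of the counting logic; the threshold of 3 and the type names are assumptions, not the pod's actual spec:

    package main

    import "fmt"

    // probeWorker counts consecutive failures against a threshold, the way
    // the liveness decision above is reached before the kill is issued.
    type probeWorker struct {
        failures         int
        failureThreshold int
    }

    func (w *probeWorker) observe(success bool) (killContainer bool) {
        if success {
            w.failures = 0 // any success resets the streak
            return false
        }
        w.failures++
        return w.failures >= w.failureThreshold
    }

    func main() {
        w := &probeWorker{failureThreshold: 3}
        for i := 1; i <= 3; i++ {
            if w.observe(false) {
                fmt.Println("Container download-server failed liveness probe, will be restarted")
                fmt.Println("Killing container with a grace period gracePeriod=2")
            }
        }
    }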
Nov 25 14:55:58 crc kubenswrapper[4806]: I1125 14:55:58.880633 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xx6dj" podUID="f9b1a29e-c5b3-45fd-9082-b46293956184" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused"
Nov 25 14:55:59 crc kubenswrapper[4806]: I1125 14:55:59.449523 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-4s68g"
Nov 25 14:56:08 crc kubenswrapper[4806]: I1125 14:56:08.880148 4806 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx6dj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body=
Nov 25 14:56:08 crc kubenswrapper[4806]: I1125 14:56:08.880658 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xx6dj" podUID="f9b1a29e-c5b3-45fd-9082-b46293956184" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused"
Nov 25 14:56:12 crc kubenswrapper[4806]: E1125 14:56:12.629156 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Nov 25 14:56:12 crc kubenswrapper[4806]: E1125 14:56:12.629971 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-257pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-sxhr5_openshift-marketplace(87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 25 14:56:12 crc kubenswrapper[4806]: E1125 14:56:12.631622 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-sxhr5" podUID="87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27"
Nov 25 14:56:18 crc kubenswrapper[4806]: I1125 14:56:18.880438 4806 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx6dj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body=
Nov 25 14:56:18 crc kubenswrapper[4806]: I1125 14:56:18.881473 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xx6dj" podUID="f9b1a29e-c5b3-45fd-9082-b46293956184" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused"
Nov 25 14:56:18 crc kubenswrapper[4806]: I1125 14:56:18.935118 4806 patch_prober.go:28] interesting pod/machine-config-daemon-kclf8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 14:56:18 crc kubenswrapper[4806]: I1125 14:56:18.935195 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 14:56:18 crc kubenswrapper[4806]: I1125 14:56:18.935252 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kclf8"
Nov 25 14:56:18 crc kubenswrapper[4806]: I1125 14:56:18.936089 4806 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"657c6813fefe5a305c46e51a3be90de3dda23db56498959efe41f94d05e8f54d"} pod="openshift-machine-config-operator/machine-config-daemon-kclf8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 25 14:56:18 crc kubenswrapper[4806]: I1125 14:56:18.936163 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" containerID="cri-o://657c6813fefe5a305c46e51a3be90de3dda23db56498959efe41f94d05e8f54d" gracePeriod=600
Nov 25 14:56:19 crc kubenswrapper[4806]: E1125 14:56:19.300256 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-sxhr5" podUID="87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27"
Nov 25 14:56:19 crc kubenswrapper[4806]: E1125 14:56:19.367700 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
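The pull errors above alternate between ErrImagePull (an attempt actually failed) and ImagePullBackOff (the kubelet is waiting out a back-off before retrying). The delay roughly doubles per consecutive failure up to a cap; the constants in this Go sketch are illustrative, not kubelet's exact policy:

    package main

    import (
        "fmt"
        "time"
    )

    // Sketch of the ErrImagePull -> ImagePullBackOff cycle: each failed
    // pull doubles the back-off delay up to an assumed cap.
    func main() {
        backoff := 10 * time.Second
        const maxBackoff = 5 * time.Minute
        for attempt := 1; attempt <= 6; attempt++ {
            fmt.Printf("attempt %d: ErrImagePull (context canceled), next retry in %v (ImagePullBackOff)\n",
                attempt, backoff)
            backoff *= 2
            if backoff > maxBackoff {
                backoff = maxBackoff
            }
        }
    }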
Nov 25 14:56:19 crc kubenswrapper[4806]: E1125 14:56:19.367880 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wz6vg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-jw8vn_openshift-marketplace(9b92c54b-a219-4ef0-998a-e5a2bac20e0b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 25 14:56:19 crc kubenswrapper[4806]: E1125 14:56:19.369059 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-jw8vn" podUID="9b92c54b-a219-4ef0-998a-e5a2bac20e0b"
Nov 25 14:56:19 crc kubenswrapper[4806]: E1125 14:56:19.371100 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Nov 25 14:56:19 crc kubenswrapper[4806]: E1125 14:56:19.371305 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-thqrn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-g5jl6_openshift-marketplace(a8eb172a-99cc-46c1-9bd2-827dcb3da2c3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 25 14:56:19 crc kubenswrapper[4806]: E1125 14:56:19.372963 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-g5jl6" podUID="a8eb172a-99cc-46c1-9bd2-827dcb3da2c3"
Nov 25 14:56:21 crc kubenswrapper[4806]: I1125 14:56:21.023150 4806 generic.go:334] "Generic (PLEG): container finished" podID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerID="657c6813fefe5a305c46e51a3be90de3dda23db56498959efe41f94d05e8f54d" exitCode=0
Nov 25 14:56:21 crc kubenswrapper[4806]: I1125 14:56:21.023365 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" event={"ID":"39baff20-1e9a-48b1-8872-155c5ad5931d","Type":"ContainerDied","Data":"657c6813fefe5a305c46e51a3be90de3dda23db56498959efe41f94d05e8f54d"}
Nov 25 14:56:28 crc kubenswrapper[4806]: I1125 14:56:28.880273 4806 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx6dj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body=
Nov 25 14:56:28 crc kubenswrapper[4806]: I1125 14:56:28.881027 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xx6dj" podUID="f9b1a29e-c5b3-45fd-9082-b46293956184" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused"
Nov 25 14:56:31 crc kubenswrapper[4806]: E1125 14:56:31.607244 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Nov 25 14:56:31 crc kubenswrapper[4806]: E1125 14:56:31.607660 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tkrxc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-tlvq8_openshift-marketplace(3c38c71a-804c-42db-a65a-70b5fbe67b87): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 25 14:56:31 crc kubenswrapper[4806]: E1125 14:56:31.608894 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-tlvq8" podUID="3c38c71a-804c-42db-a65a-70b5fbe67b87"
Nov 25 14:56:38 crc kubenswrapper[4806]: I1125 14:56:38.879593 4806 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx6dj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body=
Nov 25 14:56:38 crc kubenswrapper[4806]: I1125 14:56:38.880210 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xx6dj" podUID="f9b1a29e-c5b3-45fd-9082-b46293956184" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused"
Nov 25 14:56:46 crc kubenswrapper[4806]: E1125 14:56:46.952710 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Nov 25 14:56:46 crc kubenswrapper[4806]: E1125 14:56:46.953512 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wtfsn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-qb75t_openshift-marketplace(7f706e15-3a27-484b-a558-c04a6897571b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 25 14:56:46 crc kubenswrapper[4806]: E1125 14:56:46.955878 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-qb75t" podUID="7f706e15-3a27-484b-a558-c04a6897571b"
Nov 25 14:56:48 crc kubenswrapper[4806]: I1125 14:56:48.879132 4806 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx6dj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body=
Nov 25 14:56:48 crc kubenswrapper[4806]: I1125 14:56:48.879883 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xx6dj" podUID="f9b1a29e-c5b3-45fd-9082-b46293956184" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused"
Nov 25 14:56:50 crc kubenswrapper[4806]: E1125 14:56:50.454098 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Nov 25 14:56:50 crc kubenswrapper[4806]: E1125 14:56:50.454954 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6f47j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-7jdkl_openshift-marketplace(28fa29ec-8177-41d4-bd11-9398fd0f2aa3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 25 14:56:50 crc kubenswrapper[4806]: E1125 14:56:50.455003 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-qb75t" podUID="7f706e15-3a27-484b-a558-c04a6897571b"
Nov 25 14:56:50 crc kubenswrapper[4806]: E1125 14:56:50.456492 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-7jdkl" podUID="28fa29ec-8177-41d4-bd11-9398fd0f2aa3"
Nov 25 14:56:50 crc kubenswrapper[4806]: E1125 14:56:50.473568 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Nov 25 14:56:50 crc kubenswrapper[4806]: E1125 14:56:50.473794 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-csg5g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-n942l_openshift-marketplace(24692166-ec81-42ad-9887-f07eb242a4bc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Nov 25 14:56:50 crc kubenswrapper[4806]: E1125 14:56:50.474913 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-n942l" podUID="24692166-ec81-42ad-9887-f07eb242a4bc"
Nov 25 14:56:50 crc kubenswrapper[4806]: E1125 14:56:50.506605 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Nov 25 14:56:50 crc kubenswrapper[4806]: E1125 14:56:50.506824 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pkrdr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-kfmmb_openshift-marketplace(6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 14:56:50 crc kubenswrapper[4806]: E1125 14:56:50.508241 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-kfmmb" podUID="6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e" Nov 25 14:56:51 crc kubenswrapper[4806]: I1125 14:56:51.221478 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lsrxh" event={"ID":"49e22ad0-2903-4ed0-94ad-40d713f99c9f","Type":"ContainerStarted","Data":"ade540f93b8a8444de97ddd8a6c1ef95702ec361c2cb8052b66d35079b694f02"} Nov 25 14:56:51 crc kubenswrapper[4806]: I1125 14:56:51.225669 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" event={"ID":"39baff20-1e9a-48b1-8872-155c5ad5931d","Type":"ContainerStarted","Data":"86ffef5b64dafeab3b05f5e4a70ac74bb211e3538d488906b2518389de3474fd"} Nov 25 14:56:51 crc kubenswrapper[4806]: I1125 14:56:51.229583 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-gfbwx_3ad5dac9-54d3-4435-8f38-77e91d1965e0/cluster-samples-operator/0.log" Nov 25 14:56:51 crc kubenswrapper[4806]: I1125 14:56:51.229670 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfbwx" event={"ID":"3ad5dac9-54d3-4435-8f38-77e91d1965e0","Type":"ContainerStarted","Data":"b2faa977b5e63fdce70d3375c334bf247bcf643ca53fc154c13b67601cbeab7a"} Nov 25 14:56:51 crc kubenswrapper[4806]: I1125 14:56:51.233489 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-xx6dj" 
event={"ID":"f9b1a29e-c5b3-45fd-9082-b46293956184","Type":"ContainerStarted","Data":"eab1a29e7553457cbe734d5e93e941980ca4649e2dd8df24365251cf62ed69eb"} Nov 25 14:56:51 crc kubenswrapper[4806]: I1125 14:56:51.233589 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-xx6dj" Nov 25 14:56:51 crc kubenswrapper[4806]: I1125 14:56:51.233959 4806 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx6dj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Nov 25 14:56:51 crc kubenswrapper[4806]: I1125 14:56:51.234034 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xx6dj" podUID="f9b1a29e-c5b3-45fd-9082-b46293956184" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Nov 25 14:56:51 crc kubenswrapper[4806]: E1125 14:56:51.234600 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-7jdkl" podUID="28fa29ec-8177-41d4-bd11-9398fd0f2aa3" Nov 25 14:56:51 crc kubenswrapper[4806]: E1125 14:56:51.237064 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-n942l" podUID="24692166-ec81-42ad-9887-f07eb242a4bc" Nov 25 14:56:51 crc kubenswrapper[4806]: I1125 14:56:51.242528 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-lsrxh" podStartSLOduration=223.242511475 podStartE2EDuration="3m43.242511475s" podCreationTimestamp="2025-11-25 14:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:56:51.239998582 +0000 UTC m=+243.892140993" watchObservedRunningTime="2025-11-25 14:56:51.242511475 +0000 UTC m=+243.894653886" Nov 25 14:56:52 crc kubenswrapper[4806]: I1125 14:56:52.239120 4806 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx6dj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Nov 25 14:56:52 crc kubenswrapper[4806]: I1125 14:56:52.239194 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xx6dj" podUID="f9b1a29e-c5b3-45fd-9082-b46293956184" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Nov 25 14:56:53 crc kubenswrapper[4806]: I1125 14:56:53.246871 4806 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx6dj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Nov 25 14:56:53 crc kubenswrapper[4806]: I1125 14:56:53.247517 4806 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-xx6dj" podUID="f9b1a29e-c5b3-45fd-9082-b46293956184" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Nov 25 14:56:57 crc kubenswrapper[4806]: I1125 14:56:57.285011 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sxhr5" event={"ID":"87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27","Type":"ContainerStarted","Data":"c0899abd01e677663d4e55041692005926e8497d186330f2c5bb99bd15fe56ab"} Nov 25 14:56:57 crc kubenswrapper[4806]: I1125 14:56:57.287155 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jw8vn" event={"ID":"9b92c54b-a219-4ef0-998a-e5a2bac20e0b","Type":"ContainerStarted","Data":"5ce22f29756c6167c5e31e3d6f749826f0dd6ba11fe7b599468ce3e69b841ce9"} Nov 25 14:56:57 crc kubenswrapper[4806]: I1125 14:56:57.289557 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g5jl6" event={"ID":"a8eb172a-99cc-46c1-9bd2-827dcb3da2c3","Type":"ContainerStarted","Data":"06dd4201a8e1bc98353ab0c2387f7ba05ddaf4a9f1901671c469624309e1fe0f"} Nov 25 14:56:57 crc kubenswrapper[4806]: I1125 14:56:57.292205 4806 generic.go:334] "Generic (PLEG): container finished" podID="3c38c71a-804c-42db-a65a-70b5fbe67b87" containerID="9f272ec869776f39b80fc76441435738b678d2a3f150402a4712f75e3017d91c" exitCode=0 Nov 25 14:56:57 crc kubenswrapper[4806]: I1125 14:56:57.292326 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlvq8" event={"ID":"3c38c71a-804c-42db-a65a-70b5fbe67b87","Type":"ContainerDied","Data":"9f272ec869776f39b80fc76441435738b678d2a3f150402a4712f75e3017d91c"} Nov 25 14:56:58 crc kubenswrapper[4806]: I1125 14:56:58.309948 4806 generic.go:334] "Generic (PLEG): container finished" podID="87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27" containerID="c0899abd01e677663d4e55041692005926e8497d186330f2c5bb99bd15fe56ab" exitCode=0 Nov 25 14:56:58 crc kubenswrapper[4806]: I1125 14:56:58.310110 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sxhr5" event={"ID":"87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27","Type":"ContainerDied","Data":"c0899abd01e677663d4e55041692005926e8497d186330f2c5bb99bd15fe56ab"} Nov 25 14:56:58 crc kubenswrapper[4806]: I1125 14:56:58.313308 4806 generic.go:334] "Generic (PLEG): container finished" podID="9b92c54b-a219-4ef0-998a-e5a2bac20e0b" containerID="5ce22f29756c6167c5e31e3d6f749826f0dd6ba11fe7b599468ce3e69b841ce9" exitCode=0 Nov 25 14:56:58 crc kubenswrapper[4806]: I1125 14:56:58.313644 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jw8vn" event={"ID":"9b92c54b-a219-4ef0-998a-e5a2bac20e0b","Type":"ContainerDied","Data":"5ce22f29756c6167c5e31e3d6f749826f0dd6ba11fe7b599468ce3e69b841ce9"} Nov 25 14:56:58 crc kubenswrapper[4806]: I1125 14:56:58.320704 4806 generic.go:334] "Generic (PLEG): container finished" podID="a8eb172a-99cc-46c1-9bd2-827dcb3da2c3" containerID="06dd4201a8e1bc98353ab0c2387f7ba05ddaf4a9f1901671c469624309e1fe0f" exitCode=0 Nov 25 14:56:58 crc kubenswrapper[4806]: I1125 14:56:58.320774 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g5jl6" event={"ID":"a8eb172a-99cc-46c1-9bd2-827dcb3da2c3","Type":"ContainerDied","Data":"06dd4201a8e1bc98353ab0c2387f7ba05ddaf4a9f1901671c469624309e1fe0f"} Nov 25 14:56:58 crc 
kubenswrapper[4806]: I1125 14:56:58.880360 4806 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx6dj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Nov 25 14:56:58 crc kubenswrapper[4806]: I1125 14:56:58.880454 4806 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx6dj container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Nov 25 14:56:58 crc kubenswrapper[4806]: I1125 14:56:58.880980 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-xx6dj" podUID="f9b1a29e-c5b3-45fd-9082-b46293956184" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Nov 25 14:56:58 crc kubenswrapper[4806]: I1125 14:56:58.880930 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xx6dj" podUID="f9b1a29e-c5b3-45fd-9082-b46293956184" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Nov 25 14:57:08 crc kubenswrapper[4806]: I1125 14:57:08.387381 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlvq8" event={"ID":"3c38c71a-804c-42db-a65a-70b5fbe67b87","Type":"ContainerStarted","Data":"e17289ec3461f491b038b7608af61a1754031bb18e84fed7af79ffbe03572737"} Nov 25 14:57:08 crc kubenswrapper[4806]: I1125 14:57:08.879196 4806 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx6dj container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Nov 25 14:57:08 crc kubenswrapper[4806]: I1125 14:57:08.879268 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-xx6dj" podUID="f9b1a29e-c5b3-45fd-9082-b46293956184" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Nov 25 14:57:08 crc kubenswrapper[4806]: I1125 14:57:08.879286 4806 patch_prober.go:28] interesting pod/downloads-7954f5f757-xx6dj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" start-of-body= Nov 25 14:57:08 crc kubenswrapper[4806]: I1125 14:57:08.879468 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xx6dj" podUID="f9b1a29e-c5b3-45fd-9082-b46293956184" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.10:8080/\": dial tcp 10.217.0.10:8080: connect: connection refused" Nov 25 14:57:09 crc kubenswrapper[4806]: I1125 14:57:09.428026 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tlvq8" podStartSLOduration=5.258636406 podStartE2EDuration="1m43.428000412s" podCreationTimestamp="2025-11-25 14:55:26 +0000 UTC" firstStartedPulling="2025-11-25 14:55:28.452134533 +0000 UTC m=+161.104276944" lastFinishedPulling="2025-11-25 14:57:06.621498539 +0000 UTC 
m=+259.273640950" observedRunningTime="2025-11-25 14:57:09.427403714 +0000 UTC m=+262.079546175" watchObservedRunningTime="2025-11-25 14:57:09.428000412 +0000 UTC m=+262.080142833" Nov 25 14:57:17 crc kubenswrapper[4806]: I1125 14:57:17.274375 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tlvq8" Nov 25 14:57:17 crc kubenswrapper[4806]: I1125 14:57:17.275398 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tlvq8" Nov 25 14:57:18 crc kubenswrapper[4806]: I1125 14:57:18.230142 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tlvq8" Nov 25 14:57:18 crc kubenswrapper[4806]: I1125 14:57:18.281556 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tlvq8" Nov 25 14:57:18 crc kubenswrapper[4806]: I1125 14:57:18.485463 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tlvq8"] Nov 25 14:57:18 crc kubenswrapper[4806]: I1125 14:57:18.894259 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-xx6dj" Nov 25 14:57:19 crc kubenswrapper[4806]: I1125 14:57:19.455522 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tlvq8" podUID="3c38c71a-804c-42db-a65a-70b5fbe67b87" containerName="registry-server" containerID="cri-o://e17289ec3461f491b038b7608af61a1754031bb18e84fed7af79ffbe03572737" gracePeriod=2 Nov 25 14:57:21 crc kubenswrapper[4806]: I1125 14:57:21.466338 4806 generic.go:334] "Generic (PLEG): container finished" podID="3c38c71a-804c-42db-a65a-70b5fbe67b87" containerID="e17289ec3461f491b038b7608af61a1754031bb18e84fed7af79ffbe03572737" exitCode=0 Nov 25 14:57:21 crc kubenswrapper[4806]: I1125 14:57:21.466391 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlvq8" event={"ID":"3c38c71a-804c-42db-a65a-70b5fbe67b87","Type":"ContainerDied","Data":"e17289ec3461f491b038b7608af61a1754031bb18e84fed7af79ffbe03572737"} Nov 25 14:57:24 crc kubenswrapper[4806]: I1125 14:57:24.124278 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tlvq8" Nov 25 14:57:24 crc kubenswrapper[4806]: I1125 14:57:24.194598 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c38c71a-804c-42db-a65a-70b5fbe67b87-catalog-content\") pod \"3c38c71a-804c-42db-a65a-70b5fbe67b87\" (UID: \"3c38c71a-804c-42db-a65a-70b5fbe67b87\") " Nov 25 14:57:24 crc kubenswrapper[4806]: I1125 14:57:24.194654 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c38c71a-804c-42db-a65a-70b5fbe67b87-utilities\") pod \"3c38c71a-804c-42db-a65a-70b5fbe67b87\" (UID: \"3c38c71a-804c-42db-a65a-70b5fbe67b87\") " Nov 25 14:57:24 crc kubenswrapper[4806]: I1125 14:57:24.194680 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkrxc\" (UniqueName: \"kubernetes.io/projected/3c38c71a-804c-42db-a65a-70b5fbe67b87-kube-api-access-tkrxc\") pod \"3c38c71a-804c-42db-a65a-70b5fbe67b87\" (UID: \"3c38c71a-804c-42db-a65a-70b5fbe67b87\") " Nov 25 14:57:24 crc kubenswrapper[4806]: I1125 14:57:24.195950 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c38c71a-804c-42db-a65a-70b5fbe67b87-utilities" (OuterVolumeSpecName: "utilities") pod "3c38c71a-804c-42db-a65a-70b5fbe67b87" (UID: "3c38c71a-804c-42db-a65a-70b5fbe67b87"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:57:24 crc kubenswrapper[4806]: I1125 14:57:24.202865 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c38c71a-804c-42db-a65a-70b5fbe67b87-kube-api-access-tkrxc" (OuterVolumeSpecName: "kube-api-access-tkrxc") pod "3c38c71a-804c-42db-a65a-70b5fbe67b87" (UID: "3c38c71a-804c-42db-a65a-70b5fbe67b87"). InnerVolumeSpecName "kube-api-access-tkrxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:57:24 crc kubenswrapper[4806]: I1125 14:57:24.263568 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c38c71a-804c-42db-a65a-70b5fbe67b87-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3c38c71a-804c-42db-a65a-70b5fbe67b87" (UID: "3c38c71a-804c-42db-a65a-70b5fbe67b87"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:57:24 crc kubenswrapper[4806]: I1125 14:57:24.296983 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c38c71a-804c-42db-a65a-70b5fbe67b87-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 14:57:24 crc kubenswrapper[4806]: I1125 14:57:24.297028 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c38c71a-804c-42db-a65a-70b5fbe67b87-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 14:57:24 crc kubenswrapper[4806]: I1125 14:57:24.297041 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkrxc\" (UniqueName: \"kubernetes.io/projected/3c38c71a-804c-42db-a65a-70b5fbe67b87-kube-api-access-tkrxc\") on node \"crc\" DevicePath \"\"" Nov 25 14:57:24 crc kubenswrapper[4806]: I1125 14:57:24.496538 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tlvq8" event={"ID":"3c38c71a-804c-42db-a65a-70b5fbe67b87","Type":"ContainerDied","Data":"74ca13dd7d6443ccc68fb6e18590d694eca512cd5c4c86eea2ce3e0dc988f3ea"} Nov 25 14:57:24 crc kubenswrapper[4806]: I1125 14:57:24.496617 4806 scope.go:117] "RemoveContainer" containerID="e17289ec3461f491b038b7608af61a1754031bb18e84fed7af79ffbe03572737" Nov 25 14:57:24 crc kubenswrapper[4806]: I1125 14:57:24.496800 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tlvq8" Nov 25 14:57:24 crc kubenswrapper[4806]: I1125 14:57:24.519723 4806 scope.go:117] "RemoveContainer" containerID="9f272ec869776f39b80fc76441435738b678d2a3f150402a4712f75e3017d91c" Nov 25 14:57:24 crc kubenswrapper[4806]: I1125 14:57:24.549845 4806 scope.go:117] "RemoveContainer" containerID="392aff047572a2ca4e6c5918d1c574e433adfeb2d3504f8312731dc9433c3276" Nov 25 14:57:24 crc kubenswrapper[4806]: I1125 14:57:24.603407 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tlvq8"] Nov 25 14:57:24 crc kubenswrapper[4806]: I1125 14:57:24.619926 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tlvq8"] Nov 25 14:57:25 crc kubenswrapper[4806]: I1125 14:57:25.505434 4806 generic.go:334] "Generic (PLEG): container finished" podID="28fa29ec-8177-41d4-bd11-9398fd0f2aa3" containerID="4f67b2960e182f278fd11e4f99ee5de4c51c5c0609797ec90ebe5790aef77cb8" exitCode=0 Nov 25 14:57:25 crc kubenswrapper[4806]: I1125 14:57:25.505503 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7jdkl" event={"ID":"28fa29ec-8177-41d4-bd11-9398fd0f2aa3","Type":"ContainerDied","Data":"4f67b2960e182f278fd11e4f99ee5de4c51c5c0609797ec90ebe5790aef77cb8"} Nov 25 14:57:25 crc kubenswrapper[4806]: I1125 14:57:25.508218 4806 generic.go:334] "Generic (PLEG): container finished" podID="7f706e15-3a27-484b-a558-c04a6897571b" containerID="2c91ee9ade055cac14a22e14971e14bbd8fa2ec864633f51c670b89a1b9a9220" exitCode=0 Nov 25 14:57:25 crc kubenswrapper[4806]: I1125 14:57:25.508289 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qb75t" event={"ID":"7f706e15-3a27-484b-a558-c04a6897571b","Type":"ContainerDied","Data":"2c91ee9ade055cac14a22e14971e14bbd8fa2ec864633f51c670b89a1b9a9220"} Nov 25 14:57:25 crc kubenswrapper[4806]: I1125 14:57:25.519879 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-sxhr5" event={"ID":"87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27","Type":"ContainerStarted","Data":"77d036085e283198c939d4e1d025bccbd7b0c12c48b922c84f168d7f2d61e1de"} Nov 25 14:57:25 crc kubenswrapper[4806]: I1125 14:57:25.523041 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jw8vn" event={"ID":"9b92c54b-a219-4ef0-998a-e5a2bac20e0b","Type":"ContainerStarted","Data":"00199322fb159765d21011e638bed2f48a9706745692f7ab3d831ea64d24a3d9"} Nov 25 14:57:25 crc kubenswrapper[4806]: I1125 14:57:25.525209 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kfmmb" event={"ID":"6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e","Type":"ContainerStarted","Data":"5063de0d7c9b4ccf73a3f112c3a1b0959ef13a629de31a84a9d8349544d9f90e"} Nov 25 14:57:25 crc kubenswrapper[4806]: I1125 14:57:25.531693 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g5jl6" event={"ID":"a8eb172a-99cc-46c1-9bd2-827dcb3da2c3","Type":"ContainerStarted","Data":"c12cf7034551cf8382909516ce45a3b8e604dbcf8d1c539fe10d06ba0439ab29"} Nov 25 14:57:25 crc kubenswrapper[4806]: I1125 14:57:25.535792 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n942l" event={"ID":"24692166-ec81-42ad-9887-f07eb242a4bc","Type":"ContainerStarted","Data":"ea6d3ea9d4671ec214fcbf7ddb77048dea72dbd5b7159fbf6b183a75a51af51e"} Nov 25 14:57:25 crc kubenswrapper[4806]: I1125 14:57:25.601198 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-g5jl6" podStartSLOduration=5.18200316 podStartE2EDuration="1m59.601168792s" podCreationTimestamp="2025-11-25 14:55:26 +0000 UTC" firstStartedPulling="2025-11-25 14:55:29.488426355 +0000 UTC m=+162.140568766" lastFinishedPulling="2025-11-25 14:57:23.907591987 +0000 UTC m=+276.559734398" observedRunningTime="2025-11-25 14:57:25.599680867 +0000 UTC m=+278.251823288" watchObservedRunningTime="2025-11-25 14:57:25.601168792 +0000 UTC m=+278.253311203" Nov 25 14:57:25 crc kubenswrapper[4806]: I1125 14:57:25.623478 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jw8vn" podStartSLOduration=4.044880967 podStartE2EDuration="1m58.623450467s" podCreationTimestamp="2025-11-25 14:55:27 +0000 UTC" firstStartedPulling="2025-11-25 14:55:29.488467706 +0000 UTC m=+162.140610117" lastFinishedPulling="2025-11-25 14:57:24.067037216 +0000 UTC m=+276.719179617" observedRunningTime="2025-11-25 14:57:25.622273751 +0000 UTC m=+278.274416162" watchObservedRunningTime="2025-11-25 14:57:25.623450467 +0000 UTC m=+278.275592898" Nov 25 14:57:25 crc kubenswrapper[4806]: I1125 14:57:25.646915 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sxhr5" podStartSLOduration=4.099446243 podStartE2EDuration="1m59.646890817s" podCreationTimestamp="2025-11-25 14:55:26 +0000 UTC" firstStartedPulling="2025-11-25 14:55:28.467541055 +0000 UTC m=+161.119683466" lastFinishedPulling="2025-11-25 14:57:24.014985629 +0000 UTC m=+276.667128040" observedRunningTime="2025-11-25 14:57:25.641228435 +0000 UTC m=+278.293370856" watchObservedRunningTime="2025-11-25 14:57:25.646890817 +0000 UTC m=+278.299033228" Nov 25 14:57:26 crc kubenswrapper[4806]: I1125 14:57:26.098666 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3c38c71a-804c-42db-a65a-70b5fbe67b87" path="/var/lib/kubelet/pods/3c38c71a-804c-42db-a65a-70b5fbe67b87/volumes" Nov 25 14:57:26 crc kubenswrapper[4806]: I1125 14:57:26.545526 4806 generic.go:334] "Generic (PLEG): container finished" podID="6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e" containerID="5063de0d7c9b4ccf73a3f112c3a1b0959ef13a629de31a84a9d8349544d9f90e" exitCode=0 Nov 25 14:57:26 crc kubenswrapper[4806]: I1125 14:57:26.545621 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kfmmb" event={"ID":"6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e","Type":"ContainerDied","Data":"5063de0d7c9b4ccf73a3f112c3a1b0959ef13a629de31a84a9d8349544d9f90e"} Nov 25 14:57:26 crc kubenswrapper[4806]: I1125 14:57:26.548694 4806 generic.go:334] "Generic (PLEG): container finished" podID="24692166-ec81-42ad-9887-f07eb242a4bc" containerID="ea6d3ea9d4671ec214fcbf7ddb77048dea72dbd5b7159fbf6b183a75a51af51e" exitCode=0 Nov 25 14:57:26 crc kubenswrapper[4806]: I1125 14:57:26.548775 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n942l" event={"ID":"24692166-ec81-42ad-9887-f07eb242a4bc","Type":"ContainerDied","Data":"ea6d3ea9d4671ec214fcbf7ddb77048dea72dbd5b7159fbf6b183a75a51af51e"} Nov 25 14:57:26 crc kubenswrapper[4806]: I1125 14:57:26.866337 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sxhr5" Nov 25 14:57:26 crc kubenswrapper[4806]: I1125 14:57:26.866416 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sxhr5" Nov 25 14:57:26 crc kubenswrapper[4806]: I1125 14:57:26.915934 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sxhr5" Nov 25 14:57:28 crc kubenswrapper[4806]: I1125 14:57:28.068125 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-g5jl6" Nov 25 14:57:28 crc kubenswrapper[4806]: I1125 14:57:28.068635 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-g5jl6" Nov 25 14:57:28 crc kubenswrapper[4806]: I1125 14:57:28.115457 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-g5jl6" Nov 25 14:57:28 crc kubenswrapper[4806]: I1125 14:57:28.273517 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jw8vn" Nov 25 14:57:28 crc kubenswrapper[4806]: I1125 14:57:28.273573 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jw8vn" Nov 25 14:57:28 crc kubenswrapper[4806]: I1125 14:57:28.320903 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jw8vn" Nov 25 14:57:42 crc kubenswrapper[4806]: I1125 14:57:36.920003 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sxhr5" Nov 25 14:57:42 crc kubenswrapper[4806]: I1125 14:57:38.109055 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-g5jl6" Nov 25 14:57:42 crc kubenswrapper[4806]: I1125 14:57:38.314868 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jw8vn" Nov 
25 14:57:42 crc kubenswrapper[4806]: I1125 14:57:38.362618 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jw8vn"] Nov 25 14:57:42 crc kubenswrapper[4806]: I1125 14:57:38.630435 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jw8vn" podUID="9b92c54b-a219-4ef0-998a-e5a2bac20e0b" containerName="registry-server" containerID="cri-o://00199322fb159765d21011e638bed2f48a9706745692f7ab3d831ea64d24a3d9" gracePeriod=2 Nov 25 14:57:42 crc kubenswrapper[4806]: I1125 14:57:42.659921 4806 generic.go:334] "Generic (PLEG): container finished" podID="9b92c54b-a219-4ef0-998a-e5a2bac20e0b" containerID="00199322fb159765d21011e638bed2f48a9706745692f7ab3d831ea64d24a3d9" exitCode=0 Nov 25 14:57:42 crc kubenswrapper[4806]: I1125 14:57:42.660010 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jw8vn" event={"ID":"9b92c54b-a219-4ef0-998a-e5a2bac20e0b","Type":"ContainerDied","Data":"00199322fb159765d21011e638bed2f48a9706745692f7ab3d831ea64d24a3d9"} Nov 25 14:57:46 crc kubenswrapper[4806]: I1125 14:57:46.007635 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jw8vn" Nov 25 14:57:46 crc kubenswrapper[4806]: I1125 14:57:46.074957 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wz6vg\" (UniqueName: \"kubernetes.io/projected/9b92c54b-a219-4ef0-998a-e5a2bac20e0b-kube-api-access-wz6vg\") pod \"9b92c54b-a219-4ef0-998a-e5a2bac20e0b\" (UID: \"9b92c54b-a219-4ef0-998a-e5a2bac20e0b\") " Nov 25 14:57:46 crc kubenswrapper[4806]: I1125 14:57:46.075470 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b92c54b-a219-4ef0-998a-e5a2bac20e0b-catalog-content\") pod \"9b92c54b-a219-4ef0-998a-e5a2bac20e0b\" (UID: \"9b92c54b-a219-4ef0-998a-e5a2bac20e0b\") " Nov 25 14:57:46 crc kubenswrapper[4806]: I1125 14:57:46.075539 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b92c54b-a219-4ef0-998a-e5a2bac20e0b-utilities\") pod \"9b92c54b-a219-4ef0-998a-e5a2bac20e0b\" (UID: \"9b92c54b-a219-4ef0-998a-e5a2bac20e0b\") " Nov 25 14:57:46 crc kubenswrapper[4806]: I1125 14:57:46.076400 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b92c54b-a219-4ef0-998a-e5a2bac20e0b-utilities" (OuterVolumeSpecName: "utilities") pod "9b92c54b-a219-4ef0-998a-e5a2bac20e0b" (UID: "9b92c54b-a219-4ef0-998a-e5a2bac20e0b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:57:46 crc kubenswrapper[4806]: I1125 14:57:46.083570 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b92c54b-a219-4ef0-998a-e5a2bac20e0b-kube-api-access-wz6vg" (OuterVolumeSpecName: "kube-api-access-wz6vg") pod "9b92c54b-a219-4ef0-998a-e5a2bac20e0b" (UID: "9b92c54b-a219-4ef0-998a-e5a2bac20e0b"). InnerVolumeSpecName "kube-api-access-wz6vg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:57:46 crc kubenswrapper[4806]: I1125 14:57:46.133600 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b92c54b-a219-4ef0-998a-e5a2bac20e0b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9b92c54b-a219-4ef0-998a-e5a2bac20e0b" (UID: "9b92c54b-a219-4ef0-998a-e5a2bac20e0b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:57:46 crc kubenswrapper[4806]: I1125 14:57:46.177728 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b92c54b-a219-4ef0-998a-e5a2bac20e0b-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 14:57:46 crc kubenswrapper[4806]: I1125 14:57:46.177800 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wz6vg\" (UniqueName: \"kubernetes.io/projected/9b92c54b-a219-4ef0-998a-e5a2bac20e0b-kube-api-access-wz6vg\") on node \"crc\" DevicePath \"\"" Nov 25 14:57:46 crc kubenswrapper[4806]: I1125 14:57:46.177817 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b92c54b-a219-4ef0-998a-e5a2bac20e0b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 14:57:46 crc kubenswrapper[4806]: I1125 14:57:46.686272 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jw8vn" event={"ID":"9b92c54b-a219-4ef0-998a-e5a2bac20e0b","Type":"ContainerDied","Data":"7cf83a056d87601e5871af69595576a503778e812dae73829db0db246326b07a"} Nov 25 14:57:46 crc kubenswrapper[4806]: I1125 14:57:46.686357 4806 scope.go:117] "RemoveContainer" containerID="00199322fb159765d21011e638bed2f48a9706745692f7ab3d831ea64d24a3d9" Nov 25 14:57:46 crc kubenswrapper[4806]: I1125 14:57:46.686384 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jw8vn" Nov 25 14:57:46 crc kubenswrapper[4806]: I1125 14:57:46.711653 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jw8vn"] Nov 25 14:57:46 crc kubenswrapper[4806]: I1125 14:57:46.718921 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jw8vn"] Nov 25 14:57:48 crc kubenswrapper[4806]: I1125 14:57:48.097399 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b92c54b-a219-4ef0-998a-e5a2bac20e0b" path="/var/lib/kubelet/pods/9b92c54b-a219-4ef0-998a-e5a2bac20e0b/volumes" Nov 25 14:57:51 crc kubenswrapper[4806]: I1125 14:57:51.210932 4806 scope.go:117] "RemoveContainer" containerID="5ce22f29756c6167c5e31e3d6f749826f0dd6ba11fe7b599468ce3e69b841ce9" Nov 25 14:57:59 crc kubenswrapper[4806]: I1125 14:57:59.110058 4806 scope.go:117] "RemoveContainer" containerID="ed09a9670684f7d57a0ac5a70399ce2ef87dcd52abe0de33a33a43b5dd2b9c0f" Nov 25 14:58:02 crc kubenswrapper[4806]: I1125 14:58:02.814138 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n942l" event={"ID":"24692166-ec81-42ad-9887-f07eb242a4bc","Type":"ContainerStarted","Data":"d85839b9e7c34911bee5d36185bcd325f885bc87a631972a31d0975077550ff0"} Nov 25 14:58:03 crc kubenswrapper[4806]: I1125 14:58:03.826297 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qb75t" event={"ID":"7f706e15-3a27-484b-a558-c04a6897571b","Type":"ContainerStarted","Data":"686e1e66eb1da064bc9e6b705295de39d2bb06c1cda31b9001303c2673b275e7"} Nov 25 14:58:03 crc kubenswrapper[4806]: I1125 14:58:03.832705 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kfmmb" event={"ID":"6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e","Type":"ContainerStarted","Data":"9bfe52571bb7dcef99eb4d1d1673024d91dd8c898b5e8704517009fb6af13339"} Nov 25 14:58:03 crc kubenswrapper[4806]: I1125 14:58:03.837475 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7jdkl" event={"ID":"28fa29ec-8177-41d4-bd11-9398fd0f2aa3","Type":"ContainerStarted","Data":"c50b07e3889fbff01e0ba75fa738fdb92e06c935083e405c0fcd313d5bcaf846"} Nov 25 14:58:03 crc kubenswrapper[4806]: I1125 14:58:03.857418 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qb75t" podStartSLOduration=16.278672046 podStartE2EDuration="2m34.85739524s" podCreationTimestamp="2025-11-25 14:55:29 +0000 UTC" firstStartedPulling="2025-11-25 14:55:31.710620069 +0000 UTC m=+164.362762480" lastFinishedPulling="2025-11-25 14:57:50.289343233 +0000 UTC m=+302.941485674" observedRunningTime="2025-11-25 14:58:03.85278288 +0000 UTC m=+316.504925311" watchObservedRunningTime="2025-11-25 14:58:03.85739524 +0000 UTC m=+316.509537651" Nov 25 14:58:03 crc kubenswrapper[4806]: I1125 14:58:03.877473 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-n942l" podStartSLOduration=8.490215199 podStartE2EDuration="2m34.877444197s" podCreationTimestamp="2025-11-25 14:55:29 +0000 UTC" firstStartedPulling="2025-11-25 14:55:32.723279418 +0000 UTC m=+165.375421829" lastFinishedPulling="2025-11-25 14:57:59.110508416 +0000 UTC m=+311.762650827" observedRunningTime="2025-11-25 14:58:03.874991523 +0000 UTC m=+316.527133934" watchObservedRunningTime="2025-11-25 14:58:03.877444197 
+0000 UTC m=+316.529586608" Nov 25 14:58:03 crc kubenswrapper[4806]: I1125 14:58:03.895402 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kfmmb" podStartSLOduration=8.520877094 podStartE2EDuration="2m33.89538513s" podCreationTimestamp="2025-11-25 14:55:30 +0000 UTC" firstStartedPulling="2025-11-25 14:55:33.736039561 +0000 UTC m=+166.388181992" lastFinishedPulling="2025-11-25 14:57:59.110547587 +0000 UTC m=+311.762690028" observedRunningTime="2025-11-25 14:58:03.895066141 +0000 UTC m=+316.547208562" watchObservedRunningTime="2025-11-25 14:58:03.89538513 +0000 UTC m=+316.547527541" Nov 25 14:58:03 crc kubenswrapper[4806]: I1125 14:58:03.920038 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7jdkl" podStartSLOduration=8.45437912 podStartE2EDuration="2m35.920009776s" podCreationTimestamp="2025-11-25 14:55:28 +0000 UTC" firstStartedPulling="2025-11-25 14:55:31.645401886 +0000 UTC m=+164.297544297" lastFinishedPulling="2025-11-25 14:57:59.111032542 +0000 UTC m=+311.763174953" observedRunningTime="2025-11-25 14:58:03.916035956 +0000 UTC m=+316.568178387" watchObservedRunningTime="2025-11-25 14:58:03.920009776 +0000 UTC m=+316.572152197" Nov 25 14:58:09 crc kubenswrapper[4806]: I1125 14:58:09.436429 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7jdkl" Nov 25 14:58:09 crc kubenswrapper[4806]: I1125 14:58:09.436973 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7jdkl" Nov 25 14:58:09 crc kubenswrapper[4806]: I1125 14:58:09.483878 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7jdkl" Nov 25 14:58:09 crc kubenswrapper[4806]: I1125 14:58:09.818918 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qb75t" Nov 25 14:58:09 crc kubenswrapper[4806]: I1125 14:58:09.819015 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qb75t" Nov 25 14:58:09 crc kubenswrapper[4806]: I1125 14:58:09.859930 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qb75t" Nov 25 14:58:09 crc kubenswrapper[4806]: I1125 14:58:09.920701 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qb75t" Nov 25 14:58:09 crc kubenswrapper[4806]: I1125 14:58:09.932603 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7jdkl" Nov 25 14:58:10 crc kubenswrapper[4806]: I1125 14:58:10.271200 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-n942l" Nov 25 14:58:10 crc kubenswrapper[4806]: I1125 14:58:10.271714 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-n942l" Nov 25 14:58:10 crc kubenswrapper[4806]: I1125 14:58:10.313464 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-n942l" Nov 25 14:58:10 crc kubenswrapper[4806]: I1125 14:58:10.520746 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kfmmb" Nov 25 
14:58:10 crc kubenswrapper[4806]: I1125 14:58:10.520840 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kfmmb" Nov 25 14:58:10 crc kubenswrapper[4806]: I1125 14:58:10.564500 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kfmmb" Nov 25 14:58:10 crc kubenswrapper[4806]: I1125 14:58:10.918331 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qb75t"] Nov 25 14:58:10 crc kubenswrapper[4806]: I1125 14:58:10.926344 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kfmmb" Nov 25 14:58:10 crc kubenswrapper[4806]: I1125 14:58:10.932285 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-n942l" Nov 25 14:58:11 crc kubenswrapper[4806]: I1125 14:58:11.892184 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qb75t" podUID="7f706e15-3a27-484b-a558-c04a6897571b" containerName="registry-server" containerID="cri-o://686e1e66eb1da064bc9e6b705295de39d2bb06c1cda31b9001303c2673b275e7" gracePeriod=2 Nov 25 14:58:12 crc kubenswrapper[4806]: I1125 14:58:12.719628 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kfmmb"] Nov 25 14:58:12 crc kubenswrapper[4806]: I1125 14:58:12.859939 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qb75t" Nov 25 14:58:12 crc kubenswrapper[4806]: I1125 14:58:12.898824 4806 generic.go:334] "Generic (PLEG): container finished" podID="7f706e15-3a27-484b-a558-c04a6897571b" containerID="686e1e66eb1da064bc9e6b705295de39d2bb06c1cda31b9001303c2673b275e7" exitCode=0 Nov 25 14:58:12 crc kubenswrapper[4806]: I1125 14:58:12.898927 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qb75t" event={"ID":"7f706e15-3a27-484b-a558-c04a6897571b","Type":"ContainerDied","Data":"686e1e66eb1da064bc9e6b705295de39d2bb06c1cda31b9001303c2673b275e7"} Nov 25 14:58:12 crc kubenswrapper[4806]: I1125 14:58:12.898955 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qb75t" Nov 25 14:58:12 crc kubenswrapper[4806]: I1125 14:58:12.898996 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qb75t" event={"ID":"7f706e15-3a27-484b-a558-c04a6897571b","Type":"ContainerDied","Data":"49e1c0bd94494b8fcedfc3709dc29a452ed4d30045a25ed3709d91ce72a6490e"} Nov 25 14:58:12 crc kubenswrapper[4806]: I1125 14:58:12.899030 4806 scope.go:117] "RemoveContainer" containerID="686e1e66eb1da064bc9e6b705295de39d2bb06c1cda31b9001303c2673b275e7" Nov 25 14:58:12 crc kubenswrapper[4806]: I1125 14:58:12.900013 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kfmmb" podUID="6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e" containerName="registry-server" containerID="cri-o://9bfe52571bb7dcef99eb4d1d1673024d91dd8c898b5e8704517009fb6af13339" gracePeriod=2 Nov 25 14:58:12 crc kubenswrapper[4806]: I1125 14:58:12.926749 4806 scope.go:117] "RemoveContainer" containerID="2c91ee9ade055cac14a22e14971e14bbd8fa2ec864633f51c670b89a1b9a9220" Nov 25 14:58:12 crc kubenswrapper[4806]: I1125 14:58:12.944452 4806 scope.go:117] "RemoveContainer" containerID="ddedbb4922eef5deee7bcd91c4f05063dee053c84379c1a0c6349a168ef658ae" Nov 25 14:58:12 crc kubenswrapper[4806]: I1125 14:58:12.965052 4806 scope.go:117] "RemoveContainer" containerID="686e1e66eb1da064bc9e6b705295de39d2bb06c1cda31b9001303c2673b275e7" Nov 25 14:58:12 crc kubenswrapper[4806]: E1125 14:58:12.965699 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"686e1e66eb1da064bc9e6b705295de39d2bb06c1cda31b9001303c2673b275e7\": container with ID starting with 686e1e66eb1da064bc9e6b705295de39d2bb06c1cda31b9001303c2673b275e7 not found: ID does not exist" containerID="686e1e66eb1da064bc9e6b705295de39d2bb06c1cda31b9001303c2673b275e7" Nov 25 14:58:12 crc kubenswrapper[4806]: I1125 14:58:12.965746 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"686e1e66eb1da064bc9e6b705295de39d2bb06c1cda31b9001303c2673b275e7"} err="failed to get container status \"686e1e66eb1da064bc9e6b705295de39d2bb06c1cda31b9001303c2673b275e7\": rpc error: code = NotFound desc = could not find container \"686e1e66eb1da064bc9e6b705295de39d2bb06c1cda31b9001303c2673b275e7\": container with ID starting with 686e1e66eb1da064bc9e6b705295de39d2bb06c1cda31b9001303c2673b275e7 not found: ID does not exist" Nov 25 14:58:12 crc kubenswrapper[4806]: I1125 14:58:12.965779 4806 scope.go:117] "RemoveContainer" containerID="2c91ee9ade055cac14a22e14971e14bbd8fa2ec864633f51c670b89a1b9a9220" Nov 25 14:58:12 crc kubenswrapper[4806]: E1125 14:58:12.966175 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c91ee9ade055cac14a22e14971e14bbd8fa2ec864633f51c670b89a1b9a9220\": container with ID starting with 2c91ee9ade055cac14a22e14971e14bbd8fa2ec864633f51c670b89a1b9a9220 not found: ID does not exist" containerID="2c91ee9ade055cac14a22e14971e14bbd8fa2ec864633f51c670b89a1b9a9220" Nov 25 14:58:12 crc kubenswrapper[4806]: I1125 14:58:12.966204 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c91ee9ade055cac14a22e14971e14bbd8fa2ec864633f51c670b89a1b9a9220"} err="failed to get container status \"2c91ee9ade055cac14a22e14971e14bbd8fa2ec864633f51c670b89a1b9a9220\": rpc error: code = NotFound desc = 
could not find container \"2c91ee9ade055cac14a22e14971e14bbd8fa2ec864633f51c670b89a1b9a9220\": container with ID starting with 2c91ee9ade055cac14a22e14971e14bbd8fa2ec864633f51c670b89a1b9a9220 not found: ID does not exist"
Nov 25 14:58:12 crc kubenswrapper[4806]: I1125 14:58:12.966221 4806 scope.go:117] "RemoveContainer" containerID="ddedbb4922eef5deee7bcd91c4f05063dee053c84379c1a0c6349a168ef658ae"
Nov 25 14:58:12 crc kubenswrapper[4806]: E1125 14:58:12.966663 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddedbb4922eef5deee7bcd91c4f05063dee053c84379c1a0c6349a168ef658ae\": container with ID starting with ddedbb4922eef5deee7bcd91c4f05063dee053c84379c1a0c6349a168ef658ae not found: ID does not exist" containerID="ddedbb4922eef5deee7bcd91c4f05063dee053c84379c1a0c6349a168ef658ae"
Nov 25 14:58:12 crc kubenswrapper[4806]: I1125 14:58:12.966723 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddedbb4922eef5deee7bcd91c4f05063dee053c84379c1a0c6349a168ef658ae"} err="failed to get container status \"ddedbb4922eef5deee7bcd91c4f05063dee053c84379c1a0c6349a168ef658ae\": rpc error: code = NotFound desc = could not find container \"ddedbb4922eef5deee7bcd91c4f05063dee053c84379c1a0c6349a168ef658ae\": container with ID starting with ddedbb4922eef5deee7bcd91c4f05063dee053c84379c1a0c6349a168ef658ae not found: ID does not exist"
Nov 25 14:58:13 crc kubenswrapper[4806]: I1125 14:58:13.038150 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtfsn\" (UniqueName: \"kubernetes.io/projected/7f706e15-3a27-484b-a558-c04a6897571b-kube-api-access-wtfsn\") pod \"7f706e15-3a27-484b-a558-c04a6897571b\" (UID: \"7f706e15-3a27-484b-a558-c04a6897571b\") "
Nov 25 14:58:13 crc kubenswrapper[4806]: I1125 14:58:13.038275 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f706e15-3a27-484b-a558-c04a6897571b-catalog-content\") pod \"7f706e15-3a27-484b-a558-c04a6897571b\" (UID: \"7f706e15-3a27-484b-a558-c04a6897571b\") "
Nov 25 14:58:13 crc kubenswrapper[4806]: I1125 14:58:13.038376 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f706e15-3a27-484b-a558-c04a6897571b-utilities\") pod \"7f706e15-3a27-484b-a558-c04a6897571b\" (UID: \"7f706e15-3a27-484b-a558-c04a6897571b\") "
Nov 25 14:58:13 crc kubenswrapper[4806]: I1125 14:58:13.042471 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f706e15-3a27-484b-a558-c04a6897571b-utilities" (OuterVolumeSpecName: "utilities") pod "7f706e15-3a27-484b-a558-c04a6897571b" (UID: "7f706e15-3a27-484b-a558-c04a6897571b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 14:58:13 crc kubenswrapper[4806]: I1125 14:58:13.051867 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f706e15-3a27-484b-a558-c04a6897571b-kube-api-access-wtfsn" (OuterVolumeSpecName: "kube-api-access-wtfsn") pod "7f706e15-3a27-484b-a558-c04a6897571b" (UID: "7f706e15-3a27-484b-a558-c04a6897571b"). InnerVolumeSpecName "kube-api-access-wtfsn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 14:58:13 crc kubenswrapper[4806]: I1125 14:58:13.068544 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f706e15-3a27-484b-a558-c04a6897571b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7f706e15-3a27-484b-a558-c04a6897571b" (UID: "7f706e15-3a27-484b-a558-c04a6897571b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 14:58:13 crc kubenswrapper[4806]: I1125 14:58:13.139987 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f706e15-3a27-484b-a558-c04a6897571b-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 14:58:13 crc kubenswrapper[4806]: I1125 14:58:13.140030 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f706e15-3a27-484b-a558-c04a6897571b-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 14:58:13 crc kubenswrapper[4806]: I1125 14:58:13.140042 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtfsn\" (UniqueName: \"kubernetes.io/projected/7f706e15-3a27-484b-a558-c04a6897571b-kube-api-access-wtfsn\") on node \"crc\" DevicePath \"\""
Nov 25 14:58:13 crc kubenswrapper[4806]: I1125 14:58:13.272162 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qb75t"]
Nov 25 14:58:13 crc kubenswrapper[4806]: I1125 14:58:13.278762 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qb75t"]
Nov 25 14:58:13 crc kubenswrapper[4806]: I1125 14:58:13.911779 4806 generic.go:334] "Generic (PLEG): container finished" podID="6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e" containerID="9bfe52571bb7dcef99eb4d1d1673024d91dd8c898b5e8704517009fb6af13339" exitCode=0
Nov 25 14:58:13 crc kubenswrapper[4806]: I1125 14:58:13.911859 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kfmmb" event={"ID":"6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e","Type":"ContainerDied","Data":"9bfe52571bb7dcef99eb4d1d1673024d91dd8c898b5e8704517009fb6af13339"}
Nov 25 14:58:13 crc kubenswrapper[4806]: I1125 14:58:13.912425 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kfmmb" event={"ID":"6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e","Type":"ContainerDied","Data":"f876e8204f13c003190c35cbe0b9fe3bdc2d93db3ca91de5067a7dbe720f0b72"}
Nov 25 14:58:13 crc kubenswrapper[4806]: I1125 14:58:13.912449 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f876e8204f13c003190c35cbe0b9fe3bdc2d93db3ca91de5067a7dbe720f0b72"
Nov 25 14:58:13 crc kubenswrapper[4806]: I1125 14:58:13.916118 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kfmmb"
Nov 25 14:58:14 crc kubenswrapper[4806]: I1125 14:58:14.051806 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e-utilities\") pod \"6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e\" (UID: \"6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e\") "
Nov 25 14:58:14 crc kubenswrapper[4806]: I1125 14:58:14.052014 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e-catalog-content\") pod \"6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e\" (UID: \"6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e\") "
Nov 25 14:58:14 crc kubenswrapper[4806]: I1125 14:58:14.052075 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkrdr\" (UniqueName: \"kubernetes.io/projected/6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e-kube-api-access-pkrdr\") pod \"6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e\" (UID: \"6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e\") "
Nov 25 14:58:14 crc kubenswrapper[4806]: I1125 14:58:14.052626 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e-utilities" (OuterVolumeSpecName: "utilities") pod "6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e" (UID: "6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 14:58:14 crc kubenswrapper[4806]: I1125 14:58:14.064516 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e-kube-api-access-pkrdr" (OuterVolumeSpecName: "kube-api-access-pkrdr") pod "6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e" (UID: "6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e"). InnerVolumeSpecName "kube-api-access-pkrdr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 14:58:14 crc kubenswrapper[4806]: I1125 14:58:14.098994 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f706e15-3a27-484b-a558-c04a6897571b" path="/var/lib/kubelet/pods/7f706e15-3a27-484b-a558-c04a6897571b/volumes"
Nov 25 14:58:14 crc kubenswrapper[4806]: I1125 14:58:14.152332 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e" (UID: "6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 14:58:14 crc kubenswrapper[4806]: I1125 14:58:14.153689 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkrdr\" (UniqueName: \"kubernetes.io/projected/6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e-kube-api-access-pkrdr\") on node \"crc\" DevicePath \"\""
Nov 25 14:58:14 crc kubenswrapper[4806]: I1125 14:58:14.153743 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 14:58:14 crc kubenswrapper[4806]: I1125 14:58:14.153756 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 14:58:14 crc kubenswrapper[4806]: I1125 14:58:14.924948 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kfmmb"
Nov 25 14:58:14 crc kubenswrapper[4806]: I1125 14:58:14.971378 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kfmmb"]
Nov 25 14:58:14 crc kubenswrapper[4806]: I1125 14:58:14.977267 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kfmmb"]
Nov 25 14:58:16 crc kubenswrapper[4806]: I1125 14:58:16.096890 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e" path="/var/lib/kubelet/pods/6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e/volumes"
Nov 25 14:59:18 crc kubenswrapper[4806]: I1125 14:59:18.935051 4806 patch_prober.go:28] interesting pod/machine-config-daemon-kclf8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 14:59:18 crc kubenswrapper[4806]: I1125 14:59:18.936061 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 14:59:37 crc kubenswrapper[4806]: I1125 14:59:37.548897 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sxhr5"]
Nov 25 14:59:37 crc kubenswrapper[4806]: I1125 14:59:37.551592 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sxhr5" podUID="87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27" containerName="registry-server" containerID="cri-o://77d036085e283198c939d4e1d025bccbd7b0c12c48b922c84f168d7f2d61e1de" gracePeriod=30
Nov 25 14:59:37 crc kubenswrapper[4806]: I1125 14:59:37.562823 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g5jl6"]
Nov 25 14:59:37 crc kubenswrapper[4806]: I1125 14:59:37.563178 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-g5jl6" podUID="a8eb172a-99cc-46c1-9bd2-827dcb3da2c3" containerName="registry-server" containerID="cri-o://c12cf7034551cf8382909516ce45a3b8e604dbcf8d1c539fe10d06ba0439ab29" gracePeriod=30
Nov 25 14:59:37 crc kubenswrapper[4806]: I1125 14:59:37.572073 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gm728"]
Nov 25 14:59:37 crc kubenswrapper[4806]: I1125 14:59:37.572936 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-gm728" podUID="c14a961b-4eb5-4a10-abe7-bdd5ddff30bc" containerName="marketplace-operator" containerID="cri-o://a030d09224de7e9aaed2a591502fd2985ae1deb018a66db0460128b7bf2fc34e" gracePeriod=30
Nov 25 14:59:37 crc kubenswrapper[4806]: I1125 14:59:37.582560 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7jdkl"]
Nov 25 14:59:37 crc kubenswrapper[4806]: I1125 14:59:37.583147 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7jdkl" podUID="28fa29ec-8177-41d4-bd11-9398fd0f2aa3" containerName="registry-server" containerID="cri-o://c50b07e3889fbff01e0ba75fa738fdb92e06c935083e405c0fcd313d5bcaf846" gracePeriod=30
Nov 25 14:59:37 crc kubenswrapper[4806]: I1125 14:59:37.592356 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n942l"]
Nov 25 14:59:37 crc kubenswrapper[4806]: I1125 14:59:37.592746 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-n942l" podUID="24692166-ec81-42ad-9887-f07eb242a4bc" containerName="registry-server" containerID="cri-o://d85839b9e7c34911bee5d36185bcd325f885bc87a631972a31d0975077550ff0" gracePeriod=30
Nov 25 14:59:37 crc kubenswrapper[4806]: I1125 14:59:37.602779 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rqc2s"]
Nov 25 14:59:37 crc kubenswrapper[4806]: E1125 14:59:37.603310 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c38c71a-804c-42db-a65a-70b5fbe67b87" containerName="extract-content"
Nov 25 14:59:37 crc kubenswrapper[4806]: I1125 14:59:37.603350 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c38c71a-804c-42db-a65a-70b5fbe67b87" containerName="extract-content"
Nov 25 14:59:37 crc kubenswrapper[4806]: E1125 14:59:37.603362 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a0fb925-8b3c-493a-9f05-a35a9f7be868" containerName="pruner"
Nov 25 14:59:37 crc kubenswrapper[4806]: I1125 14:59:37.603370 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a0fb925-8b3c-493a-9f05-a35a9f7be868" containerName="pruner"
Nov 25 14:59:37 crc kubenswrapper[4806]: E1125 14:59:37.603383 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f706e15-3a27-484b-a558-c04a6897571b" containerName="registry-server"
Nov 25 14:59:37 crc kubenswrapper[4806]: I1125 14:59:37.603390 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f706e15-3a27-484b-a558-c04a6897571b" containerName="registry-server"
Nov 25 14:59:37 crc kubenswrapper[4806]: E1125 14:59:37.603403 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b92c54b-a219-4ef0-998a-e5a2bac20e0b" containerName="extract-content"
Nov 25 14:59:37 crc kubenswrapper[4806]: I1125 14:59:37.603415 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b92c54b-a219-4ef0-998a-e5a2bac20e0b" containerName="extract-content"
Nov 25 14:59:37 crc kubenswrapper[4806]: E1125 14:59:37.603439 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f706e15-3a27-484b-a558-c04a6897571b" containerName="extract-utilities"
Nov 25 14:59:37 crc kubenswrapper[4806]: I1125 14:59:37.603448 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f706e15-3a27-484b-a558-c04a6897571b" containerName="extract-utilities"
Nov 25 14:59:37 crc kubenswrapper[4806]: E1125 14:59:37.603459 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b92c54b-a219-4ef0-998a-e5a2bac20e0b" containerName="registry-server"
Nov 25 14:59:37 crc kubenswrapper[4806]: I1125 14:59:37.603466 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b92c54b-a219-4ef0-998a-e5a2bac20e0b" containerName="registry-server"
Nov 25 14:59:37 crc kubenswrapper[4806]: E1125 14:59:37.603475 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5d1ff40-c1d0-4be1-95ec-7da15553481f" containerName="pruner"
Nov 25 14:59:37 crc kubenswrapper[4806]: I1125 14:59:37.603485 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5d1ff40-c1d0-4be1-95ec-7da15553481f" containerName="pruner"
Nov 25 14:59:37 crc kubenswrapper[4806]: E1125 14:59:37.603493 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b92c54b-a219-4ef0-998a-e5a2bac20e0b" containerName="extract-utilities"
Nov 25 14:59:37 crc kubenswrapper[4806]: I1125 14:59:37.603501 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b92c54b-a219-4ef0-998a-e5a2bac20e0b" containerName="extract-utilities"
Nov 25 14:59:37 crc kubenswrapper[4806]: E1125 14:59:37.603513 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e" containerName="extract-content"
Nov 25 14:59:37 crc kubenswrapper[4806]: I1125 14:59:37.603522 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e" containerName="extract-content"
Nov 25 14:59:37 crc kubenswrapper[4806]: E1125 14:59:37.603542 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e" containerName="registry-server"
Nov 25 14:59:37 crc kubenswrapper[4806]: I1125 14:59:37.603549 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e" containerName="registry-server"
Nov 25 14:59:37 crc kubenswrapper[4806]: E1125 14:59:37.603558 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e" containerName="extract-utilities"
Nov 25 14:59:37 crc kubenswrapper[4806]: I1125 14:59:37.603565 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e" containerName="extract-utilities"
Nov 25 14:59:37 crc kubenswrapper[4806]: E1125 14:59:37.603575 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f706e15-3a27-484b-a558-c04a6897571b" containerName="extract-content"
Nov 25 14:59:37 crc kubenswrapper[4806]: I1125 14:59:37.603582 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f706e15-3a27-484b-a558-c04a6897571b" containerName="extract-content"
Nov 25 14:59:37 crc kubenswrapper[4806]: E1125 14:59:37.603600 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c38c71a-804c-42db-a65a-70b5fbe67b87" containerName="registry-server"
Nov 25 14:59:37 crc kubenswrapper[4806]: I1125 14:59:37.603608 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c38c71a-804c-42db-a65a-70b5fbe67b87" containerName="registry-server"
Nov 25 14:59:37 crc kubenswrapper[4806]: E1125 14:59:37.603622 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c38c71a-804c-42db-a65a-70b5fbe67b87" containerName="extract-utilities"
Nov 25 14:59:37 crc kubenswrapper[4806]: I1125 14:59:37.603633 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c38c71a-804c-42db-a65a-70b5fbe67b87" containerName="extract-utilities"
Nov 25 14:59:37 crc kubenswrapper[4806]: I1125 14:59:37.603772 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f706e15-3a27-484b-a558-c04a6897571b" containerName="registry-server"
Nov 25 14:59:37 crc kubenswrapper[4806]: I1125 14:59:37.603790 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a0fb925-8b3c-493a-9f05-a35a9f7be868" containerName="pruner"
Nov 25 14:59:37 crc kubenswrapper[4806]: I1125 14:59:37.603799 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a4ea0eb-5662-4e5b-a20b-7528dcbe5c7e" containerName="registry-server"
Nov 25 14:59:37 crc kubenswrapper[4806]: I1125 14:59:37.603808 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5d1ff40-c1d0-4be1-95ec-7da15553481f" containerName="pruner"
Nov 25 14:59:37 crc kubenswrapper[4806]: I1125 14:59:37.603823 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c38c71a-804c-42db-a65a-70b5fbe67b87" containerName="registry-server"
Nov 25 14:59:37 crc kubenswrapper[4806]: I1125 14:59:37.603831 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b92c54b-a219-4ef0-998a-e5a2bac20e0b" containerName="registry-server"
Nov 25 14:59:37 crc kubenswrapper[4806]: I1125 14:59:37.606150 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rqc2s"
Nov 25 14:59:37 crc kubenswrapper[4806]: I1125 14:59:37.633980 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rqc2s"]
Nov 25 14:59:37 crc kubenswrapper[4806]: I1125 14:59:37.727621 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zm89\" (UniqueName: \"kubernetes.io/projected/257fb937-19f0-48d9-8ea3-7897f5405a87-kube-api-access-8zm89\") pod \"marketplace-operator-79b997595-rqc2s\" (UID: \"257fb937-19f0-48d9-8ea3-7897f5405a87\") " pod="openshift-marketplace/marketplace-operator-79b997595-rqc2s"
Nov 25 14:59:37 crc kubenswrapper[4806]: I1125 14:59:37.728032 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/257fb937-19f0-48d9-8ea3-7897f5405a87-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rqc2s\" (UID: \"257fb937-19f0-48d9-8ea3-7897f5405a87\") " pod="openshift-marketplace/marketplace-operator-79b997595-rqc2s"
Nov 25 14:59:37 crc kubenswrapper[4806]: I1125 14:59:37.728094 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/257fb937-19f0-48d9-8ea3-7897f5405a87-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rqc2s\" (UID: \"257fb937-19f0-48d9-8ea3-7897f5405a87\") " pod="openshift-marketplace/marketplace-operator-79b997595-rqc2s"
Nov 25 14:59:37 crc kubenswrapper[4806]: I1125 14:59:37.829612 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/257fb937-19f0-48d9-8ea3-7897f5405a87-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rqc2s\" (UID: \"257fb937-19f0-48d9-8ea3-7897f5405a87\") " pod="openshift-marketplace/marketplace-operator-79b997595-rqc2s"
Nov 25 14:59:37 crc kubenswrapper[4806]: I1125 14:59:37.829733 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zm89\" (UniqueName: \"kubernetes.io/projected/257fb937-19f0-48d9-8ea3-7897f5405a87-kube-api-access-8zm89\") pod \"marketplace-operator-79b997595-rqc2s\" (UID: \"257fb937-19f0-48d9-8ea3-7897f5405a87\") " pod="openshift-marketplace/marketplace-operator-79b997595-rqc2s"
Nov 25 14:59:37 crc kubenswrapper[4806]: I1125 14:59:37.829779 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/257fb937-19f0-48d9-8ea3-7897f5405a87-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rqc2s\" (UID: \"257fb937-19f0-48d9-8ea3-7897f5405a87\") " pod="openshift-marketplace/marketplace-operator-79b997595-rqc2s"
Nov 25 14:59:37 crc kubenswrapper[4806]: I1125 14:59:37.831517 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/257fb937-19f0-48d9-8ea3-7897f5405a87-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rqc2s\" (UID: \"257fb937-19f0-48d9-8ea3-7897f5405a87\") " pod="openshift-marketplace/marketplace-operator-79b997595-rqc2s"
Nov 25 14:59:37 crc kubenswrapper[4806]: I1125 14:59:37.838809 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/257fb937-19f0-48d9-8ea3-7897f5405a87-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rqc2s\" (UID: \"257fb937-19f0-48d9-8ea3-7897f5405a87\") " pod="openshift-marketplace/marketplace-operator-79b997595-rqc2s"
Nov 25 14:59:37 crc kubenswrapper[4806]: I1125 14:59:37.850370 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zm89\" (UniqueName: \"kubernetes.io/projected/257fb937-19f0-48d9-8ea3-7897f5405a87-kube-api-access-8zm89\") pod \"marketplace-operator-79b997595-rqc2s\" (UID: \"257fb937-19f0-48d9-8ea3-7897f5405a87\") " pod="openshift-marketplace/marketplace-operator-79b997595-rqc2s"
Nov 25 14:59:37 crc kubenswrapper[4806]: I1125 14:59:37.929945 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rqc2s"
Nov 25 14:59:37 crc kubenswrapper[4806]: I1125 14:59:37.986497 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sxhr5"
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.001489 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g5jl6"
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.006691 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7jdkl"
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.031519 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gm728"
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.095571 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n942l"
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.134264 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28fa29ec-8177-41d4-bd11-9398fd0f2aa3-catalog-content\") pod \"28fa29ec-8177-41d4-bd11-9398fd0f2aa3\" (UID: \"28fa29ec-8177-41d4-bd11-9398fd0f2aa3\") "
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.134449 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6f47j\" (UniqueName: \"kubernetes.io/projected/28fa29ec-8177-41d4-bd11-9398fd0f2aa3-kube-api-access-6f47j\") pod \"28fa29ec-8177-41d4-bd11-9398fd0f2aa3\" (UID: \"28fa29ec-8177-41d4-bd11-9398fd0f2aa3\") "
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.134643 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27-catalog-content\") pod \"87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27\" (UID: \"87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27\") "
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.135631 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8eb172a-99cc-46c1-9bd2-827dcb3da2c3-catalog-content\") pod \"a8eb172a-99cc-46c1-9bd2-827dcb3da2c3\" (UID: \"a8eb172a-99cc-46c1-9bd2-827dcb3da2c3\") "
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.135730 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thqrn\" (UniqueName: \"kubernetes.io/projected/a8eb172a-99cc-46c1-9bd2-827dcb3da2c3-kube-api-access-thqrn\") pod \"a8eb172a-99cc-46c1-9bd2-827dcb3da2c3\" (UID: \"a8eb172a-99cc-46c1-9bd2-827dcb3da2c3\") "
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.135795 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tr2pl\" (UniqueName: \"kubernetes.io/projected/c14a961b-4eb5-4a10-abe7-bdd5ddff30bc-kube-api-access-tr2pl\") pod \"c14a961b-4eb5-4a10-abe7-bdd5ddff30bc\" (UID: \"c14a961b-4eb5-4a10-abe7-bdd5ddff30bc\") "
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.136095 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24692166-ec81-42ad-9887-f07eb242a4bc-utilities\") pod \"24692166-ec81-42ad-9887-f07eb242a4bc\" (UID: \"24692166-ec81-42ad-9887-f07eb242a4bc\") "
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.136130 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28fa29ec-8177-41d4-bd11-9398fd0f2aa3-utilities\") pod \"28fa29ec-8177-41d4-bd11-9398fd0f2aa3\" (UID: \"28fa29ec-8177-41d4-bd11-9398fd0f2aa3\") "
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.136200 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27-utilities\") pod \"87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27\" (UID: \"87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27\") "
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.136273 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-257pp\" (UniqueName: \"kubernetes.io/projected/87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27-kube-api-access-257pp\") pod \"87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27\" (UID: \"87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27\") "
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.136376 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c14a961b-4eb5-4a10-abe7-bdd5ddff30bc-marketplace-operator-metrics\") pod \"c14a961b-4eb5-4a10-abe7-bdd5ddff30bc\" (UID: \"c14a961b-4eb5-4a10-abe7-bdd5ddff30bc\") "
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.136448 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csg5g\" (UniqueName: \"kubernetes.io/projected/24692166-ec81-42ad-9887-f07eb242a4bc-kube-api-access-csg5g\") pod \"24692166-ec81-42ad-9887-f07eb242a4bc\" (UID: \"24692166-ec81-42ad-9887-f07eb242a4bc\") "
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.136511 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24692166-ec81-42ad-9887-f07eb242a4bc-catalog-content\") pod \"24692166-ec81-42ad-9887-f07eb242a4bc\" (UID: \"24692166-ec81-42ad-9887-f07eb242a4bc\") "
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.136541 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8eb172a-99cc-46c1-9bd2-827dcb3da2c3-utilities\") pod \"a8eb172a-99cc-46c1-9bd2-827dcb3da2c3\" (UID: \"a8eb172a-99cc-46c1-9bd2-827dcb3da2c3\") "
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.136609 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c14a961b-4eb5-4a10-abe7-bdd5ddff30bc-marketplace-trusted-ca\") pod \"c14a961b-4eb5-4a10-abe7-bdd5ddff30bc\" (UID: \"c14a961b-4eb5-4a10-abe7-bdd5ddff30bc\") "
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.137585 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24692166-ec81-42ad-9887-f07eb242a4bc-utilities" (OuterVolumeSpecName: "utilities") pod "24692166-ec81-42ad-9887-f07eb242a4bc" (UID: "24692166-ec81-42ad-9887-f07eb242a4bc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.137711 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27-utilities" (OuterVolumeSpecName: "utilities") pod "87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27" (UID: "87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.137942 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28fa29ec-8177-41d4-bd11-9398fd0f2aa3-utilities" (OuterVolumeSpecName: "utilities") pod "28fa29ec-8177-41d4-bd11-9398fd0f2aa3" (UID: "28fa29ec-8177-41d4-bd11-9398fd0f2aa3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.138496 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c14a961b-4eb5-4a10-abe7-bdd5ddff30bc-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "c14a961b-4eb5-4a10-abe7-bdd5ddff30bc" (UID: "c14a961b-4eb5-4a10-abe7-bdd5ddff30bc"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.138787 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8eb172a-99cc-46c1-9bd2-827dcb3da2c3-utilities" (OuterVolumeSpecName: "utilities") pod "a8eb172a-99cc-46c1-9bd2-827dcb3da2c3" (UID: "a8eb172a-99cc-46c1-9bd2-827dcb3da2c3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.140506 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24692166-ec81-42ad-9887-f07eb242a4bc-kube-api-access-csg5g" (OuterVolumeSpecName: "kube-api-access-csg5g") pod "24692166-ec81-42ad-9887-f07eb242a4bc" (UID: "24692166-ec81-42ad-9887-f07eb242a4bc"). InnerVolumeSpecName "kube-api-access-csg5g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.143077 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8eb172a-99cc-46c1-9bd2-827dcb3da2c3-kube-api-access-thqrn" (OuterVolumeSpecName: "kube-api-access-thqrn") pod "a8eb172a-99cc-46c1-9bd2-827dcb3da2c3" (UID: "a8eb172a-99cc-46c1-9bd2-827dcb3da2c3"). InnerVolumeSpecName "kube-api-access-thqrn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.164986 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27-kube-api-access-257pp" (OuterVolumeSpecName: "kube-api-access-257pp") pod "87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27" (UID: "87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27"). InnerVolumeSpecName "kube-api-access-257pp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.165274 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c14a961b-4eb5-4a10-abe7-bdd5ddff30bc-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "c14a961b-4eb5-4a10-abe7-bdd5ddff30bc" (UID: "c14a961b-4eb5-4a10-abe7-bdd5ddff30bc"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.165270 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c14a961b-4eb5-4a10-abe7-bdd5ddff30bc-kube-api-access-tr2pl" (OuterVolumeSpecName: "kube-api-access-tr2pl") pod "c14a961b-4eb5-4a10-abe7-bdd5ddff30bc" (UID: "c14a961b-4eb5-4a10-abe7-bdd5ddff30bc"). InnerVolumeSpecName "kube-api-access-tr2pl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.172166 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28fa29ec-8177-41d4-bd11-9398fd0f2aa3-kube-api-access-6f47j" (OuterVolumeSpecName: "kube-api-access-6f47j") pod "28fa29ec-8177-41d4-bd11-9398fd0f2aa3" (UID: "28fa29ec-8177-41d4-bd11-9398fd0f2aa3"). InnerVolumeSpecName "kube-api-access-6f47j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.180936 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28fa29ec-8177-41d4-bd11-9398fd0f2aa3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "28fa29ec-8177-41d4-bd11-9398fd0f2aa3" (UID: "28fa29ec-8177-41d4-bd11-9398fd0f2aa3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.235428 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27" (UID: "87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.237737 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.237773 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-257pp\" (UniqueName: \"kubernetes.io/projected/87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27-kube-api-access-257pp\") on node \"crc\" DevicePath \"\""
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.237789 4806 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c14a961b-4eb5-4a10-abe7-bdd5ddff30bc-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.237800 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csg5g\" (UniqueName: \"kubernetes.io/projected/24692166-ec81-42ad-9887-f07eb242a4bc-kube-api-access-csg5g\") on node \"crc\" DevicePath \"\""
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.237811 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8eb172a-99cc-46c1-9bd2-827dcb3da2c3-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.237821 4806 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c14a961b-4eb5-4a10-abe7-bdd5ddff30bc-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.237833 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28fa29ec-8177-41d4-bd11-9398fd0f2aa3-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.237845 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6f47j\" (UniqueName: \"kubernetes.io/projected/28fa29ec-8177-41d4-bd11-9398fd0f2aa3-kube-api-access-6f47j\") on node \"crc\" DevicePath \"\""
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.237852 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.237862 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thqrn\" (UniqueName: \"kubernetes.io/projected/a8eb172a-99cc-46c1-9bd2-827dcb3da2c3-kube-api-access-thqrn\") on node \"crc\" DevicePath \"\""
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.237871 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tr2pl\" (UniqueName: \"kubernetes.io/projected/c14a961b-4eb5-4a10-abe7-bdd5ddff30bc-kube-api-access-tr2pl\") on node \"crc\" DevicePath \"\""
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.237880 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24692166-ec81-42ad-9887-f07eb242a4bc-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.237888 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28fa29ec-8177-41d4-bd11-9398fd0f2aa3-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.241411 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8eb172a-99cc-46c1-9bd2-827dcb3da2c3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8eb172a-99cc-46c1-9bd2-827dcb3da2c3" (UID: "a8eb172a-99cc-46c1-9bd2-827dcb3da2c3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.257679 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24692166-ec81-42ad-9887-f07eb242a4bc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "24692166-ec81-42ad-9887-f07eb242a4bc" (UID: "24692166-ec81-42ad-9887-f07eb242a4bc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.339839 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24692166-ec81-42ad-9887-f07eb242a4bc-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.340645 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8eb172a-99cc-46c1-9bd2-827dcb3da2c3-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.419437 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-bn2sz"]
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.474949 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rqc2s"]
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.593041 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rqc2s" event={"ID":"257fb937-19f0-48d9-8ea3-7897f5405a87","Type":"ContainerStarted","Data":"98130beefd2c853b47621bbcc62bdec8967a4e4e366044ed8ac32b1e0a4cecbf"}
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.594752 4806 generic.go:334] "Generic (PLEG): container finished" podID="c14a961b-4eb5-4a10-abe7-bdd5ddff30bc" containerID="a030d09224de7e9aaed2a591502fd2985ae1deb018a66db0460128b7bf2fc34e" exitCode=0
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.594856 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gm728" event={"ID":"c14a961b-4eb5-4a10-abe7-bdd5ddff30bc","Type":"ContainerDied","Data":"a030d09224de7e9aaed2a591502fd2985ae1deb018a66db0460128b7bf2fc34e"}
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.594942 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gm728" event={"ID":"c14a961b-4eb5-4a10-abe7-bdd5ddff30bc","Type":"ContainerDied","Data":"9d3f05fce218e60204e82981da82c6aad5de6ff37630480238a4caf975fafc5a"}
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.594972 4806 scope.go:117] "RemoveContainer" containerID="a030d09224de7e9aaed2a591502fd2985ae1deb018a66db0460128b7bf2fc34e"
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.594881 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gm728"
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.598292 4806 generic.go:334] "Generic (PLEG): container finished" podID="87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27" containerID="77d036085e283198c939d4e1d025bccbd7b0c12c48b922c84f168d7f2d61e1de" exitCode=0
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.598565 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sxhr5" event={"ID":"87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27","Type":"ContainerDied","Data":"77d036085e283198c939d4e1d025bccbd7b0c12c48b922c84f168d7f2d61e1de"}
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.598697 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sxhr5" event={"ID":"87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27","Type":"ContainerDied","Data":"05376d784e0fe097057cd7d1950158740a1053ff72b88cf11401c13960a2f395"}
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.598867 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sxhr5"
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.621493 4806 generic.go:334] "Generic (PLEG): container finished" podID="a8eb172a-99cc-46c1-9bd2-827dcb3da2c3" containerID="c12cf7034551cf8382909516ce45a3b8e604dbcf8d1c539fe10d06ba0439ab29" exitCode=0
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.621710 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g5jl6" event={"ID":"a8eb172a-99cc-46c1-9bd2-827dcb3da2c3","Type":"ContainerDied","Data":"c12cf7034551cf8382909516ce45a3b8e604dbcf8d1c539fe10d06ba0439ab29"}
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.621777 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g5jl6" event={"ID":"a8eb172a-99cc-46c1-9bd2-827dcb3da2c3","Type":"ContainerDied","Data":"28961cc4e4b1043cabfdff98a3a51ad04c973e03f000dc31513af3ea628fd506"}
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.622036 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g5jl6"
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.627135 4806 scope.go:117] "RemoveContainer" containerID="a030d09224de7e9aaed2a591502fd2985ae1deb018a66db0460128b7bf2fc34e"
Nov 25 14:59:38 crc kubenswrapper[4806]: E1125 14:59:38.627912 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a030d09224de7e9aaed2a591502fd2985ae1deb018a66db0460128b7bf2fc34e\": container with ID starting with a030d09224de7e9aaed2a591502fd2985ae1deb018a66db0460128b7bf2fc34e not found: ID does not exist" containerID="a030d09224de7e9aaed2a591502fd2985ae1deb018a66db0460128b7bf2fc34e"
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.627955 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a030d09224de7e9aaed2a591502fd2985ae1deb018a66db0460128b7bf2fc34e"} err="failed to get container status \"a030d09224de7e9aaed2a591502fd2985ae1deb018a66db0460128b7bf2fc34e\": rpc error: code = NotFound desc = could not find container \"a030d09224de7e9aaed2a591502fd2985ae1deb018a66db0460128b7bf2fc34e\": container with ID starting with a030d09224de7e9aaed2a591502fd2985ae1deb018a66db0460128b7bf2fc34e not found: ID does not exist"
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.627987 4806 scope.go:117] "RemoveContainer" containerID="77d036085e283198c939d4e1d025bccbd7b0c12c48b922c84f168d7f2d61e1de"
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.629396 4806 generic.go:334] "Generic (PLEG): container finished" podID="24692166-ec81-42ad-9887-f07eb242a4bc" containerID="d85839b9e7c34911bee5d36185bcd325f885bc87a631972a31d0975077550ff0" exitCode=0
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.629486 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n942l" event={"ID":"24692166-ec81-42ad-9887-f07eb242a4bc","Type":"ContainerDied","Data":"d85839b9e7c34911bee5d36185bcd325f885bc87a631972a31d0975077550ff0"}
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.629528 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n942l" event={"ID":"24692166-ec81-42ad-9887-f07eb242a4bc","Type":"ContainerDied","Data":"493af9e254ca40e661cf0720bcb4bb7f15d6e418895a360d4aba1a72951d1186"}
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.629491 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n942l"
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.636542 4806 generic.go:334] "Generic (PLEG): container finished" podID="28fa29ec-8177-41d4-bd11-9398fd0f2aa3" containerID="c50b07e3889fbff01e0ba75fa738fdb92e06c935083e405c0fcd313d5bcaf846" exitCode=0
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.636617 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7jdkl" event={"ID":"28fa29ec-8177-41d4-bd11-9398fd0f2aa3","Type":"ContainerDied","Data":"c50b07e3889fbff01e0ba75fa738fdb92e06c935083e405c0fcd313d5bcaf846"}
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.636656 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7jdkl" event={"ID":"28fa29ec-8177-41d4-bd11-9398fd0f2aa3","Type":"ContainerDied","Data":"e0b1ee3d49239619203271499da2179148ad1925ed654ea149f6affa68e88fbd"}
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.636799 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7jdkl"
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.669453 4806 scope.go:117] "RemoveContainer" containerID="c0899abd01e677663d4e55041692005926e8497d186330f2c5bb99bd15fe56ab"
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.698814 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sxhr5"]
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.705873 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sxhr5"]
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.740217 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gm728"]
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.753447 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gm728"]
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.762018 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n942l"]
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.763659 4806 scope.go:117] "RemoveContainer" containerID="560f68c4f7fcc1317956cf1927f99da275a4c7bb1c6e28a4b01325f756fcdbfc"
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.774308 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-n942l"]
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.783328 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g5jl6"]
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.788015 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-g5jl6"]
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.789796 4806 scope.go:117] "RemoveContainer" containerID="77d036085e283198c939d4e1d025bccbd7b0c12c48b922c84f168d7f2d61e1de"
Nov 25 14:59:38 crc kubenswrapper[4806]: E1125 14:59:38.792032 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77d036085e283198c939d4e1d025bccbd7b0c12c48b922c84f168d7f2d61e1de\": container with ID starting with 77d036085e283198c939d4e1d025bccbd7b0c12c48b922c84f168d7f2d61e1de not found: ID does not exist" containerID="77d036085e283198c939d4e1d025bccbd7b0c12c48b922c84f168d7f2d61e1de"
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.792115 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77d036085e283198c939d4e1d025bccbd7b0c12c48b922c84f168d7f2d61e1de"} err="failed to get container status \"77d036085e283198c939d4e1d025bccbd7b0c12c48b922c84f168d7f2d61e1de\": rpc error: code = NotFound desc = could not find container \"77d036085e283198c939d4e1d025bccbd7b0c12c48b922c84f168d7f2d61e1de\": container with ID starting with 77d036085e283198c939d4e1d025bccbd7b0c12c48b922c84f168d7f2d61e1de not found: ID does not exist"
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.792168 4806 scope.go:117] "RemoveContainer" containerID="c0899abd01e677663d4e55041692005926e8497d186330f2c5bb99bd15fe56ab"
Nov 25 14:59:38 crc kubenswrapper[4806]: E1125 14:59:38.797773 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0899abd01e677663d4e55041692005926e8497d186330f2c5bb99bd15fe56ab\": container with ID starting with c0899abd01e677663d4e55041692005926e8497d186330f2c5bb99bd15fe56ab not found: ID does not exist" containerID="c0899abd01e677663d4e55041692005926e8497d186330f2c5bb99bd15fe56ab"
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.797842 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0899abd01e677663d4e55041692005926e8497d186330f2c5bb99bd15fe56ab"} err="failed to get container status \"c0899abd01e677663d4e55041692005926e8497d186330f2c5bb99bd15fe56ab\": rpc error: code = NotFound desc = could not find container \"c0899abd01e677663d4e55041692005926e8497d186330f2c5bb99bd15fe56ab\": container with ID starting with c0899abd01e677663d4e55041692005926e8497d186330f2c5bb99bd15fe56ab not found: ID does not exist"
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.797890 4806 scope.go:117] "RemoveContainer" containerID="560f68c4f7fcc1317956cf1927f99da275a4c7bb1c6e28a4b01325f756fcdbfc"
Nov 25 14:59:38 crc kubenswrapper[4806]: E1125 14:59:38.798308 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"560f68c4f7fcc1317956cf1927f99da275a4c7bb1c6e28a4b01325f756fcdbfc\": container with ID starting with 560f68c4f7fcc1317956cf1927f99da275a4c7bb1c6e28a4b01325f756fcdbfc not found: ID does not exist" containerID="560f68c4f7fcc1317956cf1927f99da275a4c7bb1c6e28a4b01325f756fcdbfc"
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.798359 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"560f68c4f7fcc1317956cf1927f99da275a4c7bb1c6e28a4b01325f756fcdbfc"} err="failed to get container status \"560f68c4f7fcc1317956cf1927f99da275a4c7bb1c6e28a4b01325f756fcdbfc\": rpc error: code = NotFound desc = could not find container \"560f68c4f7fcc1317956cf1927f99da275a4c7bb1c6e28a4b01325f756fcdbfc\": container with ID starting with 560f68c4f7fcc1317956cf1927f99da275a4c7bb1c6e28a4b01325f756fcdbfc not found: ID does not exist"
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.798380 4806 scope.go:117] "RemoveContainer" containerID="c12cf7034551cf8382909516ce45a3b8e604dbcf8d1c539fe10d06ba0439ab29"
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.804516 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7jdkl"]
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.808109 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7jdkl"]
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.815798 4806 scope.go:117] "RemoveContainer" containerID="06dd4201a8e1bc98353ab0c2387f7ba05ddaf4a9f1901671c469624309e1fe0f"
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.836362 4806 scope.go:117] "RemoveContainer" containerID="3dad9624be3468a34d67cf6ba51229e75daa856f8609db46e2a23188feb26338"
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.856877 4806 scope.go:117] "RemoveContainer" containerID="c12cf7034551cf8382909516ce45a3b8e604dbcf8d1c539fe10d06ba0439ab29"
Nov 25 14:59:38 crc kubenswrapper[4806]: E1125 14:59:38.857589 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c12cf7034551cf8382909516ce45a3b8e604dbcf8d1c539fe10d06ba0439ab29\": container with ID starting with c12cf7034551cf8382909516ce45a3b8e604dbcf8d1c539fe10d06ba0439ab29 not found: ID does not exist" containerID="c12cf7034551cf8382909516ce45a3b8e604dbcf8d1c539fe10d06ba0439ab29"
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.857638 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c12cf7034551cf8382909516ce45a3b8e604dbcf8d1c539fe10d06ba0439ab29"} err="failed to get container status \"c12cf7034551cf8382909516ce45a3b8e604dbcf8d1c539fe10d06ba0439ab29\": rpc error: code = NotFound desc = could not find container \"c12cf7034551cf8382909516ce45a3b8e604dbcf8d1c539fe10d06ba0439ab29\": container with ID starting with c12cf7034551cf8382909516ce45a3b8e604dbcf8d1c539fe10d06ba0439ab29 not found: ID does not exist"
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.857677 4806 scope.go:117] "RemoveContainer" containerID="06dd4201a8e1bc98353ab0c2387f7ba05ddaf4a9f1901671c469624309e1fe0f"
Nov 25 14:59:38 crc kubenswrapper[4806]: E1125 14:59:38.861233 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06dd4201a8e1bc98353ab0c2387f7ba05ddaf4a9f1901671c469624309e1fe0f\": container with ID starting with 06dd4201a8e1bc98353ab0c2387f7ba05ddaf4a9f1901671c469624309e1fe0f not found: ID does not exist" containerID="06dd4201a8e1bc98353ab0c2387f7ba05ddaf4a9f1901671c469624309e1fe0f"
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.861828 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06dd4201a8e1bc98353ab0c2387f7ba05ddaf4a9f1901671c469624309e1fe0f"} err="failed to get container status \"06dd4201a8e1bc98353ab0c2387f7ba05ddaf4a9f1901671c469624309e1fe0f\": rpc error: code = NotFound desc = could not find container \"06dd4201a8e1bc98353ab0c2387f7ba05ddaf4a9f1901671c469624309e1fe0f\": container with ID starting with 06dd4201a8e1bc98353ab0c2387f7ba05ddaf4a9f1901671c469624309e1fe0f not found: ID does not exist"
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.861862 4806 scope.go:117] "RemoveContainer" containerID="3dad9624be3468a34d67cf6ba51229e75daa856f8609db46e2a23188feb26338"
Nov 25 14:59:38 crc kubenswrapper[4806]: E1125 14:59:38.863736 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dad9624be3468a34d67cf6ba51229e75daa856f8609db46e2a23188feb26338\": container with ID starting with 3dad9624be3468a34d67cf6ba51229e75daa856f8609db46e2a23188feb26338 not found: ID does not exist" containerID="3dad9624be3468a34d67cf6ba51229e75daa856f8609db46e2a23188feb26338"
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.863766 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dad9624be3468a34d67cf6ba51229e75daa856f8609db46e2a23188feb26338"} err="failed to get container status \"3dad9624be3468a34d67cf6ba51229e75daa856f8609db46e2a23188feb26338\": rpc error: code = NotFound desc = could not find container \"3dad9624be3468a34d67cf6ba51229e75daa856f8609db46e2a23188feb26338\": container with ID starting with 3dad9624be3468a34d67cf6ba51229e75daa856f8609db46e2a23188feb26338 not found: ID does not exist"
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.863789 4806 scope.go:117] "RemoveContainer" containerID="d85839b9e7c34911bee5d36185bcd325f885bc87a631972a31d0975077550ff0"
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.884223 4806 scope.go:117] "RemoveContainer" containerID="ea6d3ea9d4671ec214fcbf7ddb77048dea72dbd5b7159fbf6b183a75a51af51e"
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.904346 4806 scope.go:117] "RemoveContainer" containerID="c01395d597f8f6098a83debfac21ea5ab750f3bd886fe7156c06b0c5a08879d1"
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.922818 4806 scope.go:117] "RemoveContainer" containerID="d85839b9e7c34911bee5d36185bcd325f885bc87a631972a31d0975077550ff0"
Nov 25 14:59:38 crc kubenswrapper[4806]: E1125 14:59:38.924135 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d85839b9e7c34911bee5d36185bcd325f885bc87a631972a31d0975077550ff0\": container with ID starting with d85839b9e7c34911bee5d36185bcd325f885bc87a631972a31d0975077550ff0 not found: ID does not exist" containerID="d85839b9e7c34911bee5d36185bcd325f885bc87a631972a31d0975077550ff0"
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.924199 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d85839b9e7c34911bee5d36185bcd325f885bc87a631972a31d0975077550ff0"} err="failed to get container status \"d85839b9e7c34911bee5d36185bcd325f885bc87a631972a31d0975077550ff0\": rpc error: code = NotFound desc = could not find container \"d85839b9e7c34911bee5d36185bcd325f885bc87a631972a31d0975077550ff0\": container with ID starting with d85839b9e7c34911bee5d36185bcd325f885bc87a631972a31d0975077550ff0 not found: ID does not exist"
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.924246 4806 scope.go:117] "RemoveContainer" containerID="ea6d3ea9d4671ec214fcbf7ddb77048dea72dbd5b7159fbf6b183a75a51af51e"
Nov 25 14:59:38 crc kubenswrapper[4806]: E1125 14:59:38.924798 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea6d3ea9d4671ec214fcbf7ddb77048dea72dbd5b7159fbf6b183a75a51af51e\": container with ID starting with ea6d3ea9d4671ec214fcbf7ddb77048dea72dbd5b7159fbf6b183a75a51af51e not found: ID does not exist" containerID="ea6d3ea9d4671ec214fcbf7ddb77048dea72dbd5b7159fbf6b183a75a51af51e"
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.924862 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea6d3ea9d4671ec214fcbf7ddb77048dea72dbd5b7159fbf6b183a75a51af51e"} err="failed to get container status \"ea6d3ea9d4671ec214fcbf7ddb77048dea72dbd5b7159fbf6b183a75a51af51e\": rpc error: code = NotFound desc = could not find container \"ea6d3ea9d4671ec214fcbf7ddb77048dea72dbd5b7159fbf6b183a75a51af51e\": container with ID starting with ea6d3ea9d4671ec214fcbf7ddb77048dea72dbd5b7159fbf6b183a75a51af51e not found: ID does not exist"
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.924897 4806 scope.go:117] "RemoveContainer" containerID="c01395d597f8f6098a83debfac21ea5ab750f3bd886fe7156c06b0c5a08879d1"
Nov 25 14:59:38 crc kubenswrapper[4806]: E1125 14:59:38.925587 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c01395d597f8f6098a83debfac21ea5ab750f3bd886fe7156c06b0c5a08879d1\": container with ID starting with c01395d597f8f6098a83debfac21ea5ab750f3bd886fe7156c06b0c5a08879d1 not found: ID does not exist" containerID="c01395d597f8f6098a83debfac21ea5ab750f3bd886fe7156c06b0c5a08879d1"
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.925618 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c01395d597f8f6098a83debfac21ea5ab750f3bd886fe7156c06b0c5a08879d1"} err="failed to get container status \"c01395d597f8f6098a83debfac21ea5ab750f3bd886fe7156c06b0c5a08879d1\": rpc error: code = NotFound desc = could not find container \"c01395d597f8f6098a83debfac21ea5ab750f3bd886fe7156c06b0c5a08879d1\": container with ID starting with c01395d597f8f6098a83debfac21ea5ab750f3bd886fe7156c06b0c5a08879d1 not found: ID does not exist"
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.925635 4806 scope.go:117] "RemoveContainer" containerID="c50b07e3889fbff01e0ba75fa738fdb92e06c935083e405c0fcd313d5bcaf846"
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.942025 4806 scope.go:117] "RemoveContainer" containerID="4f67b2960e182f278fd11e4f99ee5de4c51c5c0609797ec90ebe5790aef77cb8"
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.963537 4806 scope.go:117] "RemoveContainer" containerID="89f2b75a0b5d013e7677635f260429cba076c65ee450c83625ddfa39d9719e5a"
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.981485 4806 scope.go:117] "RemoveContainer" containerID="c50b07e3889fbff01e0ba75fa738fdb92e06c935083e405c0fcd313d5bcaf846"
Nov 25 14:59:38 crc kubenswrapper[4806]: E1125 14:59:38.982972 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c50b07e3889fbff01e0ba75fa738fdb92e06c935083e405c0fcd313d5bcaf846\": container with ID starting with c50b07e3889fbff01e0ba75fa738fdb92e06c935083e405c0fcd313d5bcaf846 not found: ID does not exist" containerID="c50b07e3889fbff01e0ba75fa738fdb92e06c935083e405c0fcd313d5bcaf846"
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.983060 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c50b07e3889fbff01e0ba75fa738fdb92e06c935083e405c0fcd313d5bcaf846"} err="failed to get container status \"c50b07e3889fbff01e0ba75fa738fdb92e06c935083e405c0fcd313d5bcaf846\": rpc error: code = NotFound desc = could not find container \"c50b07e3889fbff01e0ba75fa738fdb92e06c935083e405c0fcd313d5bcaf846\": container with ID starting with c50b07e3889fbff01e0ba75fa738fdb92e06c935083e405c0fcd313d5bcaf846 not found: ID does not exist"
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.983112 4806 scope.go:117] "RemoveContainer" containerID="4f67b2960e182f278fd11e4f99ee5de4c51c5c0609797ec90ebe5790aef77cb8"
Nov 25 14:59:38 crc kubenswrapper[4806]: E1125 14:59:38.984463 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f67b2960e182f278fd11e4f99ee5de4c51c5c0609797ec90ebe5790aef77cb8\": container with ID starting with 4f67b2960e182f278fd11e4f99ee5de4c51c5c0609797ec90ebe5790aef77cb8 not found: ID does not exist" containerID="4f67b2960e182f278fd11e4f99ee5de4c51c5c0609797ec90ebe5790aef77cb8"
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.984546 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f67b2960e182f278fd11e4f99ee5de4c51c5c0609797ec90ebe5790aef77cb8"} err="failed to get container status \"4f67b2960e182f278fd11e4f99ee5de4c51c5c0609797ec90ebe5790aef77cb8\": rpc error: code = NotFound desc = could not find container \"4f67b2960e182f278fd11e4f99ee5de4c51c5c0609797ec90ebe5790aef77cb8\": container with ID starting with 4f67b2960e182f278fd11e4f99ee5de4c51c5c0609797ec90ebe5790aef77cb8 not found: ID does not exist"
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.984601 4806 scope.go:117] "RemoveContainer" containerID="89f2b75a0b5d013e7677635f260429cba076c65ee450c83625ddfa39d9719e5a"
Nov 25 14:59:38 crc kubenswrapper[4806]: E1125 14:59:38.985069 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89f2b75a0b5d013e7677635f260429cba076c65ee450c83625ddfa39d9719e5a\": container with ID starting with 89f2b75a0b5d013e7677635f260429cba076c65ee450c83625ddfa39d9719e5a not found: ID does not exist" containerID="89f2b75a0b5d013e7677635f260429cba076c65ee450c83625ddfa39d9719e5a"
Nov 25 14:59:38 crc kubenswrapper[4806]: I1125 14:59:38.985109 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89f2b75a0b5d013e7677635f260429cba076c65ee450c83625ddfa39d9719e5a"} err="failed to get container status \"89f2b75a0b5d013e7677635f260429cba076c65ee450c83625ddfa39d9719e5a\": rpc error: code = NotFound desc = could not find container \"89f2b75a0b5d013e7677635f260429cba076c65ee450c83625ddfa39d9719e5a\": container with ID starting with 89f2b75a0b5d013e7677635f260429cba076c65ee450c83625ddfa39d9719e5a not found: ID does not exist"
Nov 25 14:59:39 crc kubenswrapper[4806]: I1125 14:59:39.648160 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rqc2s" event={"ID":"257fb937-19f0-48d9-8ea3-7897f5405a87","Type":"ContainerStarted","Data":"fdba136460f66a850fe99f04142921e062bcd803128cd8ed59b952aa5ad7be32"}
Nov 25 14:59:39 crc kubenswrapper[4806]: I1125 14:59:39.651675 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-rqc2s"
Nov 25 14:59:39 crc kubenswrapper[4806]: I1125 14:59:39.655039 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-rqc2s"
Nov 25 14:59:39 crc kubenswrapper[4806]: I1125 14:59:39.675380 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-rqc2s" podStartSLOduration=2.6753444440000003 podStartE2EDuration="2.675344444s" podCreationTimestamp="2025-11-25 14:59:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:59:39.674648816 +0000 UTC m=+412.326791247" watchObservedRunningTime="2025-11-25 14:59:39.675344444 +0000 UTC m=+412.327486855"
Nov 25 14:59:39 crc kubenswrapper[4806]: I1125 14:59:39.772703 4806 kubelet.go:2421] "SyncLoop
ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6mnqv"] Nov 25 14:59:39 crc kubenswrapper[4806]: E1125 14:59:39.773019 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27" containerName="extract-utilities" Nov 25 14:59:39 crc kubenswrapper[4806]: I1125 14:59:39.773036 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27" containerName="extract-utilities" Nov 25 14:59:39 crc kubenswrapper[4806]: E1125 14:59:39.773045 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28fa29ec-8177-41d4-bd11-9398fd0f2aa3" containerName="registry-server" Nov 25 14:59:39 crc kubenswrapper[4806]: I1125 14:59:39.773051 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="28fa29ec-8177-41d4-bd11-9398fd0f2aa3" containerName="registry-server" Nov 25 14:59:39 crc kubenswrapper[4806]: E1125 14:59:39.773063 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27" containerName="registry-server" Nov 25 14:59:39 crc kubenswrapper[4806]: I1125 14:59:39.773070 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27" containerName="registry-server" Nov 25 14:59:39 crc kubenswrapper[4806]: E1125 14:59:39.773079 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8eb172a-99cc-46c1-9bd2-827dcb3da2c3" containerName="extract-content" Nov 25 14:59:39 crc kubenswrapper[4806]: I1125 14:59:39.773085 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8eb172a-99cc-46c1-9bd2-827dcb3da2c3" containerName="extract-content" Nov 25 14:59:39 crc kubenswrapper[4806]: E1125 14:59:39.773092 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24692166-ec81-42ad-9887-f07eb242a4bc" containerName="registry-server" Nov 25 14:59:39 crc kubenswrapper[4806]: I1125 14:59:39.773097 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="24692166-ec81-42ad-9887-f07eb242a4bc" containerName="registry-server" Nov 25 14:59:39 crc kubenswrapper[4806]: E1125 14:59:39.773108 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c14a961b-4eb5-4a10-abe7-bdd5ddff30bc" containerName="marketplace-operator" Nov 25 14:59:39 crc kubenswrapper[4806]: I1125 14:59:39.773115 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="c14a961b-4eb5-4a10-abe7-bdd5ddff30bc" containerName="marketplace-operator" Nov 25 14:59:39 crc kubenswrapper[4806]: E1125 14:59:39.773127 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28fa29ec-8177-41d4-bd11-9398fd0f2aa3" containerName="extract-content" Nov 25 14:59:39 crc kubenswrapper[4806]: I1125 14:59:39.773134 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="28fa29ec-8177-41d4-bd11-9398fd0f2aa3" containerName="extract-content" Nov 25 14:59:39 crc kubenswrapper[4806]: E1125 14:59:39.773143 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8eb172a-99cc-46c1-9bd2-827dcb3da2c3" containerName="extract-utilities" Nov 25 14:59:39 crc kubenswrapper[4806]: I1125 14:59:39.773149 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8eb172a-99cc-46c1-9bd2-827dcb3da2c3" containerName="extract-utilities" Nov 25 14:59:39 crc kubenswrapper[4806]: E1125 14:59:39.773157 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24692166-ec81-42ad-9887-f07eb242a4bc" containerName="extract-content" Nov 25 14:59:39 crc kubenswrapper[4806]: I1125 14:59:39.773163 4806 
state_mem.go:107] "Deleted CPUSet assignment" podUID="24692166-ec81-42ad-9887-f07eb242a4bc" containerName="extract-content" Nov 25 14:59:39 crc kubenswrapper[4806]: E1125 14:59:39.773173 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24692166-ec81-42ad-9887-f07eb242a4bc" containerName="extract-utilities" Nov 25 14:59:39 crc kubenswrapper[4806]: I1125 14:59:39.773179 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="24692166-ec81-42ad-9887-f07eb242a4bc" containerName="extract-utilities" Nov 25 14:59:39 crc kubenswrapper[4806]: E1125 14:59:39.773190 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8eb172a-99cc-46c1-9bd2-827dcb3da2c3" containerName="registry-server" Nov 25 14:59:39 crc kubenswrapper[4806]: I1125 14:59:39.773196 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8eb172a-99cc-46c1-9bd2-827dcb3da2c3" containerName="registry-server" Nov 25 14:59:39 crc kubenswrapper[4806]: E1125 14:59:39.773204 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27" containerName="extract-content" Nov 25 14:59:39 crc kubenswrapper[4806]: I1125 14:59:39.773210 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27" containerName="extract-content" Nov 25 14:59:39 crc kubenswrapper[4806]: E1125 14:59:39.773219 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28fa29ec-8177-41d4-bd11-9398fd0f2aa3" containerName="extract-utilities" Nov 25 14:59:39 crc kubenswrapper[4806]: I1125 14:59:39.773225 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="28fa29ec-8177-41d4-bd11-9398fd0f2aa3" containerName="extract-utilities" Nov 25 14:59:39 crc kubenswrapper[4806]: I1125 14:59:39.773381 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27" containerName="registry-server" Nov 25 14:59:39 crc kubenswrapper[4806]: I1125 14:59:39.773396 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="c14a961b-4eb5-4a10-abe7-bdd5ddff30bc" containerName="marketplace-operator" Nov 25 14:59:39 crc kubenswrapper[4806]: I1125 14:59:39.773408 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="24692166-ec81-42ad-9887-f07eb242a4bc" containerName="registry-server" Nov 25 14:59:39 crc kubenswrapper[4806]: I1125 14:59:39.773414 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="28fa29ec-8177-41d4-bd11-9398fd0f2aa3" containerName="registry-server" Nov 25 14:59:39 crc kubenswrapper[4806]: I1125 14:59:39.773422 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8eb172a-99cc-46c1-9bd2-827dcb3da2c3" containerName="registry-server" Nov 25 14:59:39 crc kubenswrapper[4806]: I1125 14:59:39.774294 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6mnqv" Nov 25 14:59:39 crc kubenswrapper[4806]: I1125 14:59:39.776969 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 25 14:59:39 crc kubenswrapper[4806]: I1125 14:59:39.794099 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6mnqv"] Nov 25 14:59:39 crc kubenswrapper[4806]: I1125 14:59:39.976240 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx55d\" (UniqueName: \"kubernetes.io/projected/9619fb42-e746-4c18-82c8-9e55824d5199-kube-api-access-kx55d\") pod \"redhat-marketplace-6mnqv\" (UID: \"9619fb42-e746-4c18-82c8-9e55824d5199\") " pod="openshift-marketplace/redhat-marketplace-6mnqv" Nov 25 14:59:39 crc kubenswrapper[4806]: I1125 14:59:39.976394 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9619fb42-e746-4c18-82c8-9e55824d5199-utilities\") pod \"redhat-marketplace-6mnqv\" (UID: \"9619fb42-e746-4c18-82c8-9e55824d5199\") " pod="openshift-marketplace/redhat-marketplace-6mnqv" Nov 25 14:59:39 crc kubenswrapper[4806]: I1125 14:59:39.976480 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9619fb42-e746-4c18-82c8-9e55824d5199-catalog-content\") pod \"redhat-marketplace-6mnqv\" (UID: \"9619fb42-e746-4c18-82c8-9e55824d5199\") " pod="openshift-marketplace/redhat-marketplace-6mnqv" Nov 25 14:59:39 crc kubenswrapper[4806]: I1125 14:59:39.978839 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fzfmm"] Nov 25 14:59:39 crc kubenswrapper[4806]: I1125 14:59:39.981261 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fzfmm" Nov 25 14:59:39 crc kubenswrapper[4806]: I1125 14:59:39.993477 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 25 14:59:40 crc kubenswrapper[4806]: I1125 14:59:40.014013 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fzfmm"] Nov 25 14:59:40 crc kubenswrapper[4806]: I1125 14:59:40.077686 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ffc6ef5-d449-49bf-a92d-094be80c3999-utilities\") pod \"redhat-operators-fzfmm\" (UID: \"1ffc6ef5-d449-49bf-a92d-094be80c3999\") " pod="openshift-marketplace/redhat-operators-fzfmm" Nov 25 14:59:40 crc kubenswrapper[4806]: I1125 14:59:40.077767 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smtwl\" (UniqueName: \"kubernetes.io/projected/1ffc6ef5-d449-49bf-a92d-094be80c3999-kube-api-access-smtwl\") pod \"redhat-operators-fzfmm\" (UID: \"1ffc6ef5-d449-49bf-a92d-094be80c3999\") " pod="openshift-marketplace/redhat-operators-fzfmm" Nov 25 14:59:40 crc kubenswrapper[4806]: I1125 14:59:40.077823 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9619fb42-e746-4c18-82c8-9e55824d5199-catalog-content\") pod \"redhat-marketplace-6mnqv\" (UID: \"9619fb42-e746-4c18-82c8-9e55824d5199\") " pod="openshift-marketplace/redhat-marketplace-6mnqv" Nov 25 14:59:40 crc kubenswrapper[4806]: I1125 14:59:40.078090 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx55d\" (UniqueName: \"kubernetes.io/projected/9619fb42-e746-4c18-82c8-9e55824d5199-kube-api-access-kx55d\") pod \"redhat-marketplace-6mnqv\" (UID: \"9619fb42-e746-4c18-82c8-9e55824d5199\") " pod="openshift-marketplace/redhat-marketplace-6mnqv" Nov 25 14:59:40 crc kubenswrapper[4806]: I1125 14:59:40.078267 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ffc6ef5-d449-49bf-a92d-094be80c3999-catalog-content\") pod \"redhat-operators-fzfmm\" (UID: \"1ffc6ef5-d449-49bf-a92d-094be80c3999\") " pod="openshift-marketplace/redhat-operators-fzfmm" Nov 25 14:59:40 crc kubenswrapper[4806]: I1125 14:59:40.078347 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9619fb42-e746-4c18-82c8-9e55824d5199-utilities\") pod \"redhat-marketplace-6mnqv\" (UID: \"9619fb42-e746-4c18-82c8-9e55824d5199\") " pod="openshift-marketplace/redhat-marketplace-6mnqv" Nov 25 14:59:40 crc kubenswrapper[4806]: I1125 14:59:40.078409 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9619fb42-e746-4c18-82c8-9e55824d5199-catalog-content\") pod \"redhat-marketplace-6mnqv\" (UID: \"9619fb42-e746-4c18-82c8-9e55824d5199\") " pod="openshift-marketplace/redhat-marketplace-6mnqv" Nov 25 14:59:40 crc kubenswrapper[4806]: I1125 14:59:40.078866 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9619fb42-e746-4c18-82c8-9e55824d5199-utilities\") pod \"redhat-marketplace-6mnqv\" (UID: 
\"9619fb42-e746-4c18-82c8-9e55824d5199\") " pod="openshift-marketplace/redhat-marketplace-6mnqv" Nov 25 14:59:40 crc kubenswrapper[4806]: I1125 14:59:40.097507 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24692166-ec81-42ad-9887-f07eb242a4bc" path="/var/lib/kubelet/pods/24692166-ec81-42ad-9887-f07eb242a4bc/volumes" Nov 25 14:59:40 crc kubenswrapper[4806]: I1125 14:59:40.098274 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28fa29ec-8177-41d4-bd11-9398fd0f2aa3" path="/var/lib/kubelet/pods/28fa29ec-8177-41d4-bd11-9398fd0f2aa3/volumes" Nov 25 14:59:40 crc kubenswrapper[4806]: I1125 14:59:40.099074 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27" path="/var/lib/kubelet/pods/87e6bf46-e6fe-4b9f-abbc-d6cb7c682b27/volumes" Nov 25 14:59:40 crc kubenswrapper[4806]: I1125 14:59:40.100912 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8eb172a-99cc-46c1-9bd2-827dcb3da2c3" path="/var/lib/kubelet/pods/a8eb172a-99cc-46c1-9bd2-827dcb3da2c3/volumes" Nov 25 14:59:40 crc kubenswrapper[4806]: I1125 14:59:40.101694 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c14a961b-4eb5-4a10-abe7-bdd5ddff30bc" path="/var/lib/kubelet/pods/c14a961b-4eb5-4a10-abe7-bdd5ddff30bc/volumes" Nov 25 14:59:40 crc kubenswrapper[4806]: I1125 14:59:40.103431 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx55d\" (UniqueName: \"kubernetes.io/projected/9619fb42-e746-4c18-82c8-9e55824d5199-kube-api-access-kx55d\") pod \"redhat-marketplace-6mnqv\" (UID: \"9619fb42-e746-4c18-82c8-9e55824d5199\") " pod="openshift-marketplace/redhat-marketplace-6mnqv" Nov 25 14:59:40 crc kubenswrapper[4806]: I1125 14:59:40.179643 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smtwl\" (UniqueName: \"kubernetes.io/projected/1ffc6ef5-d449-49bf-a92d-094be80c3999-kube-api-access-smtwl\") pod \"redhat-operators-fzfmm\" (UID: \"1ffc6ef5-d449-49bf-a92d-094be80c3999\") " pod="openshift-marketplace/redhat-operators-fzfmm" Nov 25 14:59:40 crc kubenswrapper[4806]: I1125 14:59:40.180274 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ffc6ef5-d449-49bf-a92d-094be80c3999-catalog-content\") pod \"redhat-operators-fzfmm\" (UID: \"1ffc6ef5-d449-49bf-a92d-094be80c3999\") " pod="openshift-marketplace/redhat-operators-fzfmm" Nov 25 14:59:40 crc kubenswrapper[4806]: I1125 14:59:40.180314 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ffc6ef5-d449-49bf-a92d-094be80c3999-utilities\") pod \"redhat-operators-fzfmm\" (UID: \"1ffc6ef5-d449-49bf-a92d-094be80c3999\") " pod="openshift-marketplace/redhat-operators-fzfmm" Nov 25 14:59:40 crc kubenswrapper[4806]: I1125 14:59:40.180928 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ffc6ef5-d449-49bf-a92d-094be80c3999-catalog-content\") pod \"redhat-operators-fzfmm\" (UID: \"1ffc6ef5-d449-49bf-a92d-094be80c3999\") " pod="openshift-marketplace/redhat-operators-fzfmm" Nov 25 14:59:40 crc kubenswrapper[4806]: I1125 14:59:40.181025 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ffc6ef5-d449-49bf-a92d-094be80c3999-utilities\") pod 
\"redhat-operators-fzfmm\" (UID: \"1ffc6ef5-d449-49bf-a92d-094be80c3999\") " pod="openshift-marketplace/redhat-operators-fzfmm" Nov 25 14:59:40 crc kubenswrapper[4806]: I1125 14:59:40.203049 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smtwl\" (UniqueName: \"kubernetes.io/projected/1ffc6ef5-d449-49bf-a92d-094be80c3999-kube-api-access-smtwl\") pod \"redhat-operators-fzfmm\" (UID: \"1ffc6ef5-d449-49bf-a92d-094be80c3999\") " pod="openshift-marketplace/redhat-operators-fzfmm" Nov 25 14:59:40 crc kubenswrapper[4806]: I1125 14:59:40.309288 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fzfmm" Nov 25 14:59:40 crc kubenswrapper[4806]: I1125 14:59:40.393971 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6mnqv" Nov 25 14:59:40 crc kubenswrapper[4806]: I1125 14:59:40.670731 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fzfmm"] Nov 25 14:59:40 crc kubenswrapper[4806]: I1125 14:59:40.786374 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6mnqv"] Nov 25 14:59:40 crc kubenswrapper[4806]: W1125 14:59:40.805841 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9619fb42_e746_4c18_82c8_9e55824d5199.slice/crio-26d792996949e30110d1c8a74647a631b1407200d16c020aa58ae45b2bbffc91 WatchSource:0}: Error finding container 26d792996949e30110d1c8a74647a631b1407200d16c020aa58ae45b2bbffc91: Status 404 returned error can't find the container with id 26d792996949e30110d1c8a74647a631b1407200d16c020aa58ae45b2bbffc91 Nov 25 14:59:41 crc kubenswrapper[4806]: I1125 14:59:41.675805 4806 generic.go:334] "Generic (PLEG): container finished" podID="9619fb42-e746-4c18-82c8-9e55824d5199" containerID="e3563e21b36b9dac1dcd20c2a24211d081d1696528fcd160443f61c2212d4a3d" exitCode=0 Nov 25 14:59:41 crc kubenswrapper[4806]: I1125 14:59:41.675918 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6mnqv" event={"ID":"9619fb42-e746-4c18-82c8-9e55824d5199","Type":"ContainerDied","Data":"e3563e21b36b9dac1dcd20c2a24211d081d1696528fcd160443f61c2212d4a3d"} Nov 25 14:59:41 crc kubenswrapper[4806]: I1125 14:59:41.676441 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6mnqv" event={"ID":"9619fb42-e746-4c18-82c8-9e55824d5199","Type":"ContainerStarted","Data":"26d792996949e30110d1c8a74647a631b1407200d16c020aa58ae45b2bbffc91"} Nov 25 14:59:41 crc kubenswrapper[4806]: I1125 14:59:41.683197 4806 generic.go:334] "Generic (PLEG): container finished" podID="1ffc6ef5-d449-49bf-a92d-094be80c3999" containerID="b75bd279d671b9c780849b278e146db9c374d1b4735ec03dd44f86d93b13b172" exitCode=0 Nov 25 14:59:41 crc kubenswrapper[4806]: I1125 14:59:41.683321 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzfmm" event={"ID":"1ffc6ef5-d449-49bf-a92d-094be80c3999","Type":"ContainerDied","Data":"b75bd279d671b9c780849b278e146db9c374d1b4735ec03dd44f86d93b13b172"} Nov 25 14:59:41 crc kubenswrapper[4806]: I1125 14:59:41.683374 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzfmm" 
event={"ID":"1ffc6ef5-d449-49bf-a92d-094be80c3999","Type":"ContainerStarted","Data":"7f92b765ae52658ae8e12218d63d50dd7302d67f7e93ae26dde20b573c95a126"} Nov 25 14:59:42 crc kubenswrapper[4806]: I1125 14:59:42.173529 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ksqkw"] Nov 25 14:59:42 crc kubenswrapper[4806]: I1125 14:59:42.175543 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ksqkw" Nov 25 14:59:42 crc kubenswrapper[4806]: I1125 14:59:42.178655 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 25 14:59:42 crc kubenswrapper[4806]: I1125 14:59:42.186337 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ksqkw"] Nov 25 14:59:42 crc kubenswrapper[4806]: I1125 14:59:42.212391 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29afdfec-4b9d-40b8-a63d-11ffb2f170c1-catalog-content\") pod \"community-operators-ksqkw\" (UID: \"29afdfec-4b9d-40b8-a63d-11ffb2f170c1\") " pod="openshift-marketplace/community-operators-ksqkw" Nov 25 14:59:42 crc kubenswrapper[4806]: I1125 14:59:42.212441 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29afdfec-4b9d-40b8-a63d-11ffb2f170c1-utilities\") pod \"community-operators-ksqkw\" (UID: \"29afdfec-4b9d-40b8-a63d-11ffb2f170c1\") " pod="openshift-marketplace/community-operators-ksqkw" Nov 25 14:59:42 crc kubenswrapper[4806]: I1125 14:59:42.212488 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlw75\" (UniqueName: \"kubernetes.io/projected/29afdfec-4b9d-40b8-a63d-11ffb2f170c1-kube-api-access-mlw75\") pod \"community-operators-ksqkw\" (UID: \"29afdfec-4b9d-40b8-a63d-11ffb2f170c1\") " pod="openshift-marketplace/community-operators-ksqkw" Nov 25 14:59:42 crc kubenswrapper[4806]: I1125 14:59:42.313407 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlw75\" (UniqueName: \"kubernetes.io/projected/29afdfec-4b9d-40b8-a63d-11ffb2f170c1-kube-api-access-mlw75\") pod \"community-operators-ksqkw\" (UID: \"29afdfec-4b9d-40b8-a63d-11ffb2f170c1\") " pod="openshift-marketplace/community-operators-ksqkw" Nov 25 14:59:42 crc kubenswrapper[4806]: I1125 14:59:42.313810 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29afdfec-4b9d-40b8-a63d-11ffb2f170c1-catalog-content\") pod \"community-operators-ksqkw\" (UID: \"29afdfec-4b9d-40b8-a63d-11ffb2f170c1\") " pod="openshift-marketplace/community-operators-ksqkw" Nov 25 14:59:42 crc kubenswrapper[4806]: I1125 14:59:42.313844 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29afdfec-4b9d-40b8-a63d-11ffb2f170c1-utilities\") pod \"community-operators-ksqkw\" (UID: \"29afdfec-4b9d-40b8-a63d-11ffb2f170c1\") " pod="openshift-marketplace/community-operators-ksqkw" Nov 25 14:59:42 crc kubenswrapper[4806]: I1125 14:59:42.314929 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/29afdfec-4b9d-40b8-a63d-11ffb2f170c1-utilities\") pod \"community-operators-ksqkw\" (UID: \"29afdfec-4b9d-40b8-a63d-11ffb2f170c1\") " pod="openshift-marketplace/community-operators-ksqkw" Nov 25 14:59:42 crc kubenswrapper[4806]: I1125 14:59:42.315098 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29afdfec-4b9d-40b8-a63d-11ffb2f170c1-catalog-content\") pod \"community-operators-ksqkw\" (UID: \"29afdfec-4b9d-40b8-a63d-11ffb2f170c1\") " pod="openshift-marketplace/community-operators-ksqkw" Nov 25 14:59:42 crc kubenswrapper[4806]: I1125 14:59:42.342649 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlw75\" (UniqueName: \"kubernetes.io/projected/29afdfec-4b9d-40b8-a63d-11ffb2f170c1-kube-api-access-mlw75\") pod \"community-operators-ksqkw\" (UID: \"29afdfec-4b9d-40b8-a63d-11ffb2f170c1\") " pod="openshift-marketplace/community-operators-ksqkw" Nov 25 14:59:42 crc kubenswrapper[4806]: I1125 14:59:42.376204 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gc92n"] Nov 25 14:59:42 crc kubenswrapper[4806]: I1125 14:59:42.377665 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gc92n" Nov 25 14:59:42 crc kubenswrapper[4806]: I1125 14:59:42.381067 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 25 14:59:42 crc kubenswrapper[4806]: I1125 14:59:42.399847 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gc92n"] Nov 25 14:59:42 crc kubenswrapper[4806]: I1125 14:59:42.415384 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8q59\" (UniqueName: \"kubernetes.io/projected/6be68968-ad7e-458f-98a6-f3625aecb774-kube-api-access-h8q59\") pod \"certified-operators-gc92n\" (UID: \"6be68968-ad7e-458f-98a6-f3625aecb774\") " pod="openshift-marketplace/certified-operators-gc92n" Nov 25 14:59:42 crc kubenswrapper[4806]: I1125 14:59:42.415758 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6be68968-ad7e-458f-98a6-f3625aecb774-utilities\") pod \"certified-operators-gc92n\" (UID: \"6be68968-ad7e-458f-98a6-f3625aecb774\") " pod="openshift-marketplace/certified-operators-gc92n" Nov 25 14:59:42 crc kubenswrapper[4806]: I1125 14:59:42.415910 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6be68968-ad7e-458f-98a6-f3625aecb774-catalog-content\") pod \"certified-operators-gc92n\" (UID: \"6be68968-ad7e-458f-98a6-f3625aecb774\") " pod="openshift-marketplace/certified-operators-gc92n" Nov 25 14:59:42 crc kubenswrapper[4806]: I1125 14:59:42.501531 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ksqkw" Nov 25 14:59:42 crc kubenswrapper[4806]: I1125 14:59:42.516461 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6be68968-ad7e-458f-98a6-f3625aecb774-utilities\") pod \"certified-operators-gc92n\" (UID: \"6be68968-ad7e-458f-98a6-f3625aecb774\") " pod="openshift-marketplace/certified-operators-gc92n" Nov 25 14:59:42 crc kubenswrapper[4806]: I1125 14:59:42.516508 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6be68968-ad7e-458f-98a6-f3625aecb774-catalog-content\") pod \"certified-operators-gc92n\" (UID: \"6be68968-ad7e-458f-98a6-f3625aecb774\") " pod="openshift-marketplace/certified-operators-gc92n" Nov 25 14:59:42 crc kubenswrapper[4806]: I1125 14:59:42.516552 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8q59\" (UniqueName: \"kubernetes.io/projected/6be68968-ad7e-458f-98a6-f3625aecb774-kube-api-access-h8q59\") pod \"certified-operators-gc92n\" (UID: \"6be68968-ad7e-458f-98a6-f3625aecb774\") " pod="openshift-marketplace/certified-operators-gc92n" Nov 25 14:59:42 crc kubenswrapper[4806]: I1125 14:59:42.517095 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6be68968-ad7e-458f-98a6-f3625aecb774-utilities\") pod \"certified-operators-gc92n\" (UID: \"6be68968-ad7e-458f-98a6-f3625aecb774\") " pod="openshift-marketplace/certified-operators-gc92n" Nov 25 14:59:42 crc kubenswrapper[4806]: I1125 14:59:42.517261 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6be68968-ad7e-458f-98a6-f3625aecb774-catalog-content\") pod \"certified-operators-gc92n\" (UID: \"6be68968-ad7e-458f-98a6-f3625aecb774\") " pod="openshift-marketplace/certified-operators-gc92n" Nov 25 14:59:42 crc kubenswrapper[4806]: I1125 14:59:42.537049 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8q59\" (UniqueName: \"kubernetes.io/projected/6be68968-ad7e-458f-98a6-f3625aecb774-kube-api-access-h8q59\") pod \"certified-operators-gc92n\" (UID: \"6be68968-ad7e-458f-98a6-f3625aecb774\") " pod="openshift-marketplace/certified-operators-gc92n" Nov 25 14:59:42 crc kubenswrapper[4806]: I1125 14:59:42.707712 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gc92n" Nov 25 14:59:42 crc kubenswrapper[4806]: I1125 14:59:42.711466 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ksqkw"] Nov 25 14:59:42 crc kubenswrapper[4806]: W1125 14:59:42.732712 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29afdfec_4b9d_40b8_a63d_11ffb2f170c1.slice/crio-31fd8bc76da412ddfdf80d65e5803779d94401558d92f9b6c1cf4d34dc820abc WatchSource:0}: Error finding container 31fd8bc76da412ddfdf80d65e5803779d94401558d92f9b6c1cf4d34dc820abc: Status 404 returned error can't find the container with id 31fd8bc76da412ddfdf80d65e5803779d94401558d92f9b6c1cf4d34dc820abc Nov 25 14:59:43 crc kubenswrapper[4806]: I1125 14:59:43.062503 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gc92n"] Nov 25 14:59:43 crc kubenswrapper[4806]: I1125 14:59:43.697856 4806 generic.go:334] "Generic (PLEG): container finished" podID="6be68968-ad7e-458f-98a6-f3625aecb774" containerID="e1f9a2c7ba19ba3556732fe4af95539a435c0f0c8d317d97390df6ba11d24a7c" exitCode=0 Nov 25 14:59:43 crc kubenswrapper[4806]: I1125 14:59:43.697990 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gc92n" event={"ID":"6be68968-ad7e-458f-98a6-f3625aecb774","Type":"ContainerDied","Data":"e1f9a2c7ba19ba3556732fe4af95539a435c0f0c8d317d97390df6ba11d24a7c"} Nov 25 14:59:43 crc kubenswrapper[4806]: I1125 14:59:43.698459 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gc92n" event={"ID":"6be68968-ad7e-458f-98a6-f3625aecb774","Type":"ContainerStarted","Data":"1a4252b9de46892928ffb1993d0449d7ba0ad7cec4dedb7435df645f21086e3d"} Nov 25 14:59:43 crc kubenswrapper[4806]: I1125 14:59:43.700709 4806 generic.go:334] "Generic (PLEG): container finished" podID="1ffc6ef5-d449-49bf-a92d-094be80c3999" containerID="fed56d81993f0c874c40e4a2d8e0c87209587ed0100524fea753bc3b66aacc6d" exitCode=0 Nov 25 14:59:43 crc kubenswrapper[4806]: I1125 14:59:43.700772 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzfmm" event={"ID":"1ffc6ef5-d449-49bf-a92d-094be80c3999","Type":"ContainerDied","Data":"fed56d81993f0c874c40e4a2d8e0c87209587ed0100524fea753bc3b66aacc6d"} Nov 25 14:59:43 crc kubenswrapper[4806]: I1125 14:59:43.707915 4806 generic.go:334] "Generic (PLEG): container finished" podID="29afdfec-4b9d-40b8-a63d-11ffb2f170c1" containerID="0a967d75df7246b7e13f7efc452ffae1f15f801788cab7735e4193fee33e1bb9" exitCode=0 Nov 25 14:59:43 crc kubenswrapper[4806]: I1125 14:59:43.708013 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ksqkw" event={"ID":"29afdfec-4b9d-40b8-a63d-11ffb2f170c1","Type":"ContainerDied","Data":"0a967d75df7246b7e13f7efc452ffae1f15f801788cab7735e4193fee33e1bb9"} Nov 25 14:59:43 crc kubenswrapper[4806]: I1125 14:59:43.708047 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ksqkw" event={"ID":"29afdfec-4b9d-40b8-a63d-11ffb2f170c1","Type":"ContainerStarted","Data":"31fd8bc76da412ddfdf80d65e5803779d94401558d92f9b6c1cf4d34dc820abc"} Nov 25 14:59:43 crc kubenswrapper[4806]: I1125 14:59:43.712443 4806 generic.go:334] "Generic (PLEG): container finished" podID="9619fb42-e746-4c18-82c8-9e55824d5199" 
containerID="9a72f5402f856a6d8d0d7137a3bdc52b25f7075f3c0300b71b97ce129fb82167" exitCode=0 Nov 25 14:59:43 crc kubenswrapper[4806]: I1125 14:59:43.712477 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6mnqv" event={"ID":"9619fb42-e746-4c18-82c8-9e55824d5199","Type":"ContainerDied","Data":"9a72f5402f856a6d8d0d7137a3bdc52b25f7075f3c0300b71b97ce129fb82167"} Nov 25 14:59:44 crc kubenswrapper[4806]: I1125 14:59:44.720521 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ksqkw" event={"ID":"29afdfec-4b9d-40b8-a63d-11ffb2f170c1","Type":"ContainerStarted","Data":"58ac16695b35bc00332ce2d6b5a6f3733b63c036cf23b5dfe22c7293beddab32"} Nov 25 14:59:44 crc kubenswrapper[4806]: I1125 14:59:44.722923 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6mnqv" event={"ID":"9619fb42-e746-4c18-82c8-9e55824d5199","Type":"ContainerStarted","Data":"56dbd9e86a1ddba444c0eb315a553bf0b692b60cc58eaeb824609d80591d2541"} Nov 25 14:59:44 crc kubenswrapper[4806]: I1125 14:59:44.724664 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gc92n" event={"ID":"6be68968-ad7e-458f-98a6-f3625aecb774","Type":"ContainerStarted","Data":"0a7a8089c2f7241495ad8c0765db3fd4b14d51a07179a88d410d0667af34b3b1"} Nov 25 14:59:44 crc kubenswrapper[4806]: I1125 14:59:44.726876 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzfmm" event={"ID":"1ffc6ef5-d449-49bf-a92d-094be80c3999","Type":"ContainerStarted","Data":"798122962be8d4d1273575fc7176c4f813944f52092032a39a82179578b10f18"} Nov 25 14:59:44 crc kubenswrapper[4806]: I1125 14:59:44.752548 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6mnqv" podStartSLOduration=3.247184625 podStartE2EDuration="5.752518846s" podCreationTimestamp="2025-11-25 14:59:39 +0000 UTC" firstStartedPulling="2025-11-25 14:59:41.6786762 +0000 UTC m=+414.330818631" lastFinishedPulling="2025-11-25 14:59:44.184010441 +0000 UTC m=+416.836152852" observedRunningTime="2025-11-25 14:59:44.750479244 +0000 UTC m=+417.402621685" watchObservedRunningTime="2025-11-25 14:59:44.752518846 +0000 UTC m=+417.404661267" Nov 25 14:59:44 crc kubenswrapper[4806]: I1125 14:59:44.773201 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fzfmm" podStartSLOduration=3.346712362 podStartE2EDuration="5.773181547s" podCreationTimestamp="2025-11-25 14:59:39 +0000 UTC" firstStartedPulling="2025-11-25 14:59:41.685188757 +0000 UTC m=+414.337331168" lastFinishedPulling="2025-11-25 14:59:44.111657932 +0000 UTC m=+416.763800353" observedRunningTime="2025-11-25 14:59:44.768873606 +0000 UTC m=+417.421016047" watchObservedRunningTime="2025-11-25 14:59:44.773181547 +0000 UTC m=+417.425323958" Nov 25 14:59:45 crc kubenswrapper[4806]: I1125 14:59:45.742813 4806 generic.go:334] "Generic (PLEG): container finished" podID="29afdfec-4b9d-40b8-a63d-11ffb2f170c1" containerID="58ac16695b35bc00332ce2d6b5a6f3733b63c036cf23b5dfe22c7293beddab32" exitCode=0 Nov 25 14:59:45 crc kubenswrapper[4806]: I1125 14:59:45.742889 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ksqkw" event={"ID":"29afdfec-4b9d-40b8-a63d-11ffb2f170c1","Type":"ContainerDied","Data":"58ac16695b35bc00332ce2d6b5a6f3733b63c036cf23b5dfe22c7293beddab32"} Nov 25 14:59:45 crc 
kubenswrapper[4806]: I1125 14:59:45.747485 4806 generic.go:334] "Generic (PLEG): container finished" podID="6be68968-ad7e-458f-98a6-f3625aecb774" containerID="0a7a8089c2f7241495ad8c0765db3fd4b14d51a07179a88d410d0667af34b3b1" exitCode=0 Nov 25 14:59:45 crc kubenswrapper[4806]: I1125 14:59:45.749103 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gc92n" event={"ID":"6be68968-ad7e-458f-98a6-f3625aecb774","Type":"ContainerDied","Data":"0a7a8089c2f7241495ad8c0765db3fd4b14d51a07179a88d410d0667af34b3b1"} Nov 25 14:59:47 crc kubenswrapper[4806]: I1125 14:59:47.764088 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gc92n" event={"ID":"6be68968-ad7e-458f-98a6-f3625aecb774","Type":"ContainerStarted","Data":"458561da7501aae44d70ae2f32991252a629745d1d4f0118cd640b0429f66a1f"} Nov 25 14:59:47 crc kubenswrapper[4806]: I1125 14:59:47.769719 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ksqkw" event={"ID":"29afdfec-4b9d-40b8-a63d-11ffb2f170c1","Type":"ContainerStarted","Data":"bfbe5749cc6af051e29c798ff223b19e8dd6ae2cd728a889fb2de00cc9ef89e5"} Nov 25 14:59:47 crc kubenswrapper[4806]: I1125 14:59:47.788100 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gc92n" podStartSLOduration=3.222247243 podStartE2EDuration="5.7880781s" podCreationTimestamp="2025-11-25 14:59:42 +0000 UTC" firstStartedPulling="2025-11-25 14:59:43.701874455 +0000 UTC m=+416.354016886" lastFinishedPulling="2025-11-25 14:59:46.267705332 +0000 UTC m=+418.919847743" observedRunningTime="2025-11-25 14:59:47.788015868 +0000 UTC m=+420.440158279" watchObservedRunningTime="2025-11-25 14:59:47.7880781 +0000 UTC m=+420.440220511" Nov 25 14:59:48 crc kubenswrapper[4806]: I1125 14:59:48.935373 4806 patch_prober.go:28] interesting pod/machine-config-daemon-kclf8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 14:59:48 crc kubenswrapper[4806]: I1125 14:59:48.935898 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 14:59:50 crc kubenswrapper[4806]: I1125 14:59:50.310346 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fzfmm" Nov 25 14:59:50 crc kubenswrapper[4806]: I1125 14:59:50.310945 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fzfmm" Nov 25 14:59:50 crc kubenswrapper[4806]: I1125 14:59:50.358304 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fzfmm" Nov 25 14:59:50 crc kubenswrapper[4806]: I1125 14:59:50.384553 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ksqkw" podStartSLOduration=5.905316171 podStartE2EDuration="8.384528652s" podCreationTimestamp="2025-11-25 14:59:42 +0000 UTC" firstStartedPulling="2025-11-25 14:59:43.710672041 +0000 UTC m=+416.362814452" 
lastFinishedPulling="2025-11-25 14:59:46.189884522 +0000 UTC m=+418.842026933" observedRunningTime="2025-11-25 14:59:47.812396764 +0000 UTC m=+420.464539175" watchObservedRunningTime="2025-11-25 14:59:50.384528652 +0000 UTC m=+423.036671063" Nov 25 14:59:50 crc kubenswrapper[4806]: I1125 14:59:50.394815 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6mnqv" Nov 25 14:59:50 crc kubenswrapper[4806]: I1125 14:59:50.394892 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6mnqv" Nov 25 14:59:50 crc kubenswrapper[4806]: I1125 14:59:50.436132 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6mnqv" Nov 25 14:59:50 crc kubenswrapper[4806]: I1125 14:59:50.849337 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fzfmm" Nov 25 14:59:50 crc kubenswrapper[4806]: I1125 14:59:50.849566 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6mnqv" Nov 25 14:59:52 crc kubenswrapper[4806]: I1125 14:59:52.502442 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ksqkw" Nov 25 14:59:52 crc kubenswrapper[4806]: I1125 14:59:52.503006 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ksqkw" Nov 25 14:59:52 crc kubenswrapper[4806]: I1125 14:59:52.555935 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ksqkw" Nov 25 14:59:52 crc kubenswrapper[4806]: I1125 14:59:52.708731 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gc92n" Nov 25 14:59:52 crc kubenswrapper[4806]: I1125 14:59:52.708812 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gc92n" Nov 25 14:59:52 crc kubenswrapper[4806]: I1125 14:59:52.763269 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gc92n" Nov 25 14:59:52 crc kubenswrapper[4806]: I1125 14:59:52.875605 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ksqkw" Nov 25 14:59:52 crc kubenswrapper[4806]: I1125 14:59:52.876882 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gc92n" Nov 25 15:00:00 crc kubenswrapper[4806]: I1125 15:00:00.140107 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401380-klm5c"] Nov 25 15:00:00 crc kubenswrapper[4806]: I1125 15:00:00.141757 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401380-klm5c" Nov 25 15:00:00 crc kubenswrapper[4806]: I1125 15:00:00.145451 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 15:00:00 crc kubenswrapper[4806]: I1125 15:00:00.145765 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 15:00:00 crc kubenswrapper[4806]: I1125 15:00:00.153494 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401380-klm5c"] Nov 25 15:00:00 crc kubenswrapper[4806]: I1125 15:00:00.294508 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/52601663-98d0-43a4-ab46-0f671d08c3bd-secret-volume\") pod \"collect-profiles-29401380-klm5c\" (UID: \"52601663-98d0-43a4-ab46-0f671d08c3bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401380-klm5c" Nov 25 15:00:00 crc kubenswrapper[4806]: I1125 15:00:00.295017 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpmts\" (UniqueName: \"kubernetes.io/projected/52601663-98d0-43a4-ab46-0f671d08c3bd-kube-api-access-fpmts\") pod \"collect-profiles-29401380-klm5c\" (UID: \"52601663-98d0-43a4-ab46-0f671d08c3bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401380-klm5c" Nov 25 15:00:00 crc kubenswrapper[4806]: I1125 15:00:00.295059 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52601663-98d0-43a4-ab46-0f671d08c3bd-config-volume\") pod \"collect-profiles-29401380-klm5c\" (UID: \"52601663-98d0-43a4-ab46-0f671d08c3bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401380-klm5c" Nov 25 15:00:00 crc kubenswrapper[4806]: I1125 15:00:00.397243 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/52601663-98d0-43a4-ab46-0f671d08c3bd-secret-volume\") pod \"collect-profiles-29401380-klm5c\" (UID: \"52601663-98d0-43a4-ab46-0f671d08c3bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401380-klm5c" Nov 25 15:00:00 crc kubenswrapper[4806]: I1125 15:00:00.397362 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpmts\" (UniqueName: \"kubernetes.io/projected/52601663-98d0-43a4-ab46-0f671d08c3bd-kube-api-access-fpmts\") pod \"collect-profiles-29401380-klm5c\" (UID: \"52601663-98d0-43a4-ab46-0f671d08c3bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401380-klm5c" Nov 25 15:00:00 crc kubenswrapper[4806]: I1125 15:00:00.397395 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52601663-98d0-43a4-ab46-0f671d08c3bd-config-volume\") pod \"collect-profiles-29401380-klm5c\" (UID: \"52601663-98d0-43a4-ab46-0f671d08c3bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401380-klm5c" Nov 25 15:00:00 crc kubenswrapper[4806]: I1125 15:00:00.398443 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52601663-98d0-43a4-ab46-0f671d08c3bd-config-volume\") pod 
\"collect-profiles-29401380-klm5c\" (UID: \"52601663-98d0-43a4-ab46-0f671d08c3bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401380-klm5c" Nov 25 15:00:00 crc kubenswrapper[4806]: I1125 15:00:00.409450 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/52601663-98d0-43a4-ab46-0f671d08c3bd-secret-volume\") pod \"collect-profiles-29401380-klm5c\" (UID: \"52601663-98d0-43a4-ab46-0f671d08c3bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401380-klm5c" Nov 25 15:00:00 crc kubenswrapper[4806]: I1125 15:00:00.422898 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpmts\" (UniqueName: \"kubernetes.io/projected/52601663-98d0-43a4-ab46-0f671d08c3bd-kube-api-access-fpmts\") pod \"collect-profiles-29401380-klm5c\" (UID: \"52601663-98d0-43a4-ab46-0f671d08c3bd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401380-klm5c" Nov 25 15:00:00 crc kubenswrapper[4806]: I1125 15:00:00.483011 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401380-klm5c" Nov 25 15:00:00 crc kubenswrapper[4806]: I1125 15:00:00.704278 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401380-klm5c"] Nov 25 15:00:00 crc kubenswrapper[4806]: W1125 15:00:00.719269 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52601663_98d0_43a4_ab46_0f671d08c3bd.slice/crio-9b76ccc5ab7fbd032cdc540f9fdb92b560db7a8a0649d664d680300084b460d1 WatchSource:0}: Error finding container 9b76ccc5ab7fbd032cdc540f9fdb92b560db7a8a0649d664d680300084b460d1: Status 404 returned error can't find the container with id 9b76ccc5ab7fbd032cdc540f9fdb92b560db7a8a0649d664d680300084b460d1 Nov 25 15:00:00 crc kubenswrapper[4806]: I1125 15:00:00.875471 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401380-klm5c" event={"ID":"52601663-98d0-43a4-ab46-0f671d08c3bd","Type":"ContainerStarted","Data":"25a4014bc9ea1641aba8f9efa644752d9346211d0a7d73595265635a38272cab"} Nov 25 15:00:00 crc kubenswrapper[4806]: I1125 15:00:00.875533 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401380-klm5c" event={"ID":"52601663-98d0-43a4-ab46-0f671d08c3bd","Type":"ContainerStarted","Data":"9b76ccc5ab7fbd032cdc540f9fdb92b560db7a8a0649d664d680300084b460d1"} Nov 25 15:00:00 crc kubenswrapper[4806]: I1125 15:00:00.900611 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29401380-klm5c" podStartSLOduration=0.900578351 podStartE2EDuration="900.578351ms" podCreationTimestamp="2025-11-25 15:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:00:00.896466806 +0000 UTC m=+433.548609237" watchObservedRunningTime="2025-11-25 15:00:00.900578351 +0000 UTC m=+433.552720762" Nov 25 15:00:01 crc kubenswrapper[4806]: I1125 15:00:01.884738 4806 generic.go:334] "Generic (PLEG): container finished" podID="52601663-98d0-43a4-ab46-0f671d08c3bd" containerID="25a4014bc9ea1641aba8f9efa644752d9346211d0a7d73595265635a38272cab" exitCode=0 Nov 25 15:00:01 crc kubenswrapper[4806]: I1125 15:00:01.884820 
4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401380-klm5c" event={"ID":"52601663-98d0-43a4-ab46-0f671d08c3bd","Type":"ContainerDied","Data":"25a4014bc9ea1641aba8f9efa644752d9346211d0a7d73595265635a38272cab"} Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.182480 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401380-klm5c" Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.246333 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpmts\" (UniqueName: \"kubernetes.io/projected/52601663-98d0-43a4-ab46-0f671d08c3bd-kube-api-access-fpmts\") pod \"52601663-98d0-43a4-ab46-0f671d08c3bd\" (UID: \"52601663-98d0-43a4-ab46-0f671d08c3bd\") " Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.253083 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52601663-98d0-43a4-ab46-0f671d08c3bd-kube-api-access-fpmts" (OuterVolumeSpecName: "kube-api-access-fpmts") pod "52601663-98d0-43a4-ab46-0f671d08c3bd" (UID: "52601663-98d0-43a4-ab46-0f671d08c3bd"). InnerVolumeSpecName "kube-api-access-fpmts". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.347611 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52601663-98d0-43a4-ab46-0f671d08c3bd-config-volume\") pod \"52601663-98d0-43a4-ab46-0f671d08c3bd\" (UID: \"52601663-98d0-43a4-ab46-0f671d08c3bd\") " Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.347717 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/52601663-98d0-43a4-ab46-0f671d08c3bd-secret-volume\") pod \"52601663-98d0-43a4-ab46-0f671d08c3bd\" (UID: \"52601663-98d0-43a4-ab46-0f671d08c3bd\") " Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.348423 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpmts\" (UniqueName: \"kubernetes.io/projected/52601663-98d0-43a4-ab46-0f671d08c3bd-kube-api-access-fpmts\") on node \"crc\" DevicePath \"\"" Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.348922 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52601663-98d0-43a4-ab46-0f671d08c3bd-config-volume" (OuterVolumeSpecName: "config-volume") pod "52601663-98d0-43a4-ab46-0f671d08c3bd" (UID: "52601663-98d0-43a4-ab46-0f671d08c3bd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.353412 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52601663-98d0-43a4-ab46-0f671d08c3bd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "52601663-98d0-43a4-ab46-0f671d08c3bd" (UID: "52601663-98d0-43a4-ab46-0f671d08c3bd"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.449541 4806 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52601663-98d0-43a4-ab46-0f671d08c3bd-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.449591 4806 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/52601663-98d0-43a4-ab46-0f671d08c3bd-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.464118 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz" podUID="ca7da513-6cf5-43fc-afbe-ab1c8e785130" containerName="oauth-openshift" containerID="cri-o://2381a4dff84afcf0b68a5fa8c2b3deacc20b184290bc11612f5aa4588075a94b" gracePeriod=15 Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.780295 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz" Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.901641 4806 generic.go:334] "Generic (PLEG): container finished" podID="ca7da513-6cf5-43fc-afbe-ab1c8e785130" containerID="2381a4dff84afcf0b68a5fa8c2b3deacc20b184290bc11612f5aa4588075a94b" exitCode=0 Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.901756 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz" event={"ID":"ca7da513-6cf5-43fc-afbe-ab1c8e785130","Type":"ContainerDied","Data":"2381a4dff84afcf0b68a5fa8c2b3deacc20b184290bc11612f5aa4588075a94b"} Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.901798 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz" event={"ID":"ca7da513-6cf5-43fc-afbe-ab1c8e785130","Type":"ContainerDied","Data":"fcb05b9a4dcfee75c1c6e6cf53effecb6a44f613e0ebd64be2aaf216b3a8f44f"} Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.901808 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-bn2sz" Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.901822 4806 scope.go:117] "RemoveContainer" containerID="2381a4dff84afcf0b68a5fa8c2b3deacc20b184290bc11612f5aa4588075a94b" Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.906279 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401380-klm5c" Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.906347 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401380-klm5c" event={"ID":"52601663-98d0-43a4-ab46-0f671d08c3bd","Type":"ContainerDied","Data":"9b76ccc5ab7fbd032cdc540f9fdb92b560db7a8a0649d664d680300084b460d1"} Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.906413 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b76ccc5ab7fbd032cdc540f9fdb92b560db7a8a0649d664d680300084b460d1" Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.922691 4806 scope.go:117] "RemoveContainer" containerID="2381a4dff84afcf0b68a5fa8c2b3deacc20b184290bc11612f5aa4588075a94b" Nov 25 15:00:03 crc kubenswrapper[4806]: E1125 15:00:03.923247 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2381a4dff84afcf0b68a5fa8c2b3deacc20b184290bc11612f5aa4588075a94b\": container with ID starting with 2381a4dff84afcf0b68a5fa8c2b3deacc20b184290bc11612f5aa4588075a94b not found: ID does not exist" containerID="2381a4dff84afcf0b68a5fa8c2b3deacc20b184290bc11612f5aa4588075a94b" Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.923303 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2381a4dff84afcf0b68a5fa8c2b3deacc20b184290bc11612f5aa4588075a94b"} err="failed to get container status \"2381a4dff84afcf0b68a5fa8c2b3deacc20b184290bc11612f5aa4588075a94b\": rpc error: code = NotFound desc = could not find container \"2381a4dff84afcf0b68a5fa8c2b3deacc20b184290bc11612f5aa4588075a94b\": container with ID starting with 2381a4dff84afcf0b68a5fa8c2b3deacc20b184290bc11612f5aa4588075a94b not found: ID does not exist" Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.955270 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ca7da513-6cf5-43fc-afbe-ab1c8e785130-audit-policies\") pod \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.955352 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-system-session\") pod \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.955389 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-system-trusted-ca-bundle\") pod \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.955425 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-299jk\" (UniqueName: \"kubernetes.io/projected/ca7da513-6cf5-43fc-afbe-ab1c8e785130-kube-api-access-299jk\") pod \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.955449 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-system-cliconfig\") pod \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.955503 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-user-template-provider-selection\") pod \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.955540 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-system-serving-cert\") pod \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.955577 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-user-template-login\") pod \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.955626 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-system-service-ca\") pod \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.955645 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-system-router-certs\") pod \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.955676 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-user-idp-0-file-data\") pod \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.955712 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ca7da513-6cf5-43fc-afbe-ab1c8e785130-audit-dir\") pod \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.955732 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-system-ocp-branding-template\") pod \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.955759 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-user-template-error\") pod \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\" (UID: \"ca7da513-6cf5-43fc-afbe-ab1c8e785130\") " Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.956986 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca7da513-6cf5-43fc-afbe-ab1c8e785130-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "ca7da513-6cf5-43fc-afbe-ab1c8e785130" (UID: "ca7da513-6cf5-43fc-afbe-ab1c8e785130"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.957214 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "ca7da513-6cf5-43fc-afbe-ab1c8e785130" (UID: "ca7da513-6cf5-43fc-afbe-ab1c8e785130"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.957266 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "ca7da513-6cf5-43fc-afbe-ab1c8e785130" (UID: "ca7da513-6cf5-43fc-afbe-ab1c8e785130"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.957758 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "ca7da513-6cf5-43fc-afbe-ab1c8e785130" (UID: "ca7da513-6cf5-43fc-afbe-ab1c8e785130"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.958127 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca7da513-6cf5-43fc-afbe-ab1c8e785130-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "ca7da513-6cf5-43fc-afbe-ab1c8e785130" (UID: "ca7da513-6cf5-43fc-afbe-ab1c8e785130"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.963739 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca7da513-6cf5-43fc-afbe-ab1c8e785130-kube-api-access-299jk" (OuterVolumeSpecName: "kube-api-access-299jk") pod "ca7da513-6cf5-43fc-afbe-ab1c8e785130" (UID: "ca7da513-6cf5-43fc-afbe-ab1c8e785130"). InnerVolumeSpecName "kube-api-access-299jk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.964496 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "ca7da513-6cf5-43fc-afbe-ab1c8e785130" (UID: "ca7da513-6cf5-43fc-afbe-ab1c8e785130"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.964532 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "ca7da513-6cf5-43fc-afbe-ab1c8e785130" (UID: "ca7da513-6cf5-43fc-afbe-ab1c8e785130"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.964868 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "ca7da513-6cf5-43fc-afbe-ab1c8e785130" (UID: "ca7da513-6cf5-43fc-afbe-ab1c8e785130"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.964977 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "ca7da513-6cf5-43fc-afbe-ab1c8e785130" (UID: "ca7da513-6cf5-43fc-afbe-ab1c8e785130"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.965086 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "ca7da513-6cf5-43fc-afbe-ab1c8e785130" (UID: "ca7da513-6cf5-43fc-afbe-ab1c8e785130"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.967123 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "ca7da513-6cf5-43fc-afbe-ab1c8e785130" (UID: "ca7da513-6cf5-43fc-afbe-ab1c8e785130"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.967863 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "ca7da513-6cf5-43fc-afbe-ab1c8e785130" (UID: "ca7da513-6cf5-43fc-afbe-ab1c8e785130"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:00:03 crc kubenswrapper[4806]: I1125 15:00:03.968352 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "ca7da513-6cf5-43fc-afbe-ab1c8e785130" (UID: "ca7da513-6cf5-43fc-afbe-ab1c8e785130"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:00:04 crc kubenswrapper[4806]: I1125 15:00:04.057048 4806 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ca7da513-6cf5-43fc-afbe-ab1c8e785130-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 25 15:00:04 crc kubenswrapper[4806]: I1125 15:00:04.057098 4806 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 25 15:00:04 crc kubenswrapper[4806]: I1125 15:00:04.057114 4806 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 25 15:00:04 crc kubenswrapper[4806]: I1125 15:00:04.057125 4806 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ca7da513-6cf5-43fc-afbe-ab1c8e785130-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 25 15:00:04 crc kubenswrapper[4806]: I1125 15:00:04.057139 4806 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 25 15:00:04 crc kubenswrapper[4806]: I1125 15:00:04.057151 4806 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:00:04 crc kubenswrapper[4806]: I1125 15:00:04.057161 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-299jk\" (UniqueName: \"kubernetes.io/projected/ca7da513-6cf5-43fc-afbe-ab1c8e785130-kube-api-access-299jk\") on node \"crc\" DevicePath \"\"" Nov 25 15:00:04 crc kubenswrapper[4806]: I1125 15:00:04.057172 4806 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 25 15:00:04 crc kubenswrapper[4806]: I1125 15:00:04.057189 4806 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 25 15:00:04 crc kubenswrapper[4806]: I1125 15:00:04.057201 4806 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 15:00:04 crc kubenswrapper[4806]: I1125 15:00:04.057211 4806 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 25 15:00:04 crc kubenswrapper[4806]: I1125 15:00:04.057219 4806 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 15:00:04 crc kubenswrapper[4806]: I1125 15:00:04.057228 4806 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 25 15:00:04 crc kubenswrapper[4806]: I1125 15:00:04.057237 4806 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ca7da513-6cf5-43fc-afbe-ab1c8e785130-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:00:04 crc kubenswrapper[4806]: I1125 15:00:04.230299 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-bn2sz"] Nov 25 15:00:04 crc kubenswrapper[4806]: I1125 15:00:04.233953 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-bn2sz"] Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.640992 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk"] Nov 25 15:00:05 crc kubenswrapper[4806]: E1125 15:00:05.642297 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52601663-98d0-43a4-ab46-0f671d08c3bd" containerName="collect-profiles" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.642427 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="52601663-98d0-43a4-ab46-0f671d08c3bd" containerName="collect-profiles" Nov 25 15:00:05 crc kubenswrapper[4806]: E1125 15:00:05.642491 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca7da513-6cf5-43fc-afbe-ab1c8e785130" containerName="oauth-openshift" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.642543 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca7da513-6cf5-43fc-afbe-ab1c8e785130" containerName="oauth-openshift" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.642713 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca7da513-6cf5-43fc-afbe-ab1c8e785130" containerName="oauth-openshift" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.642823 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="52601663-98d0-43a4-ab46-0f671d08c3bd" containerName="collect-profiles" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.643547 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.647040 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.647202 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.647360 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.647581 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.647360 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.648236 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.649352 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.649700 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.651169 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.651217 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.653936 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.665137 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk"] Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.666053 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.667072 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.667415 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.678189 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/669dba85-6a0b-43ac-88a3-f59af8b779f0-v4-0-config-system-router-certs\") pod \"oauth-openshift-568b8b6cc4-dnbtk\" (UID: \"669dba85-6a0b-43ac-88a3-f59af8b779f0\") " pod="openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.678240 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/669dba85-6a0b-43ac-88a3-f59af8b779f0-audit-policies\") pod \"oauth-openshift-568b8b6cc4-dnbtk\" (UID: \"669dba85-6a0b-43ac-88a3-f59af8b779f0\") " pod="openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.678390 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/669dba85-6a0b-43ac-88a3-f59af8b779f0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-568b8b6cc4-dnbtk\" (UID: \"669dba85-6a0b-43ac-88a3-f59af8b779f0\") " pod="openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.678455 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/669dba85-6a0b-43ac-88a3-f59af8b779f0-v4-0-config-user-template-login\") pod \"oauth-openshift-568b8b6cc4-dnbtk\" (UID: \"669dba85-6a0b-43ac-88a3-f59af8b779f0\") " pod="openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.678496 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/669dba85-6a0b-43ac-88a3-f59af8b779f0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-568b8b6cc4-dnbtk\" (UID: \"669dba85-6a0b-43ac-88a3-f59af8b779f0\") " pod="openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.678520 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/669dba85-6a0b-43ac-88a3-f59af8b779f0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-568b8b6cc4-dnbtk\" (UID: \"669dba85-6a0b-43ac-88a3-f59af8b779f0\") " pod="openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.678570 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/669dba85-6a0b-43ac-88a3-f59af8b779f0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-568b8b6cc4-dnbtk\" (UID: \"669dba85-6a0b-43ac-88a3-f59af8b779f0\") " pod="openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.678600 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/669dba85-6a0b-43ac-88a3-f59af8b779f0-v4-0-config-system-session\") pod \"oauth-openshift-568b8b6cc4-dnbtk\" (UID: \"669dba85-6a0b-43ac-88a3-f59af8b779f0\") " pod="openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.678636 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/669dba85-6a0b-43ac-88a3-f59af8b779f0-v4-0-config-user-template-error\") pod \"oauth-openshift-568b8b6cc4-dnbtk\" (UID: \"669dba85-6a0b-43ac-88a3-f59af8b779f0\") " pod="openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk" Nov 25 15:00:05 crc 
kubenswrapper[4806]: I1125 15:00:05.678672 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/669dba85-6a0b-43ac-88a3-f59af8b779f0-v4-0-config-system-service-ca\") pod \"oauth-openshift-568b8b6cc4-dnbtk\" (UID: \"669dba85-6a0b-43ac-88a3-f59af8b779f0\") " pod="openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.678699 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/669dba85-6a0b-43ac-88a3-f59af8b779f0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-568b8b6cc4-dnbtk\" (UID: \"669dba85-6a0b-43ac-88a3-f59af8b779f0\") " pod="openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.678734 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/669dba85-6a0b-43ac-88a3-f59af8b779f0-audit-dir\") pod \"oauth-openshift-568b8b6cc4-dnbtk\" (UID: \"669dba85-6a0b-43ac-88a3-f59af8b779f0\") " pod="openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.678786 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.678789 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/669dba85-6a0b-43ac-88a3-f59af8b779f0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-568b8b6cc4-dnbtk\" (UID: \"669dba85-6a0b-43ac-88a3-f59af8b779f0\") " pod="openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.780706 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/669dba85-6a0b-43ac-88a3-f59af8b779f0-v4-0-config-system-service-ca\") pod \"oauth-openshift-568b8b6cc4-dnbtk\" (UID: \"669dba85-6a0b-43ac-88a3-f59af8b779f0\") " pod="openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.780775 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/669dba85-6a0b-43ac-88a3-f59af8b779f0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-568b8b6cc4-dnbtk\" (UID: \"669dba85-6a0b-43ac-88a3-f59af8b779f0\") " pod="openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.780819 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/669dba85-6a0b-43ac-88a3-f59af8b779f0-audit-dir\") pod \"oauth-openshift-568b8b6cc4-dnbtk\" (UID: \"669dba85-6a0b-43ac-88a3-f59af8b779f0\") " pod="openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.780846 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/669dba85-6a0b-43ac-88a3-f59af8b779f0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-568b8b6cc4-dnbtk\" (UID: \"669dba85-6a0b-43ac-88a3-f59af8b779f0\") " pod="openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.780881 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5klhf\" (UniqueName: \"kubernetes.io/projected/669dba85-6a0b-43ac-88a3-f59af8b779f0-kube-api-access-5klhf\") pod \"oauth-openshift-568b8b6cc4-dnbtk\" (UID: \"669dba85-6a0b-43ac-88a3-f59af8b779f0\") " pod="openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.780904 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/669dba85-6a0b-43ac-88a3-f59af8b779f0-v4-0-config-system-router-certs\") pod \"oauth-openshift-568b8b6cc4-dnbtk\" (UID: \"669dba85-6a0b-43ac-88a3-f59af8b779f0\") " pod="openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.780924 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/669dba85-6a0b-43ac-88a3-f59af8b779f0-audit-policies\") pod \"oauth-openshift-568b8b6cc4-dnbtk\" (UID: \"669dba85-6a0b-43ac-88a3-f59af8b779f0\") " pod="openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.780959 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/669dba85-6a0b-43ac-88a3-f59af8b779f0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-568b8b6cc4-dnbtk\" (UID: \"669dba85-6a0b-43ac-88a3-f59af8b779f0\") " pod="openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.780989 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/669dba85-6a0b-43ac-88a3-f59af8b779f0-v4-0-config-user-template-login\") pod \"oauth-openshift-568b8b6cc4-dnbtk\" (UID: \"669dba85-6a0b-43ac-88a3-f59af8b779f0\") " pod="openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.781021 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/669dba85-6a0b-43ac-88a3-f59af8b779f0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-568b8b6cc4-dnbtk\" (UID: \"669dba85-6a0b-43ac-88a3-f59af8b779f0\") " pod="openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.781041 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/669dba85-6a0b-43ac-88a3-f59af8b779f0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-568b8b6cc4-dnbtk\" (UID: \"669dba85-6a0b-43ac-88a3-f59af8b779f0\") " pod="openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.781073 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/669dba85-6a0b-43ac-88a3-f59af8b779f0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-568b8b6cc4-dnbtk\" (UID: \"669dba85-6a0b-43ac-88a3-f59af8b779f0\") " pod="openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.781096 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/669dba85-6a0b-43ac-88a3-f59af8b779f0-v4-0-config-system-session\") pod \"oauth-openshift-568b8b6cc4-dnbtk\" (UID: \"669dba85-6a0b-43ac-88a3-f59af8b779f0\") " pod="openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.781122 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/669dba85-6a0b-43ac-88a3-f59af8b779f0-v4-0-config-user-template-error\") pod \"oauth-openshift-568b8b6cc4-dnbtk\" (UID: \"669dba85-6a0b-43ac-88a3-f59af8b779f0\") " pod="openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.781988 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/669dba85-6a0b-43ac-88a3-f59af8b779f0-audit-dir\") pod \"oauth-openshift-568b8b6cc4-dnbtk\" (UID: \"669dba85-6a0b-43ac-88a3-f59af8b779f0\") " pod="openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.782906 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/669dba85-6a0b-43ac-88a3-f59af8b779f0-v4-0-config-system-service-ca\") pod \"oauth-openshift-568b8b6cc4-dnbtk\" (UID: \"669dba85-6a0b-43ac-88a3-f59af8b779f0\") " pod="openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.783008 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/669dba85-6a0b-43ac-88a3-f59af8b779f0-audit-policies\") pod \"oauth-openshift-568b8b6cc4-dnbtk\" (UID: \"669dba85-6a0b-43ac-88a3-f59af8b779f0\") " pod="openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.783159 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/669dba85-6a0b-43ac-88a3-f59af8b779f0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-568b8b6cc4-dnbtk\" (UID: \"669dba85-6a0b-43ac-88a3-f59af8b779f0\") " pod="openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.783972 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/669dba85-6a0b-43ac-88a3-f59af8b779f0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-568b8b6cc4-dnbtk\" (UID: \"669dba85-6a0b-43ac-88a3-f59af8b779f0\") " pod="openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.786195 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/669dba85-6a0b-43ac-88a3-f59af8b779f0-v4-0-config-user-template-provider-selection\") pod 
\"oauth-openshift-568b8b6cc4-dnbtk\" (UID: \"669dba85-6a0b-43ac-88a3-f59af8b779f0\") " pod="openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.786286 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/669dba85-6a0b-43ac-88a3-f59af8b779f0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-568b8b6cc4-dnbtk\" (UID: \"669dba85-6a0b-43ac-88a3-f59af8b779f0\") " pod="openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.786302 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/669dba85-6a0b-43ac-88a3-f59af8b779f0-v4-0-config-user-template-error\") pod \"oauth-openshift-568b8b6cc4-dnbtk\" (UID: \"669dba85-6a0b-43ac-88a3-f59af8b779f0\") " pod="openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.786866 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/669dba85-6a0b-43ac-88a3-f59af8b779f0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-568b8b6cc4-dnbtk\" (UID: \"669dba85-6a0b-43ac-88a3-f59af8b779f0\") " pod="openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.787381 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/669dba85-6a0b-43ac-88a3-f59af8b779f0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-568b8b6cc4-dnbtk\" (UID: \"669dba85-6a0b-43ac-88a3-f59af8b779f0\") " pod="openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.787683 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/669dba85-6a0b-43ac-88a3-f59af8b779f0-v4-0-config-system-session\") pod \"oauth-openshift-568b8b6cc4-dnbtk\" (UID: \"669dba85-6a0b-43ac-88a3-f59af8b779f0\") " pod="openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.789036 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/669dba85-6a0b-43ac-88a3-f59af8b779f0-v4-0-config-user-template-login\") pod \"oauth-openshift-568b8b6cc4-dnbtk\" (UID: \"669dba85-6a0b-43ac-88a3-f59af8b779f0\") " pod="openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.794789 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/669dba85-6a0b-43ac-88a3-f59af8b779f0-v4-0-config-system-router-certs\") pod \"oauth-openshift-568b8b6cc4-dnbtk\" (UID: \"669dba85-6a0b-43ac-88a3-f59af8b779f0\") " pod="openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.882717 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5klhf\" (UniqueName: \"kubernetes.io/projected/669dba85-6a0b-43ac-88a3-f59af8b779f0-kube-api-access-5klhf\") pod \"oauth-openshift-568b8b6cc4-dnbtk\" (UID: 
\"669dba85-6a0b-43ac-88a3-f59af8b779f0\") " pod="openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.905344 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5klhf\" (UniqueName: \"kubernetes.io/projected/669dba85-6a0b-43ac-88a3-f59af8b779f0-kube-api-access-5klhf\") pod \"oauth-openshift-568b8b6cc4-dnbtk\" (UID: \"669dba85-6a0b-43ac-88a3-f59af8b779f0\") " pod="openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk" Nov 25 15:00:05 crc kubenswrapper[4806]: I1125 15:00:05.962781 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk" Nov 25 15:00:06 crc kubenswrapper[4806]: I1125 15:00:06.099158 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca7da513-6cf5-43fc-afbe-ab1c8e785130" path="/var/lib/kubelet/pods/ca7da513-6cf5-43fc-afbe-ab1c8e785130/volumes" Nov 25 15:00:06 crc kubenswrapper[4806]: I1125 15:00:06.182668 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk"] Nov 25 15:00:06 crc kubenswrapper[4806]: W1125 15:00:06.198537 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod669dba85_6a0b_43ac_88a3_f59af8b779f0.slice/crio-72d9d5af2a9226ec1b160467be1bc124a6732ff00afc780dbac98098b1dc19bd WatchSource:0}: Error finding container 72d9d5af2a9226ec1b160467be1bc124a6732ff00afc780dbac98098b1dc19bd: Status 404 returned error can't find the container with id 72d9d5af2a9226ec1b160467be1bc124a6732ff00afc780dbac98098b1dc19bd Nov 25 15:00:06 crc kubenswrapper[4806]: I1125 15:00:06.928954 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk" event={"ID":"669dba85-6a0b-43ac-88a3-f59af8b779f0","Type":"ContainerStarted","Data":"259c7d92b5ae608182175d72c90c89822003131ebbd200217d1f2dd086eaf3c9"} Nov 25 15:00:06 crc kubenswrapper[4806]: I1125 15:00:06.929470 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk" event={"ID":"669dba85-6a0b-43ac-88a3-f59af8b779f0","Type":"ContainerStarted","Data":"72d9d5af2a9226ec1b160467be1bc124a6732ff00afc780dbac98098b1dc19bd"} Nov 25 15:00:06 crc kubenswrapper[4806]: I1125 15:00:06.929622 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk" Nov 25 15:00:06 crc kubenswrapper[4806]: I1125 15:00:06.951180 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk" podStartSLOduration=28.951150691 podStartE2EDuration="28.951150691s" podCreationTimestamp="2025-11-25 14:59:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:00:06.950975726 +0000 UTC m=+439.603118157" watchObservedRunningTime="2025-11-25 15:00:06.951150691 +0000 UTC m=+439.603293112" Nov 25 15:00:07 crc kubenswrapper[4806]: I1125 15:00:07.387789 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-568b8b6cc4-dnbtk" Nov 25 15:00:15 crc kubenswrapper[4806]: I1125 15:00:15.415931 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-6jbw2"] Nov 25 15:00:15 crc 
Nov 25 15:00:15 crc kubenswrapper[4806]: I1125 15:00:15.469028 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-6jbw2"]
Nov 25 15:00:15 crc kubenswrapper[4806]: I1125 15:00:15.528953 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-6jbw2\" (UID: \"495da8bf-4021-47d2-9103-ab79128ac414\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jbw2"
Nov 25 15:00:15 crc kubenswrapper[4806]: I1125 15:00:15.529053 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/495da8bf-4021-47d2-9103-ab79128ac414-registry-certificates\") pod \"image-registry-66df7c8f76-6jbw2\" (UID: \"495da8bf-4021-47d2-9103-ab79128ac414\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jbw2"
Nov 25 15:00:15 crc kubenswrapper[4806]: I1125 15:00:15.529080 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/495da8bf-4021-47d2-9103-ab79128ac414-bound-sa-token\") pod \"image-registry-66df7c8f76-6jbw2\" (UID: \"495da8bf-4021-47d2-9103-ab79128ac414\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jbw2"
Nov 25 15:00:15 crc kubenswrapper[4806]: I1125 15:00:15.529106 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/495da8bf-4021-47d2-9103-ab79128ac414-installation-pull-secrets\") pod \"image-registry-66df7c8f76-6jbw2\" (UID: \"495da8bf-4021-47d2-9103-ab79128ac414\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jbw2"
Nov 25 15:00:15 crc kubenswrapper[4806]: I1125 15:00:15.529141 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/495da8bf-4021-47d2-9103-ab79128ac414-registry-tls\") pod \"image-registry-66df7c8f76-6jbw2\" (UID: \"495da8bf-4021-47d2-9103-ab79128ac414\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jbw2"
Nov 25 15:00:15 crc kubenswrapper[4806]: I1125 15:00:15.529168 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkgqv\" (UniqueName: \"kubernetes.io/projected/495da8bf-4021-47d2-9103-ab79128ac414-kube-api-access-rkgqv\") pod \"image-registry-66df7c8f76-6jbw2\" (UID: \"495da8bf-4021-47d2-9103-ab79128ac414\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jbw2"
Nov 25 15:00:15 crc kubenswrapper[4806]: I1125 15:00:15.529196 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/495da8bf-4021-47d2-9103-ab79128ac414-trusted-ca\") pod \"image-registry-66df7c8f76-6jbw2\" (UID: \"495da8bf-4021-47d2-9103-ab79128ac414\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jbw2"
Nov 25 15:00:15 crc kubenswrapper[4806]: I1125 15:00:15.529226 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/495da8bf-4021-47d2-9103-ab79128ac414-ca-trust-extracted\") pod \"image-registry-66df7c8f76-6jbw2\" (UID: \"495da8bf-4021-47d2-9103-ab79128ac414\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jbw2"
Nov 25 15:00:15 crc kubenswrapper[4806]: I1125 15:00:15.554874 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-6jbw2\" (UID: \"495da8bf-4021-47d2-9103-ab79128ac414\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jbw2"
Nov 25 15:00:15 crc kubenswrapper[4806]: I1125 15:00:15.631053 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/495da8bf-4021-47d2-9103-ab79128ac414-registry-certificates\") pod \"image-registry-66df7c8f76-6jbw2\" (UID: \"495da8bf-4021-47d2-9103-ab79128ac414\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jbw2"
Nov 25 15:00:15 crc kubenswrapper[4806]: I1125 15:00:15.631149 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/495da8bf-4021-47d2-9103-ab79128ac414-bound-sa-token\") pod \"image-registry-66df7c8f76-6jbw2\" (UID: \"495da8bf-4021-47d2-9103-ab79128ac414\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jbw2"
Nov 25 15:00:15 crc kubenswrapper[4806]: I1125 15:00:15.631193 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/495da8bf-4021-47d2-9103-ab79128ac414-installation-pull-secrets\") pod \"image-registry-66df7c8f76-6jbw2\" (UID: \"495da8bf-4021-47d2-9103-ab79128ac414\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jbw2"
Nov 25 15:00:15 crc kubenswrapper[4806]: I1125 15:00:15.631245 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/495da8bf-4021-47d2-9103-ab79128ac414-registry-tls\") pod \"image-registry-66df7c8f76-6jbw2\" (UID: \"495da8bf-4021-47d2-9103-ab79128ac414\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jbw2"
Nov 25 15:00:15 crc kubenswrapper[4806]: I1125 15:00:15.631283 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkgqv\" (UniqueName: \"kubernetes.io/projected/495da8bf-4021-47d2-9103-ab79128ac414-kube-api-access-rkgqv\") pod \"image-registry-66df7c8f76-6jbw2\" (UID: \"495da8bf-4021-47d2-9103-ab79128ac414\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jbw2"
Nov 25 15:00:15 crc kubenswrapper[4806]: I1125 15:00:15.631337 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/495da8bf-4021-47d2-9103-ab79128ac414-trusted-ca\") pod \"image-registry-66df7c8f76-6jbw2\" (UID: \"495da8bf-4021-47d2-9103-ab79128ac414\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jbw2"
Nov 25 15:00:15 crc kubenswrapper[4806]: I1125 15:00:15.631373 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/495da8bf-4021-47d2-9103-ab79128ac414-ca-trust-extracted\") pod \"image-registry-66df7c8f76-6jbw2\" (UID: \"495da8bf-4021-47d2-9103-ab79128ac414\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jbw2"
\"image-registry-66df7c8f76-6jbw2\" (UID: \"495da8bf-4021-47d2-9103-ab79128ac414\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jbw2" Nov 25 15:00:15 crc kubenswrapper[4806]: I1125 15:00:15.632292 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/495da8bf-4021-47d2-9103-ab79128ac414-ca-trust-extracted\") pod \"image-registry-66df7c8f76-6jbw2\" (UID: \"495da8bf-4021-47d2-9103-ab79128ac414\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jbw2" Nov 25 15:00:15 crc kubenswrapper[4806]: I1125 15:00:15.632767 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/495da8bf-4021-47d2-9103-ab79128ac414-registry-certificates\") pod \"image-registry-66df7c8f76-6jbw2\" (UID: \"495da8bf-4021-47d2-9103-ab79128ac414\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jbw2" Nov 25 15:00:15 crc kubenswrapper[4806]: I1125 15:00:15.633270 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/495da8bf-4021-47d2-9103-ab79128ac414-trusted-ca\") pod \"image-registry-66df7c8f76-6jbw2\" (UID: \"495da8bf-4021-47d2-9103-ab79128ac414\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jbw2" Nov 25 15:00:15 crc kubenswrapper[4806]: I1125 15:00:15.638377 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/495da8bf-4021-47d2-9103-ab79128ac414-installation-pull-secrets\") pod \"image-registry-66df7c8f76-6jbw2\" (UID: \"495da8bf-4021-47d2-9103-ab79128ac414\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jbw2" Nov 25 15:00:15 crc kubenswrapper[4806]: I1125 15:00:15.640303 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/495da8bf-4021-47d2-9103-ab79128ac414-registry-tls\") pod \"image-registry-66df7c8f76-6jbw2\" (UID: \"495da8bf-4021-47d2-9103-ab79128ac414\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jbw2" Nov 25 15:00:15 crc kubenswrapper[4806]: I1125 15:00:15.652943 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkgqv\" (UniqueName: \"kubernetes.io/projected/495da8bf-4021-47d2-9103-ab79128ac414-kube-api-access-rkgqv\") pod \"image-registry-66df7c8f76-6jbw2\" (UID: \"495da8bf-4021-47d2-9103-ab79128ac414\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jbw2" Nov 25 15:00:15 crc kubenswrapper[4806]: I1125 15:00:15.654969 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/495da8bf-4021-47d2-9103-ab79128ac414-bound-sa-token\") pod \"image-registry-66df7c8f76-6jbw2\" (UID: \"495da8bf-4021-47d2-9103-ab79128ac414\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jbw2" Nov 25 15:00:15 crc kubenswrapper[4806]: I1125 15:00:15.754472 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-6jbw2" Nov 25 15:00:15 crc kubenswrapper[4806]: I1125 15:00:15.964271 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-6jbw2"] Nov 25 15:00:16 crc kubenswrapper[4806]: I1125 15:00:16.000130 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-6jbw2" event={"ID":"495da8bf-4021-47d2-9103-ab79128ac414","Type":"ContainerStarted","Data":"c931df3279d570aadc5f8a35ca07fed5f44619b4bc9675796797636f815d6cca"} Nov 25 15:00:17 crc kubenswrapper[4806]: I1125 15:00:17.009881 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-6jbw2" event={"ID":"495da8bf-4021-47d2-9103-ab79128ac414","Type":"ContainerStarted","Data":"1cb2f2b6483cfd9a13a69d2202c94714837d23f315d448d412ddd1741154f911"} Nov 25 15:00:17 crc kubenswrapper[4806]: I1125 15:00:17.010490 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-6jbw2" Nov 25 15:00:17 crc kubenswrapper[4806]: I1125 15:00:17.035761 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-6jbw2" podStartSLOduration=2.035728506 podStartE2EDuration="2.035728506s" podCreationTimestamp="2025-11-25 15:00:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:00:17.031395794 +0000 UTC m=+449.683538225" watchObservedRunningTime="2025-11-25 15:00:17.035728506 +0000 UTC m=+449.687870917" Nov 25 15:00:18 crc kubenswrapper[4806]: I1125 15:00:18.935209 4806 patch_prober.go:28] interesting pod/machine-config-daemon-kclf8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:00:18 crc kubenswrapper[4806]: I1125 15:00:18.935287 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:00:18 crc kubenswrapper[4806]: I1125 15:00:18.935370 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" Nov 25 15:00:18 crc kubenswrapper[4806]: I1125 15:00:18.936195 4806 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"86ffef5b64dafeab3b05f5e4a70ac74bb211e3538d488906b2518389de3474fd"} pod="openshift-machine-config-operator/machine-config-daemon-kclf8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 15:00:18 crc kubenswrapper[4806]: I1125 15:00:18.936261 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" containerID="cri-o://86ffef5b64dafeab3b05f5e4a70ac74bb211e3538d488906b2518389de3474fd" gracePeriod=600 Nov 25 15:00:20 crc kubenswrapper[4806]: I1125 
Nov 25 15:00:20 crc kubenswrapper[4806]: I1125 15:00:20.040924 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" event={"ID":"39baff20-1e9a-48b1-8872-155c5ad5931d","Type":"ContainerDied","Data":"86ffef5b64dafeab3b05f5e4a70ac74bb211e3538d488906b2518389de3474fd"}
Nov 25 15:00:20 crc kubenswrapper[4806]: I1125 15:00:20.041823 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" event={"ID":"39baff20-1e9a-48b1-8872-155c5ad5931d","Type":"ContainerStarted","Data":"842f56c6e5e9f53ffe1d13b6e4c7354c36b5d058d4d84710d6bfcc9d586f8553"}
Nov 25 15:00:20 crc kubenswrapper[4806]: I1125 15:00:20.041859 4806 scope.go:117] "RemoveContainer" containerID="657c6813fefe5a305c46e51a3be90de3dda23db56498959efe41f94d05e8f54d"
Nov 25 15:00:35 crc kubenswrapper[4806]: I1125 15:00:35.760457 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-6jbw2"
Nov 25 15:00:35 crc kubenswrapper[4806]: I1125 15:00:35.827789 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-576cp"]
Nov 25 15:01:00 crc kubenswrapper[4806]: I1125 15:01:00.870536 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-576cp" podUID="e7c6a7c5-103e-4287-8e86-a7dbf2b48daf" containerName="registry" containerID="cri-o://461abb528b6fea8b43ea03ea42cad59b45549d5570014393b372fab679cb1901" gracePeriod=30
Nov 25 15:01:01 crc kubenswrapper[4806]: I1125 15:01:01.263860 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-576cp"
Nov 25 15:01:01 crc kubenswrapper[4806]: I1125 15:01:01.336255 4806 generic.go:334] "Generic (PLEG): container finished" podID="e7c6a7c5-103e-4287-8e86-a7dbf2b48daf" containerID="461abb528b6fea8b43ea03ea42cad59b45549d5570014393b372fab679cb1901" exitCode=0
Nov 25 15:01:01 crc kubenswrapper[4806]: I1125 15:01:01.336333 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-576cp" event={"ID":"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf","Type":"ContainerDied","Data":"461abb528b6fea8b43ea03ea42cad59b45549d5570014393b372fab679cb1901"}
Nov 25 15:01:01 crc kubenswrapper[4806]: I1125 15:01:01.336365 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-576cp" event={"ID":"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf","Type":"ContainerDied","Data":"5c80833af39e7e12665256aab8990df950e3fa811e931f6fa5009e2ee1a19097"}
Nov 25 15:01:01 crc kubenswrapper[4806]: I1125 15:01:01.336376 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-576cp"
Nov 25 15:01:01 crc kubenswrapper[4806]: I1125 15:01:01.336383 4806 scope.go:117] "RemoveContainer" containerID="461abb528b6fea8b43ea03ea42cad59b45549d5570014393b372fab679cb1901"
Nov 25 15:01:01 crc kubenswrapper[4806]: I1125 15:01:01.357906 4806 scope.go:117] "RemoveContainer" containerID="461abb528b6fea8b43ea03ea42cad59b45549d5570014393b372fab679cb1901"
Nov 25 15:01:01 crc kubenswrapper[4806]: E1125 15:01:01.358687 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"461abb528b6fea8b43ea03ea42cad59b45549d5570014393b372fab679cb1901\": container with ID starting with 461abb528b6fea8b43ea03ea42cad59b45549d5570014393b372fab679cb1901 not found: ID does not exist" containerID="461abb528b6fea8b43ea03ea42cad59b45549d5570014393b372fab679cb1901"
Nov 25 15:01:01 crc kubenswrapper[4806]: I1125 15:01:01.358734 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"461abb528b6fea8b43ea03ea42cad59b45549d5570014393b372fab679cb1901"} err="failed to get container status \"461abb528b6fea8b43ea03ea42cad59b45549d5570014393b372fab679cb1901\": rpc error: code = NotFound desc = could not find container \"461abb528b6fea8b43ea03ea42cad59b45549d5570014393b372fab679cb1901\": container with ID starting with 461abb528b6fea8b43ea03ea42cad59b45549d5570014393b372fab679cb1901 not found: ID does not exist"
Nov 25 15:01:01 crc kubenswrapper[4806]: I1125 15:01:01.389853 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e7c6a7c5-103e-4287-8e86-a7dbf2b48daf-installation-pull-secrets\") pod \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") "
Nov 25 15:01:01 crc kubenswrapper[4806]: I1125 15:01:01.389942 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e7c6a7c5-103e-4287-8e86-a7dbf2b48daf-registry-tls\") pod \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") "
Nov 25 15:01:01 crc kubenswrapper[4806]: I1125 15:01:01.390009 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e7c6a7c5-103e-4287-8e86-a7dbf2b48daf-registry-certificates\") pod \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") "
Nov 25 15:01:01 crc kubenswrapper[4806]: I1125 15:01:01.390083 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e7c6a7c5-103e-4287-8e86-a7dbf2b48daf-ca-trust-extracted\") pod \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") "
Nov 25 15:01:01 crc kubenswrapper[4806]: I1125 15:01:01.390113 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e7c6a7c5-103e-4287-8e86-a7dbf2b48daf-bound-sa-token\") pod \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") "
Nov 25 15:01:01 crc kubenswrapper[4806]: I1125 15:01:01.390187 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8l6n4\" (UniqueName: \"kubernetes.io/projected/e7c6a7c5-103e-4287-8e86-a7dbf2b48daf-kube-api-access-8l6n4\") pod \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") "
Nov 25 15:01:01 crc kubenswrapper[4806]: I1125 15:01:01.390235 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e7c6a7c5-103e-4287-8e86-a7dbf2b48daf-trusted-ca\") pod \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") "
Nov 25 15:01:01 crc kubenswrapper[4806]: I1125 15:01:01.390553 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\" (UID: \"e7c6a7c5-103e-4287-8e86-a7dbf2b48daf\") "
Nov 25 15:01:01 crc kubenswrapper[4806]: I1125 15:01:01.391764 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7c6a7c5-103e-4287-8e86-a7dbf2b48daf-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:01:01 crc kubenswrapper[4806]: I1125 15:01:01.391817 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7c6a7c5-103e-4287-8e86-a7dbf2b48daf-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:01:01 crc kubenswrapper[4806]: I1125 15:01:01.400682 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7c6a7c5-103e-4287-8e86-a7dbf2b48daf-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:01:01 crc kubenswrapper[4806]: I1125 15:01:01.401106 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7c6a7c5-103e-4287-8e86-a7dbf2b48daf-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:01:01 crc kubenswrapper[4806]: I1125 15:01:01.401306 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7c6a7c5-103e-4287-8e86-a7dbf2b48daf-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:01:01 crc kubenswrapper[4806]: I1125 15:01:01.401511 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7c6a7c5-103e-4287-8e86-a7dbf2b48daf-kube-api-access-8l6n4" (OuterVolumeSpecName: "kube-api-access-8l6n4") pod "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf"). InnerVolumeSpecName "kube-api-access-8l6n4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:01:01 crc kubenswrapper[4806]: I1125 15:01:01.407245 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Nov 25 15:01:01 crc kubenswrapper[4806]: I1125 15:01:01.414230 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7c6a7c5-103e-4287-8e86-a7dbf2b48daf-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf" (UID: "e7c6a7c5-103e-4287-8e86-a7dbf2b48daf"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 15:01:01 crc kubenswrapper[4806]: I1125 15:01:01.493839 4806 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e7c6a7c5-103e-4287-8e86-a7dbf2b48daf-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Nov 25 15:01:01 crc kubenswrapper[4806]: I1125 15:01:01.493885 4806 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e7c6a7c5-103e-4287-8e86-a7dbf2b48daf-bound-sa-token\") on node \"crc\" DevicePath \"\""
Nov 25 15:01:01 crc kubenswrapper[4806]: I1125 15:01:01.493894 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8l6n4\" (UniqueName: \"kubernetes.io/projected/e7c6a7c5-103e-4287-8e86-a7dbf2b48daf-kube-api-access-8l6n4\") on node \"crc\" DevicePath \"\""
Nov 25 15:01:01 crc kubenswrapper[4806]: I1125 15:01:01.493908 4806 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e7c6a7c5-103e-4287-8e86-a7dbf2b48daf-trusted-ca\") on node \"crc\" DevicePath \"\""
Nov 25 15:01:01 crc kubenswrapper[4806]: I1125 15:01:01.493918 4806 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e7c6a7c5-103e-4287-8e86-a7dbf2b48daf-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Nov 25 15:01:01 crc kubenswrapper[4806]: I1125 15:01:01.493926 4806 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e7c6a7c5-103e-4287-8e86-a7dbf2b48daf-registry-tls\") on node \"crc\" DevicePath \"\""
Nov 25 15:01:01 crc kubenswrapper[4806]: I1125 15:01:01.493933 4806 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e7c6a7c5-103e-4287-8e86-a7dbf2b48daf-registry-certificates\") on node \"crc\" DevicePath \"\""
Nov 25 15:01:01 crc kubenswrapper[4806]: I1125 15:01:01.665658 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-576cp"]
Nov 25 15:01:01 crc kubenswrapper[4806]: I1125 15:01:01.670053 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-576cp"]
Nov 25 15:01:02 crc kubenswrapper[4806]: I1125 15:01:02.098812 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7c6a7c5-103e-4287-8e86-a7dbf2b48daf" path="/var/lib/kubelet/pods/e7c6a7c5-103e-4287-8e86-a7dbf2b48daf/volumes"
Nov 25 15:01:51 crc kubenswrapper[4806]: I1125 15:01:51.360348 4806 scope.go:117] "RemoveContainer" containerID="51bb7e0026bf2504fc166c7414e205aef1eafb8c21ee2adb5286ffdfa4a304b8"
Nov 25 15:02:48 crc kubenswrapper[4806]: I1125 15:02:48.935751 4806 patch_prober.go:28] interesting pod/machine-config-daemon-kclf8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 15:02:48 crc kubenswrapper[4806]: I1125 15:02:48.936681 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 15:03:18 crc kubenswrapper[4806]: I1125 15:03:18.934905 4806 patch_prober.go:28] interesting pod/machine-config-daemon-kclf8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 15:03:18 crc kubenswrapper[4806]: I1125 15:03:18.936143 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 15:03:48 crc kubenswrapper[4806]: I1125 15:03:48.935499 4806 patch_prober.go:28] interesting pod/machine-config-daemon-kclf8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 15:03:48 crc kubenswrapper[4806]: I1125 15:03:48.936441 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 15:03:48 crc kubenswrapper[4806]: I1125 15:03:48.936509 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kclf8"
Nov 25 15:03:48 crc kubenswrapper[4806]: I1125 15:03:48.937275 4806 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"842f56c6e5e9f53ffe1d13b6e4c7354c36b5d058d4d84710d6bfcc9d586f8553"} pod="openshift-machine-config-operator/machine-config-daemon-kclf8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 25 15:03:48 crc kubenswrapper[4806]: I1125 15:03:48.937352 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" containerID="cri-o://842f56c6e5e9f53ffe1d13b6e4c7354c36b5d058d4d84710d6bfcc9d586f8553" gracePeriod=600
Nov 25 15:03:49 crc kubenswrapper[4806]: I1125 15:03:49.889944 4806 generic.go:334] "Generic (PLEG): container finished" podID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerID="842f56c6e5e9f53ffe1d13b6e4c7354c36b5d058d4d84710d6bfcc9d586f8553" exitCode=0
Nov 25 15:03:49 crc kubenswrapper[4806]: I1125 15:03:49.890001 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" event={"ID":"39baff20-1e9a-48b1-8872-155c5ad5931d","Type":"ContainerDied","Data":"842f56c6e5e9f53ffe1d13b6e4c7354c36b5d058d4d84710d6bfcc9d586f8553"}
Nov 25 15:03:49 crc kubenswrapper[4806]: I1125 15:03:49.890404 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" event={"ID":"39baff20-1e9a-48b1-8872-155c5ad5931d","Type":"ContainerStarted","Data":"86d8b6d9b2cb5c32be187803dad37de53c56e8b8e0993ab0429e9374ef8c5d27"}
Nov 25 15:03:49 crc kubenswrapper[4806]: I1125 15:03:49.890435 4806 scope.go:117] "RemoveContainer" containerID="86ffef5b64dafeab3b05f5e4a70ac74bb211e3538d488906b2518389de3474fd"
Nov 25 15:03:51 crc kubenswrapper[4806]: I1125 15:03:51.426624 4806 scope.go:117] "RemoveContainer" containerID="5063de0d7c9b4ccf73a3f112c3a1b0959ef13a629de31a84a9d8349544d9f90e"
Nov 25 15:04:51 crc kubenswrapper[4806]: I1125 15:04:51.467199 4806 scope.go:117] "RemoveContainer" containerID="9bfe52571bb7dcef99eb4d1d1673024d91dd8c898b5e8704517009fb6af13339"
Nov 25 15:05:23 crc kubenswrapper[4806]: I1125 15:05:23.670751 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-k8p4x"]
Nov 25 15:05:23 crc kubenswrapper[4806]: I1125 15:05:23.672032 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-k8p4x" podUID="83db970d-f5a9-4a8f-9c65-0cd2500331b1" containerName="controller-manager" containerID="cri-o://9f4fd580320462d018db3240e9a6edd085e31d563210f879cf20efec6530fdb2" gracePeriod=30
Nov 25 15:05:23 crc kubenswrapper[4806]: I1125 15:05:23.761638 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5tx2"]
Nov 25 15:05:23 crc kubenswrapper[4806]: I1125 15:05:23.761978 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5tx2" podUID="d3f9429a-5f3e-45bf-b7cc-dea3bee3e957" containerName="route-controller-manager" containerID="cri-o://40ac7d0dd7d3664c2d446ec66c67d7070625e6ed6d410c2ec87b8e0ed44617d1" gracePeriod=30
Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.060905 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-k8p4x"
Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.108487 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5tx2"
Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.245666 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83db970d-f5a9-4a8f-9c65-0cd2500331b1-config\") pod \"83db970d-f5a9-4a8f-9c65-0cd2500331b1\" (UID: \"83db970d-f5a9-4a8f-9c65-0cd2500331b1\") "
Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.246274 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/83db970d-f5a9-4a8f-9c65-0cd2500331b1-proxy-ca-bundles\") pod \"83db970d-f5a9-4a8f-9c65-0cd2500331b1\" (UID: \"83db970d-f5a9-4a8f-9c65-0cd2500331b1\") "
Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.246400 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83db970d-f5a9-4a8f-9c65-0cd2500331b1-serving-cert\") pod \"83db970d-f5a9-4a8f-9c65-0cd2500331b1\" (UID: \"83db970d-f5a9-4a8f-9c65-0cd2500331b1\") "
Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.246500 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwpf7\" (UniqueName: \"kubernetes.io/projected/83db970d-f5a9-4a8f-9c65-0cd2500331b1-kube-api-access-hwpf7\") pod \"83db970d-f5a9-4a8f-9c65-0cd2500331b1\" (UID: \"83db970d-f5a9-4a8f-9c65-0cd2500331b1\") "
Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.246645 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83db970d-f5a9-4a8f-9c65-0cd2500331b1-client-ca\") pod \"83db970d-f5a9-4a8f-9c65-0cd2500331b1\" (UID: \"83db970d-f5a9-4a8f-9c65-0cd2500331b1\") "
Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.246765 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3f9429a-5f3e-45bf-b7cc-dea3bee3e957-config\") pod \"d3f9429a-5f3e-45bf-b7cc-dea3bee3e957\" (UID: \"d3f9429a-5f3e-45bf-b7cc-dea3bee3e957\") "
Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.246893 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3f9429a-5f3e-45bf-b7cc-dea3bee3e957-client-ca\") pod \"d3f9429a-5f3e-45bf-b7cc-dea3bee3e957\" (UID: \"d3f9429a-5f3e-45bf-b7cc-dea3bee3e957\") "
Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.246982 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3f9429a-5f3e-45bf-b7cc-dea3bee3e957-serving-cert\") pod \"d3f9429a-5f3e-45bf-b7cc-dea3bee3e957\" (UID: \"d3f9429a-5f3e-45bf-b7cc-dea3bee3e957\") "
Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.247091 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cr7w6\" (UniqueName: \"kubernetes.io/projected/d3f9429a-5f3e-45bf-b7cc-dea3bee3e957-kube-api-access-cr7w6\") pod \"d3f9429a-5f3e-45bf-b7cc-dea3bee3e957\" (UID: \"d3f9429a-5f3e-45bf-b7cc-dea3bee3e957\") "
Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.247129 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83db970d-f5a9-4a8f-9c65-0cd2500331b1-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "83db970d-f5a9-4a8f-9c65-0cd2500331b1" (UID: "83db970d-f5a9-4a8f-9c65-0cd2500331b1"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.247166 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83db970d-f5a9-4a8f-9c65-0cd2500331b1-client-ca" (OuterVolumeSpecName: "client-ca") pod "83db970d-f5a9-4a8f-9c65-0cd2500331b1" (UID: "83db970d-f5a9-4a8f-9c65-0cd2500331b1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.247203 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83db970d-f5a9-4a8f-9c65-0cd2500331b1-config" (OuterVolumeSpecName: "config") pod "83db970d-f5a9-4a8f-9c65-0cd2500331b1" (UID: "83db970d-f5a9-4a8f-9c65-0cd2500331b1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.247464 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3f9429a-5f3e-45bf-b7cc-dea3bee3e957-client-ca" (OuterVolumeSpecName: "client-ca") pod "d3f9429a-5f3e-45bf-b7cc-dea3bee3e957" (UID: "d3f9429a-5f3e-45bf-b7cc-dea3bee3e957"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.247488 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3f9429a-5f3e-45bf-b7cc-dea3bee3e957-config" (OuterVolumeSpecName: "config") pod "d3f9429a-5f3e-45bf-b7cc-dea3bee3e957" (UID: "d3f9429a-5f3e-45bf-b7cc-dea3bee3e957"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.260768 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3f9429a-5f3e-45bf-b7cc-dea3bee3e957-kube-api-access-cr7w6" (OuterVolumeSpecName: "kube-api-access-cr7w6") pod "d3f9429a-5f3e-45bf-b7cc-dea3bee3e957" (UID: "d3f9429a-5f3e-45bf-b7cc-dea3bee3e957"). InnerVolumeSpecName "kube-api-access-cr7w6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.261546 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83db970d-f5a9-4a8f-9c65-0cd2500331b1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "83db970d-f5a9-4a8f-9c65-0cd2500331b1" (UID: "83db970d-f5a9-4a8f-9c65-0cd2500331b1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.264699 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3f9429a-5f3e-45bf-b7cc-dea3bee3e957-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d3f9429a-5f3e-45bf-b7cc-dea3bee3e957" (UID: "d3f9429a-5f3e-45bf-b7cc-dea3bee3e957"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.265162 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83db970d-f5a9-4a8f-9c65-0cd2500331b1-kube-api-access-hwpf7" (OuterVolumeSpecName: "kube-api-access-hwpf7") pod "83db970d-f5a9-4a8f-9c65-0cd2500331b1" (UID: "83db970d-f5a9-4a8f-9c65-0cd2500331b1"). InnerVolumeSpecName "kube-api-access-hwpf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.348800 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cr7w6\" (UniqueName: \"kubernetes.io/projected/d3f9429a-5f3e-45bf-b7cc-dea3bee3e957-kube-api-access-cr7w6\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.348859 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83db970d-f5a9-4a8f-9c65-0cd2500331b1-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.348871 4806 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/83db970d-f5a9-4a8f-9c65-0cd2500331b1-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.348888 4806 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83db970d-f5a9-4a8f-9c65-0cd2500331b1-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.348900 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwpf7\" (UniqueName: \"kubernetes.io/projected/83db970d-f5a9-4a8f-9c65-0cd2500331b1-kube-api-access-hwpf7\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.348912 4806 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83db970d-f5a9-4a8f-9c65-0cd2500331b1-client-ca\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.348924 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3f9429a-5f3e-45bf-b7cc-dea3bee3e957-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.348937 4806 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3f9429a-5f3e-45bf-b7cc-dea3bee3e957-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.348949 4806 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3f9429a-5f3e-45bf-b7cc-dea3bee3e957-client-ca\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.519141 4806 generic.go:334] "Generic (PLEG): container finished" podID="83db970d-f5a9-4a8f-9c65-0cd2500331b1" containerID="9f4fd580320462d018db3240e9a6edd085e31d563210f879cf20efec6530fdb2" exitCode=0 Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.519200 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-k8p4x" Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.519232 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-k8p4x" event={"ID":"83db970d-f5a9-4a8f-9c65-0cd2500331b1","Type":"ContainerDied","Data":"9f4fd580320462d018db3240e9a6edd085e31d563210f879cf20efec6530fdb2"} Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.520119 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-k8p4x" event={"ID":"83db970d-f5a9-4a8f-9c65-0cd2500331b1","Type":"ContainerDied","Data":"a61f11959e3a547f5786697f7734844b2d197305e20dcbc491f04c7528612074"} Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.520153 4806 scope.go:117] "RemoveContainer" containerID="9f4fd580320462d018db3240e9a6edd085e31d563210f879cf20efec6530fdb2" Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.521593 4806 generic.go:334] "Generic (PLEG): container finished" podID="d3f9429a-5f3e-45bf-b7cc-dea3bee3e957" containerID="40ac7d0dd7d3664c2d446ec66c67d7070625e6ed6d410c2ec87b8e0ed44617d1" exitCode=0 Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.521631 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5tx2" event={"ID":"d3f9429a-5f3e-45bf-b7cc-dea3bee3e957","Type":"ContainerDied","Data":"40ac7d0dd7d3664c2d446ec66c67d7070625e6ed6d410c2ec87b8e0ed44617d1"} Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.521655 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5tx2" event={"ID":"d3f9429a-5f3e-45bf-b7cc-dea3bee3e957","Type":"ContainerDied","Data":"8a2b25d91ae8e8578871bf34fc8a9d3c620bd78f0741a299d315043a9a10fa4b"} Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.521663 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5tx2" Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.542986 4806 scope.go:117] "RemoveContainer" containerID="9f4fd580320462d018db3240e9a6edd085e31d563210f879cf20efec6530fdb2" Nov 25 15:05:24 crc kubenswrapper[4806]: E1125 15:05:24.543738 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f4fd580320462d018db3240e9a6edd085e31d563210f879cf20efec6530fdb2\": container with ID starting with 9f4fd580320462d018db3240e9a6edd085e31d563210f879cf20efec6530fdb2 not found: ID does not exist" containerID="9f4fd580320462d018db3240e9a6edd085e31d563210f879cf20efec6530fdb2" Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.543791 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f4fd580320462d018db3240e9a6edd085e31d563210f879cf20efec6530fdb2"} err="failed to get container status \"9f4fd580320462d018db3240e9a6edd085e31d563210f879cf20efec6530fdb2\": rpc error: code = NotFound desc = could not find container \"9f4fd580320462d018db3240e9a6edd085e31d563210f879cf20efec6530fdb2\": container with ID starting with 9f4fd580320462d018db3240e9a6edd085e31d563210f879cf20efec6530fdb2 not found: ID does not exist" Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.543826 4806 scope.go:117] "RemoveContainer" containerID="40ac7d0dd7d3664c2d446ec66c67d7070625e6ed6d410c2ec87b8e0ed44617d1" Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.566934 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5tx2"] Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.570411 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-p5tx2"] Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.571106 4806 scope.go:117] "RemoveContainer" containerID="40ac7d0dd7d3664c2d446ec66c67d7070625e6ed6d410c2ec87b8e0ed44617d1" Nov 25 15:05:24 crc kubenswrapper[4806]: E1125 15:05:24.574279 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40ac7d0dd7d3664c2d446ec66c67d7070625e6ed6d410c2ec87b8e0ed44617d1\": container with ID starting with 40ac7d0dd7d3664c2d446ec66c67d7070625e6ed6d410c2ec87b8e0ed44617d1 not found: ID does not exist" containerID="40ac7d0dd7d3664c2d446ec66c67d7070625e6ed6d410c2ec87b8e0ed44617d1" Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.574352 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40ac7d0dd7d3664c2d446ec66c67d7070625e6ed6d410c2ec87b8e0ed44617d1"} err="failed to get container status \"40ac7d0dd7d3664c2d446ec66c67d7070625e6ed6d410c2ec87b8e0ed44617d1\": rpc error: code = NotFound desc = could not find container \"40ac7d0dd7d3664c2d446ec66c67d7070625e6ed6d410c2ec87b8e0ed44617d1\": container with ID starting with 40ac7d0dd7d3664c2d446ec66c67d7070625e6ed6d410c2ec87b8e0ed44617d1 not found: ID does not exist" Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.579112 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-k8p4x"] Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.579409 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-k8p4x"] Nov 25 
15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.876035 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5cd5ddb48d-22wgv"] Nov 25 15:05:24 crc kubenswrapper[4806]: E1125 15:05:24.876481 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3f9429a-5f3e-45bf-b7cc-dea3bee3e957" containerName="route-controller-manager" Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.876501 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3f9429a-5f3e-45bf-b7cc-dea3bee3e957" containerName="route-controller-manager" Nov 25 15:05:24 crc kubenswrapper[4806]: E1125 15:05:24.876524 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7c6a7c5-103e-4287-8e86-a7dbf2b48daf" containerName="registry" Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.876532 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7c6a7c5-103e-4287-8e86-a7dbf2b48daf" containerName="registry" Nov 25 15:05:24 crc kubenswrapper[4806]: E1125 15:05:24.876543 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83db970d-f5a9-4a8f-9c65-0cd2500331b1" containerName="controller-manager" Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.876551 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="83db970d-f5a9-4a8f-9c65-0cd2500331b1" containerName="controller-manager" Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.876697 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3f9429a-5f3e-45bf-b7cc-dea3bee3e957" containerName="route-controller-manager" Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.876710 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7c6a7c5-103e-4287-8e86-a7dbf2b48daf" containerName="registry" Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.876731 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="83db970d-f5a9-4a8f-9c65-0cd2500331b1" containerName="controller-manager" Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.877380 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cd5ddb48d-22wgv" Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.879755 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75f96dbcbf-78k7p"] Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.880379 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.881002 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.881103 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75f96dbcbf-78k7p" Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.881277 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.881290 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.882682 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.882723 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.885047 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.885655 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.885910 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.886099 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.887687 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.888187 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.896417 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.897944 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5cd5ddb48d-22wgv"] Nov 25 15:05:24 crc kubenswrapper[4806]: I1125 15:05:24.901263 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75f96dbcbf-78k7p"] Nov 25 15:05:25 crc kubenswrapper[4806]: I1125 15:05:25.060364 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84dfa003-5ddc-442e-bbfd-ef34550a6608-serving-cert\") pod \"route-controller-manager-75f96dbcbf-78k7p\" (UID: \"84dfa003-5ddc-442e-bbfd-ef34550a6608\") " pod="openshift-route-controller-manager/route-controller-manager-75f96dbcbf-78k7p" Nov 25 15:05:25 crc kubenswrapper[4806]: I1125 15:05:25.060430 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eda3c11a-a747-4c03-98c8-abb197eede1d-proxy-ca-bundles\") pod \"controller-manager-5cd5ddb48d-22wgv\" (UID: \"eda3c11a-a747-4c03-98c8-abb197eede1d\") " pod="openshift-controller-manager/controller-manager-5cd5ddb48d-22wgv" Nov 25 15:05:25 crc kubenswrapper[4806]: I1125 15:05:25.060473 4806 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eda3c11a-a747-4c03-98c8-abb197eede1d-client-ca\") pod \"controller-manager-5cd5ddb48d-22wgv\" (UID: \"eda3c11a-a747-4c03-98c8-abb197eede1d\") " pod="openshift-controller-manager/controller-manager-5cd5ddb48d-22wgv" Nov 25 15:05:25 crc kubenswrapper[4806]: I1125 15:05:25.060509 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbhln\" (UniqueName: \"kubernetes.io/projected/84dfa003-5ddc-442e-bbfd-ef34550a6608-kube-api-access-cbhln\") pod \"route-controller-manager-75f96dbcbf-78k7p\" (UID: \"84dfa003-5ddc-442e-bbfd-ef34550a6608\") " pod="openshift-route-controller-manager/route-controller-manager-75f96dbcbf-78k7p" Nov 25 15:05:25 crc kubenswrapper[4806]: I1125 15:05:25.060549 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84dfa003-5ddc-442e-bbfd-ef34550a6608-client-ca\") pod \"route-controller-manager-75f96dbcbf-78k7p\" (UID: \"84dfa003-5ddc-442e-bbfd-ef34550a6608\") " pod="openshift-route-controller-manager/route-controller-manager-75f96dbcbf-78k7p" Nov 25 15:05:25 crc kubenswrapper[4806]: I1125 15:05:25.060588 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eda3c11a-a747-4c03-98c8-abb197eede1d-serving-cert\") pod \"controller-manager-5cd5ddb48d-22wgv\" (UID: \"eda3c11a-a747-4c03-98c8-abb197eede1d\") " pod="openshift-controller-manager/controller-manager-5cd5ddb48d-22wgv" Nov 25 15:05:25 crc kubenswrapper[4806]: I1125 15:05:25.060609 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eda3c11a-a747-4c03-98c8-abb197eede1d-config\") pod \"controller-manager-5cd5ddb48d-22wgv\" (UID: \"eda3c11a-a747-4c03-98c8-abb197eede1d\") " pod="openshift-controller-manager/controller-manager-5cd5ddb48d-22wgv" Nov 25 15:05:25 crc kubenswrapper[4806]: I1125 15:05:25.060635 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g8hd\" (UniqueName: \"kubernetes.io/projected/eda3c11a-a747-4c03-98c8-abb197eede1d-kube-api-access-6g8hd\") pod \"controller-manager-5cd5ddb48d-22wgv\" (UID: \"eda3c11a-a747-4c03-98c8-abb197eede1d\") " pod="openshift-controller-manager/controller-manager-5cd5ddb48d-22wgv" Nov 25 15:05:25 crc kubenswrapper[4806]: I1125 15:05:25.060818 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84dfa003-5ddc-442e-bbfd-ef34550a6608-config\") pod \"route-controller-manager-75f96dbcbf-78k7p\" (UID: \"84dfa003-5ddc-442e-bbfd-ef34550a6608\") " pod="openshift-route-controller-manager/route-controller-manager-75f96dbcbf-78k7p" Nov 25 15:05:25 crc kubenswrapper[4806]: I1125 15:05:25.162742 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbhln\" (UniqueName: \"kubernetes.io/projected/84dfa003-5ddc-442e-bbfd-ef34550a6608-kube-api-access-cbhln\") pod \"route-controller-manager-75f96dbcbf-78k7p\" (UID: \"84dfa003-5ddc-442e-bbfd-ef34550a6608\") " pod="openshift-route-controller-manager/route-controller-manager-75f96dbcbf-78k7p" Nov 25 15:05:25 crc kubenswrapper[4806]: I1125 
15:05:25.162827 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84dfa003-5ddc-442e-bbfd-ef34550a6608-client-ca\") pod \"route-controller-manager-75f96dbcbf-78k7p\" (UID: \"84dfa003-5ddc-442e-bbfd-ef34550a6608\") " pod="openshift-route-controller-manager/route-controller-manager-75f96dbcbf-78k7p" Nov 25 15:05:25 crc kubenswrapper[4806]: I1125 15:05:25.162877 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eda3c11a-a747-4c03-98c8-abb197eede1d-serving-cert\") pod \"controller-manager-5cd5ddb48d-22wgv\" (UID: \"eda3c11a-a747-4c03-98c8-abb197eede1d\") " pod="openshift-controller-manager/controller-manager-5cd5ddb48d-22wgv" Nov 25 15:05:25 crc kubenswrapper[4806]: I1125 15:05:25.162909 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eda3c11a-a747-4c03-98c8-abb197eede1d-config\") pod \"controller-manager-5cd5ddb48d-22wgv\" (UID: \"eda3c11a-a747-4c03-98c8-abb197eede1d\") " pod="openshift-controller-manager/controller-manager-5cd5ddb48d-22wgv" Nov 25 15:05:25 crc kubenswrapper[4806]: I1125 15:05:25.162934 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6g8hd\" (UniqueName: \"kubernetes.io/projected/eda3c11a-a747-4c03-98c8-abb197eede1d-kube-api-access-6g8hd\") pod \"controller-manager-5cd5ddb48d-22wgv\" (UID: \"eda3c11a-a747-4c03-98c8-abb197eede1d\") " pod="openshift-controller-manager/controller-manager-5cd5ddb48d-22wgv" Nov 25 15:05:25 crc kubenswrapper[4806]: I1125 15:05:25.163013 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84dfa003-5ddc-442e-bbfd-ef34550a6608-config\") pod \"route-controller-manager-75f96dbcbf-78k7p\" (UID: \"84dfa003-5ddc-442e-bbfd-ef34550a6608\") " pod="openshift-route-controller-manager/route-controller-manager-75f96dbcbf-78k7p" Nov 25 15:05:25 crc kubenswrapper[4806]: I1125 15:05:25.163041 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84dfa003-5ddc-442e-bbfd-ef34550a6608-serving-cert\") pod \"route-controller-manager-75f96dbcbf-78k7p\" (UID: \"84dfa003-5ddc-442e-bbfd-ef34550a6608\") " pod="openshift-route-controller-manager/route-controller-manager-75f96dbcbf-78k7p" Nov 25 15:05:25 crc kubenswrapper[4806]: I1125 15:05:25.163062 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eda3c11a-a747-4c03-98c8-abb197eede1d-proxy-ca-bundles\") pod \"controller-manager-5cd5ddb48d-22wgv\" (UID: \"eda3c11a-a747-4c03-98c8-abb197eede1d\") " pod="openshift-controller-manager/controller-manager-5cd5ddb48d-22wgv" Nov 25 15:05:25 crc kubenswrapper[4806]: I1125 15:05:25.163091 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eda3c11a-a747-4c03-98c8-abb197eede1d-client-ca\") pod \"controller-manager-5cd5ddb48d-22wgv\" (UID: \"eda3c11a-a747-4c03-98c8-abb197eede1d\") " pod="openshift-controller-manager/controller-manager-5cd5ddb48d-22wgv" Nov 25 15:05:25 crc kubenswrapper[4806]: I1125 15:05:25.164385 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/eda3c11a-a747-4c03-98c8-abb197eede1d-client-ca\") pod \"controller-manager-5cd5ddb48d-22wgv\" (UID: \"eda3c11a-a747-4c03-98c8-abb197eede1d\") " pod="openshift-controller-manager/controller-manager-5cd5ddb48d-22wgv" Nov 25 15:05:25 crc kubenswrapper[4806]: I1125 15:05:25.164440 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84dfa003-5ddc-442e-bbfd-ef34550a6608-config\") pod \"route-controller-manager-75f96dbcbf-78k7p\" (UID: \"84dfa003-5ddc-442e-bbfd-ef34550a6608\") " pod="openshift-route-controller-manager/route-controller-manager-75f96dbcbf-78k7p" Nov 25 15:05:25 crc kubenswrapper[4806]: I1125 15:05:25.164498 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eda3c11a-a747-4c03-98c8-abb197eede1d-proxy-ca-bundles\") pod \"controller-manager-5cd5ddb48d-22wgv\" (UID: \"eda3c11a-a747-4c03-98c8-abb197eede1d\") " pod="openshift-controller-manager/controller-manager-5cd5ddb48d-22wgv" Nov 25 15:05:25 crc kubenswrapper[4806]: I1125 15:05:25.164913 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eda3c11a-a747-4c03-98c8-abb197eede1d-config\") pod \"controller-manager-5cd5ddb48d-22wgv\" (UID: \"eda3c11a-a747-4c03-98c8-abb197eede1d\") " pod="openshift-controller-manager/controller-manager-5cd5ddb48d-22wgv" Nov 25 15:05:25 crc kubenswrapper[4806]: I1125 15:05:25.165808 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84dfa003-5ddc-442e-bbfd-ef34550a6608-client-ca\") pod \"route-controller-manager-75f96dbcbf-78k7p\" (UID: \"84dfa003-5ddc-442e-bbfd-ef34550a6608\") " pod="openshift-route-controller-manager/route-controller-manager-75f96dbcbf-78k7p" Nov 25 15:05:25 crc kubenswrapper[4806]: I1125 15:05:25.170450 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84dfa003-5ddc-442e-bbfd-ef34550a6608-serving-cert\") pod \"route-controller-manager-75f96dbcbf-78k7p\" (UID: \"84dfa003-5ddc-442e-bbfd-ef34550a6608\") " pod="openshift-route-controller-manager/route-controller-manager-75f96dbcbf-78k7p" Nov 25 15:05:25 crc kubenswrapper[4806]: I1125 15:05:25.173908 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eda3c11a-a747-4c03-98c8-abb197eede1d-serving-cert\") pod \"controller-manager-5cd5ddb48d-22wgv\" (UID: \"eda3c11a-a747-4c03-98c8-abb197eede1d\") " pod="openshift-controller-manager/controller-manager-5cd5ddb48d-22wgv" Nov 25 15:05:25 crc kubenswrapper[4806]: I1125 15:05:25.182782 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbhln\" (UniqueName: \"kubernetes.io/projected/84dfa003-5ddc-442e-bbfd-ef34550a6608-kube-api-access-cbhln\") pod \"route-controller-manager-75f96dbcbf-78k7p\" (UID: \"84dfa003-5ddc-442e-bbfd-ef34550a6608\") " pod="openshift-route-controller-manager/route-controller-manager-75f96dbcbf-78k7p" Nov 25 15:05:25 crc kubenswrapper[4806]: I1125 15:05:25.182896 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6g8hd\" (UniqueName: \"kubernetes.io/projected/eda3c11a-a747-4c03-98c8-abb197eede1d-kube-api-access-6g8hd\") pod \"controller-manager-5cd5ddb48d-22wgv\" (UID: \"eda3c11a-a747-4c03-98c8-abb197eede1d\") " 
pod="openshift-controller-manager/controller-manager-5cd5ddb48d-22wgv" Nov 25 15:05:25 crc kubenswrapper[4806]: I1125 15:05:25.271346 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5cd5ddb48d-22wgv" Nov 25 15:05:25 crc kubenswrapper[4806]: I1125 15:05:25.286225 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75f96dbcbf-78k7p" Nov 25 15:05:25 crc kubenswrapper[4806]: I1125 15:05:25.561817 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5cd5ddb48d-22wgv"] Nov 25 15:05:25 crc kubenswrapper[4806]: I1125 15:05:25.616931 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75f96dbcbf-78k7p"] Nov 25 15:05:25 crc kubenswrapper[4806]: W1125 15:05:25.626995 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84dfa003_5ddc_442e_bbfd_ef34550a6608.slice/crio-56c41b585db83b3ecb2cf8529692bd388022aa970e4db255f947199517ed2767 WatchSource:0}: Error finding container 56c41b585db83b3ecb2cf8529692bd388022aa970e4db255f947199517ed2767: Status 404 returned error can't find the container with id 56c41b585db83b3ecb2cf8529692bd388022aa970e4db255f947199517ed2767 Nov 25 15:05:26 crc kubenswrapper[4806]: I1125 15:05:26.098845 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83db970d-f5a9-4a8f-9c65-0cd2500331b1" path="/var/lib/kubelet/pods/83db970d-f5a9-4a8f-9c65-0cd2500331b1/volumes" Nov 25 15:05:26 crc kubenswrapper[4806]: I1125 15:05:26.100477 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3f9429a-5f3e-45bf-b7cc-dea3bee3e957" path="/var/lib/kubelet/pods/d3f9429a-5f3e-45bf-b7cc-dea3bee3e957/volumes" Nov 25 15:05:26 crc kubenswrapper[4806]: I1125 15:05:26.552749 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75f96dbcbf-78k7p" event={"ID":"84dfa003-5ddc-442e-bbfd-ef34550a6608","Type":"ContainerStarted","Data":"4719031e7999e5da0313974f6e32b5911097cba24d72a1aa97fb7322f3f42e07"} Nov 25 15:05:26 crc kubenswrapper[4806]: I1125 15:05:26.552831 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75f96dbcbf-78k7p" event={"ID":"84dfa003-5ddc-442e-bbfd-ef34550a6608","Type":"ContainerStarted","Data":"56c41b585db83b3ecb2cf8529692bd388022aa970e4db255f947199517ed2767"} Nov 25 15:05:26 crc kubenswrapper[4806]: I1125 15:05:26.553028 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-75f96dbcbf-78k7p" Nov 25 15:05:26 crc kubenswrapper[4806]: I1125 15:05:26.554657 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cd5ddb48d-22wgv" event={"ID":"eda3c11a-a747-4c03-98c8-abb197eede1d","Type":"ContainerStarted","Data":"e71ab6a62002ed668c31c741f846ad9f8026c3e50027d6516062fda9eace471b"} Nov 25 15:05:26 crc kubenswrapper[4806]: I1125 15:05:26.554703 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cd5ddb48d-22wgv" event={"ID":"eda3c11a-a747-4c03-98c8-abb197eede1d","Type":"ContainerStarted","Data":"e6f7897a47c00af196cf482dd91fe1105614070282a046a74a0ba929fc60f6b7"} Nov 25 15:05:26 
crc kubenswrapper[4806]: I1125 15:05:26.554929 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5cd5ddb48d-22wgv" Nov 25 15:05:26 crc kubenswrapper[4806]: I1125 15:05:26.558785 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-75f96dbcbf-78k7p" Nov 25 15:05:26 crc kubenswrapper[4806]: I1125 15:05:26.560279 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5cd5ddb48d-22wgv" Nov 25 15:05:26 crc kubenswrapper[4806]: I1125 15:05:26.575019 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-75f96dbcbf-78k7p" podStartSLOduration=3.574992651 podStartE2EDuration="3.574992651s" podCreationTimestamp="2025-11-25 15:05:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:05:26.571888234 +0000 UTC m=+759.224030675" watchObservedRunningTime="2025-11-25 15:05:26.574992651 +0000 UTC m=+759.227135052" Nov 25 15:05:26 crc kubenswrapper[4806]: I1125 15:05:26.596941 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5cd5ddb48d-22wgv" podStartSLOduration=3.596906718 podStartE2EDuration="3.596906718s" podCreationTimestamp="2025-11-25 15:05:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:05:26.593279116 +0000 UTC m=+759.245421527" watchObservedRunningTime="2025-11-25 15:05:26.596906718 +0000 UTC m=+759.249049129" Nov 25 15:05:31 crc kubenswrapper[4806]: I1125 15:05:31.977218 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210clkf2"] Nov 25 15:05:31 crc kubenswrapper[4806]: I1125 15:05:31.979744 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210clkf2" Nov 25 15:05:31 crc kubenswrapper[4806]: I1125 15:05:31.982890 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 25 15:05:31 crc kubenswrapper[4806]: I1125 15:05:31.985476 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210clkf2"] Nov 25 15:05:32 crc kubenswrapper[4806]: I1125 15:05:32.073106 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eea848bf-e720-4a8e-bcc4-c3ff44ba44c0-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210clkf2\" (UID: \"eea848bf-e720-4a8e-bcc4-c3ff44ba44c0\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210clkf2" Nov 25 15:05:32 crc kubenswrapper[4806]: I1125 15:05:32.073163 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eea848bf-e720-4a8e-bcc4-c3ff44ba44c0-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210clkf2\" (UID: \"eea848bf-e720-4a8e-bcc4-c3ff44ba44c0\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210clkf2" Nov 25 15:05:32 crc kubenswrapper[4806]: I1125 15:05:32.073229 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jms5v\" (UniqueName: \"kubernetes.io/projected/eea848bf-e720-4a8e-bcc4-c3ff44ba44c0-kube-api-access-jms5v\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210clkf2\" (UID: \"eea848bf-e720-4a8e-bcc4-c3ff44ba44c0\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210clkf2" Nov 25 15:05:32 crc kubenswrapper[4806]: I1125 15:05:32.174429 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eea848bf-e720-4a8e-bcc4-c3ff44ba44c0-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210clkf2\" (UID: \"eea848bf-e720-4a8e-bcc4-c3ff44ba44c0\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210clkf2" Nov 25 15:05:32 crc kubenswrapper[4806]: I1125 15:05:32.174502 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eea848bf-e720-4a8e-bcc4-c3ff44ba44c0-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210clkf2\" (UID: \"eea848bf-e720-4a8e-bcc4-c3ff44ba44c0\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210clkf2" Nov 25 15:05:32 crc kubenswrapper[4806]: I1125 15:05:32.174547 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jms5v\" (UniqueName: \"kubernetes.io/projected/eea848bf-e720-4a8e-bcc4-c3ff44ba44c0-kube-api-access-jms5v\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210clkf2\" (UID: \"eea848bf-e720-4a8e-bcc4-c3ff44ba44c0\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210clkf2" Nov 25 15:05:32 crc kubenswrapper[4806]: I1125 15:05:32.175244 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/eea848bf-e720-4a8e-bcc4-c3ff44ba44c0-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210clkf2\" (UID: \"eea848bf-e720-4a8e-bcc4-c3ff44ba44c0\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210clkf2" Nov 25 15:05:32 crc kubenswrapper[4806]: I1125 15:05:32.176517 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eea848bf-e720-4a8e-bcc4-c3ff44ba44c0-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210clkf2\" (UID: \"eea848bf-e720-4a8e-bcc4-c3ff44ba44c0\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210clkf2" Nov 25 15:05:32 crc kubenswrapper[4806]: I1125 15:05:32.200007 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jms5v\" (UniqueName: \"kubernetes.io/projected/eea848bf-e720-4a8e-bcc4-c3ff44ba44c0-kube-api-access-jms5v\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210clkf2\" (UID: \"eea848bf-e720-4a8e-bcc4-c3ff44ba44c0\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210clkf2" Nov 25 15:05:32 crc kubenswrapper[4806]: I1125 15:05:32.306195 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210clkf2" Nov 25 15:05:32 crc kubenswrapper[4806]: I1125 15:05:32.364707 4806 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 25 15:05:32 crc kubenswrapper[4806]: I1125 15:05:32.800258 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210clkf2"] Nov 25 15:05:33 crc kubenswrapper[4806]: I1125 15:05:33.599668 4806 generic.go:334] "Generic (PLEG): container finished" podID="eea848bf-e720-4a8e-bcc4-c3ff44ba44c0" containerID="e23fb5a942bf8996d78c7e20071b34830391bc4036784ed398a64f387d1e4c17" exitCode=0 Nov 25 15:05:33 crc kubenswrapper[4806]: I1125 15:05:33.599978 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210clkf2" event={"ID":"eea848bf-e720-4a8e-bcc4-c3ff44ba44c0","Type":"ContainerDied","Data":"e23fb5a942bf8996d78c7e20071b34830391bc4036784ed398a64f387d1e4c17"} Nov 25 15:05:33 crc kubenswrapper[4806]: I1125 15:05:33.600050 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210clkf2" event={"ID":"eea848bf-e720-4a8e-bcc4-c3ff44ba44c0","Type":"ContainerStarted","Data":"19b30215411013a3a14ff4de0beecb456fa63ae9f686099e7788429d42df388c"} Nov 25 15:05:33 crc kubenswrapper[4806]: I1125 15:05:33.601975 4806 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 15:05:34 crc kubenswrapper[4806]: I1125 15:05:34.317646 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2x6cn"] Nov 25 15:05:34 crc kubenswrapper[4806]: I1125 15:05:34.318871 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2x6cn" Nov 25 15:05:34 crc kubenswrapper[4806]: I1125 15:05:34.331152 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2x6cn"] Nov 25 15:05:34 crc kubenswrapper[4806]: I1125 15:05:34.512221 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2541bf92-f78f-4d3a-8000-1a8ca4e90593-utilities\") pod \"redhat-operators-2x6cn\" (UID: \"2541bf92-f78f-4d3a-8000-1a8ca4e90593\") " pod="openshift-marketplace/redhat-operators-2x6cn" Nov 25 15:05:34 crc kubenswrapper[4806]: I1125 15:05:34.512307 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnrfw\" (UniqueName: \"kubernetes.io/projected/2541bf92-f78f-4d3a-8000-1a8ca4e90593-kube-api-access-tnrfw\") pod \"redhat-operators-2x6cn\" (UID: \"2541bf92-f78f-4d3a-8000-1a8ca4e90593\") " pod="openshift-marketplace/redhat-operators-2x6cn" Nov 25 15:05:34 crc kubenswrapper[4806]: I1125 15:05:34.512472 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2541bf92-f78f-4d3a-8000-1a8ca4e90593-catalog-content\") pod \"redhat-operators-2x6cn\" (UID: \"2541bf92-f78f-4d3a-8000-1a8ca4e90593\") " pod="openshift-marketplace/redhat-operators-2x6cn" Nov 25 15:05:34 crc kubenswrapper[4806]: I1125 15:05:34.613665 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2541bf92-f78f-4d3a-8000-1a8ca4e90593-utilities\") pod \"redhat-operators-2x6cn\" (UID: \"2541bf92-f78f-4d3a-8000-1a8ca4e90593\") " pod="openshift-marketplace/redhat-operators-2x6cn" Nov 25 15:05:34 crc kubenswrapper[4806]: I1125 15:05:34.613735 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnrfw\" (UniqueName: \"kubernetes.io/projected/2541bf92-f78f-4d3a-8000-1a8ca4e90593-kube-api-access-tnrfw\") pod \"redhat-operators-2x6cn\" (UID: \"2541bf92-f78f-4d3a-8000-1a8ca4e90593\") " pod="openshift-marketplace/redhat-operators-2x6cn" Nov 25 15:05:34 crc kubenswrapper[4806]: I1125 15:05:34.613772 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2541bf92-f78f-4d3a-8000-1a8ca4e90593-catalog-content\") pod \"redhat-operators-2x6cn\" (UID: \"2541bf92-f78f-4d3a-8000-1a8ca4e90593\") " pod="openshift-marketplace/redhat-operators-2x6cn" Nov 25 15:05:34 crc kubenswrapper[4806]: I1125 15:05:34.614296 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2541bf92-f78f-4d3a-8000-1a8ca4e90593-catalog-content\") pod \"redhat-operators-2x6cn\" (UID: \"2541bf92-f78f-4d3a-8000-1a8ca4e90593\") " pod="openshift-marketplace/redhat-operators-2x6cn" Nov 25 15:05:34 crc kubenswrapper[4806]: I1125 15:05:34.614333 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2541bf92-f78f-4d3a-8000-1a8ca4e90593-utilities\") pod \"redhat-operators-2x6cn\" (UID: \"2541bf92-f78f-4d3a-8000-1a8ca4e90593\") " pod="openshift-marketplace/redhat-operators-2x6cn" Nov 25 15:05:34 crc kubenswrapper[4806]: I1125 15:05:34.646945 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-tnrfw\" (UniqueName: \"kubernetes.io/projected/2541bf92-f78f-4d3a-8000-1a8ca4e90593-kube-api-access-tnrfw\") pod \"redhat-operators-2x6cn\" (UID: \"2541bf92-f78f-4d3a-8000-1a8ca4e90593\") " pod="openshift-marketplace/redhat-operators-2x6cn" Nov 25 15:05:34 crc kubenswrapper[4806]: I1125 15:05:34.939446 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2x6cn" Nov 25 15:05:35 crc kubenswrapper[4806]: I1125 15:05:35.404758 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2x6cn"] Nov 25 15:05:35 crc kubenswrapper[4806]: I1125 15:05:35.613348 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2x6cn" event={"ID":"2541bf92-f78f-4d3a-8000-1a8ca4e90593","Type":"ContainerStarted","Data":"ffe1981ab803b691c8a3d019dce4fe929a19d51bb0210faacbda697ff34e3aa4"} Nov 25 15:05:36 crc kubenswrapper[4806]: I1125 15:05:36.621575 4806 generic.go:334] "Generic (PLEG): container finished" podID="eea848bf-e720-4a8e-bcc4-c3ff44ba44c0" containerID="52107f3b7651873af1dfbedfab071ca91a0150712b75d3e71dbb507c19a956f3" exitCode=0 Nov 25 15:05:36 crc kubenswrapper[4806]: I1125 15:05:36.621664 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210clkf2" event={"ID":"eea848bf-e720-4a8e-bcc4-c3ff44ba44c0","Type":"ContainerDied","Data":"52107f3b7651873af1dfbedfab071ca91a0150712b75d3e71dbb507c19a956f3"} Nov 25 15:05:36 crc kubenswrapper[4806]: I1125 15:05:36.627409 4806 generic.go:334] "Generic (PLEG): container finished" podID="2541bf92-f78f-4d3a-8000-1a8ca4e90593" containerID="6fb737d181672bc079b9c1e35efad99fa6efdca1b59491891cb8f91bb556f00b" exitCode=0 Nov 25 15:05:36 crc kubenswrapper[4806]: I1125 15:05:36.627468 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2x6cn" event={"ID":"2541bf92-f78f-4d3a-8000-1a8ca4e90593","Type":"ContainerDied","Data":"6fb737d181672bc079b9c1e35efad99fa6efdca1b59491891cb8f91bb556f00b"} Nov 25 15:05:37 crc kubenswrapper[4806]: I1125 15:05:37.636884 4806 generic.go:334] "Generic (PLEG): container finished" podID="eea848bf-e720-4a8e-bcc4-c3ff44ba44c0" containerID="1744ecbc0f52a7e1bed2577ecff9470d20a4e1fb1f16410eb410372183dcd7de" exitCode=0 Nov 25 15:05:37 crc kubenswrapper[4806]: I1125 15:05:37.637517 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210clkf2" event={"ID":"eea848bf-e720-4a8e-bcc4-c3ff44ba44c0","Type":"ContainerDied","Data":"1744ecbc0f52a7e1bed2577ecff9470d20a4e1fb1f16410eb410372183dcd7de"} Nov 25 15:05:37 crc kubenswrapper[4806]: I1125 15:05:37.640836 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2x6cn" event={"ID":"2541bf92-f78f-4d3a-8000-1a8ca4e90593","Type":"ContainerStarted","Data":"f455c3736eebd159669204b591eaa14b70c2a03a6f02ea59c47bd117d79eb020"} Nov 25 15:05:38 crc kubenswrapper[4806]: I1125 15:05:38.662430 4806 generic.go:334] "Generic (PLEG): container finished" podID="2541bf92-f78f-4d3a-8000-1a8ca4e90593" containerID="f455c3736eebd159669204b591eaa14b70c2a03a6f02ea59c47bd117d79eb020" exitCode=0 Nov 25 15:05:38 crc kubenswrapper[4806]: I1125 15:05:38.662652 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2x6cn" 
event={"ID":"2541bf92-f78f-4d3a-8000-1a8ca4e90593","Type":"ContainerDied","Data":"f455c3736eebd159669204b591eaa14b70c2a03a6f02ea59c47bd117d79eb020"} Nov 25 15:05:39 crc kubenswrapper[4806]: I1125 15:05:39.040269 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210clkf2" Nov 25 15:05:39 crc kubenswrapper[4806]: I1125 15:05:39.183040 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jms5v\" (UniqueName: \"kubernetes.io/projected/eea848bf-e720-4a8e-bcc4-c3ff44ba44c0-kube-api-access-jms5v\") pod \"eea848bf-e720-4a8e-bcc4-c3ff44ba44c0\" (UID: \"eea848bf-e720-4a8e-bcc4-c3ff44ba44c0\") " Nov 25 15:05:39 crc kubenswrapper[4806]: I1125 15:05:39.183139 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eea848bf-e720-4a8e-bcc4-c3ff44ba44c0-bundle\") pod \"eea848bf-e720-4a8e-bcc4-c3ff44ba44c0\" (UID: \"eea848bf-e720-4a8e-bcc4-c3ff44ba44c0\") " Nov 25 15:05:39 crc kubenswrapper[4806]: I1125 15:05:39.183300 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eea848bf-e720-4a8e-bcc4-c3ff44ba44c0-util\") pod \"eea848bf-e720-4a8e-bcc4-c3ff44ba44c0\" (UID: \"eea848bf-e720-4a8e-bcc4-c3ff44ba44c0\") " Nov 25 15:05:39 crc kubenswrapper[4806]: I1125 15:05:39.186308 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eea848bf-e720-4a8e-bcc4-c3ff44ba44c0-bundle" (OuterVolumeSpecName: "bundle") pod "eea848bf-e720-4a8e-bcc4-c3ff44ba44c0" (UID: "eea848bf-e720-4a8e-bcc4-c3ff44ba44c0"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:05:39 crc kubenswrapper[4806]: I1125 15:05:39.191581 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eea848bf-e720-4a8e-bcc4-c3ff44ba44c0-kube-api-access-jms5v" (OuterVolumeSpecName: "kube-api-access-jms5v") pod "eea848bf-e720-4a8e-bcc4-c3ff44ba44c0" (UID: "eea848bf-e720-4a8e-bcc4-c3ff44ba44c0"). InnerVolumeSpecName "kube-api-access-jms5v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:05:39 crc kubenswrapper[4806]: I1125 15:05:39.194956 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eea848bf-e720-4a8e-bcc4-c3ff44ba44c0-util" (OuterVolumeSpecName: "util") pod "eea848bf-e720-4a8e-bcc4-c3ff44ba44c0" (UID: "eea848bf-e720-4a8e-bcc4-c3ff44ba44c0"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:05:39 crc kubenswrapper[4806]: I1125 15:05:39.285242 4806 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eea848bf-e720-4a8e-bcc4-c3ff44ba44c0-util\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:39 crc kubenswrapper[4806]: I1125 15:05:39.285349 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jms5v\" (UniqueName: \"kubernetes.io/projected/eea848bf-e720-4a8e-bcc4-c3ff44ba44c0-kube-api-access-jms5v\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:39 crc kubenswrapper[4806]: I1125 15:05:39.285366 4806 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eea848bf-e720-4a8e-bcc4-c3ff44ba44c0-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:39 crc kubenswrapper[4806]: I1125 15:05:39.673339 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2x6cn" event={"ID":"2541bf92-f78f-4d3a-8000-1a8ca4e90593","Type":"ContainerStarted","Data":"8c84b40b1b5f5bb4b794b124d35cc04ec6f7582babf395fb541ecc27727052b6"} Nov 25 15:05:39 crc kubenswrapper[4806]: I1125 15:05:39.677025 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210clkf2" event={"ID":"eea848bf-e720-4a8e-bcc4-c3ff44ba44c0","Type":"ContainerDied","Data":"19b30215411013a3a14ff4de0beecb456fa63ae9f686099e7788429d42df388c"} Nov 25 15:05:39 crc kubenswrapper[4806]: I1125 15:05:39.677067 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19b30215411013a3a14ff4de0beecb456fa63ae9f686099e7788429d42df388c" Nov 25 15:05:39 crc kubenswrapper[4806]: I1125 15:05:39.677147 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210clkf2" Nov 25 15:05:40 crc kubenswrapper[4806]: I1125 15:05:40.086815 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2x6cn" podStartSLOduration=3.490677554 podStartE2EDuration="6.086776901s" podCreationTimestamp="2025-11-25 15:05:34 +0000 UTC" firstStartedPulling="2025-11-25 15:05:36.628692586 +0000 UTC m=+769.280834997" lastFinishedPulling="2025-11-25 15:05:39.224791933 +0000 UTC m=+771.876934344" observedRunningTime="2025-11-25 15:05:39.700616284 +0000 UTC m=+772.352758715" watchObservedRunningTime="2025-11-25 15:05:40.086776901 +0000 UTC m=+772.738919322" Nov 25 15:05:43 crc kubenswrapper[4806]: I1125 15:05:43.340771 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-69wls"] Nov 25 15:05:43 crc kubenswrapper[4806]: I1125 15:05:43.341378 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerName="ovn-controller" containerID="cri-o://df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8" gracePeriod=30 Nov 25 15:05:43 crc kubenswrapper[4806]: I1125 15:05:43.341419 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89" gracePeriod=30 Nov 25 15:05:43 crc kubenswrapper[4806]: I1125 15:05:43.341519 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerName="kube-rbac-proxy-node" containerID="cri-o://97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010" gracePeriod=30 Nov 25 15:05:43 crc kubenswrapper[4806]: I1125 15:05:43.341582 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerName="ovn-acl-logging" containerID="cri-o://72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d" gracePeriod=30 Nov 25 15:05:43 crc kubenswrapper[4806]: I1125 15:05:43.341603 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerName="northd" containerID="cri-o://ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f" gracePeriod=30 Nov 25 15:05:43 crc kubenswrapper[4806]: I1125 15:05:43.341547 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerName="sbdb" containerID="cri-o://cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327" gracePeriod=30 Nov 25 15:05:43 crc kubenswrapper[4806]: I1125 15:05:43.341462 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerName="nbdb" containerID="cri-o://3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58" gracePeriod=30 Nov 25 15:05:43 crc kubenswrapper[4806]: I1125 15:05:43.385182 4806 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerName="ovnkube-controller" containerID="cri-o://ecd3ec59e324990de76ee29bba1040ffd60c6d31c080972f7915e52c9a63770e" gracePeriod=30 Nov 25 15:05:44 crc kubenswrapper[4806]: I1125 15:05:44.940360 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2x6cn" Nov 25 15:05:44 crc kubenswrapper[4806]: I1125 15:05:44.940439 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2x6cn" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.040245 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2x6cn" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.647371 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69wls_0fff40d8-fd9f-49da-953f-89894b4ef3a1/ovnkube-controller/3.log" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.650781 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69wls_0fff40d8-fd9f-49da-953f-89894b4ef3a1/ovn-acl-logging/0.log" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.651543 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69wls_0fff40d8-fd9f-49da-953f-89894b4ef3a1/ovn-controller/0.log" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.652198 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.726158 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mwdqt_8b7ddd20-62b7-4687-9982-83cf1cbac3ab/kube-multus/2.log" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.726658 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mwdqt_8b7ddd20-62b7-4687-9982-83cf1cbac3ab/kube-multus/1.log" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.726711 4806 generic.go:334] "Generic (PLEG): container finished" podID="8b7ddd20-62b7-4687-9982-83cf1cbac3ab" containerID="f102e481dfaccdfce5f39caa4beba0d09e366619cf92b1c1314ed49eea807f37" exitCode=2 Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.726781 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mwdqt" event={"ID":"8b7ddd20-62b7-4687-9982-83cf1cbac3ab","Type":"ContainerDied","Data":"f102e481dfaccdfce5f39caa4beba0d09e366619cf92b1c1314ed49eea807f37"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.726841 4806 scope.go:117] "RemoveContainer" containerID="6a4c6d7aeb19206fd79e28c558467bda58d58c4118d27bb9aeb9de68a55a67a8" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.727591 4806 scope.go:117] "RemoveContainer" containerID="f102e481dfaccdfce5f39caa4beba0d09e366619cf92b1c1314ed49eea807f37" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.732189 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69wls_0fff40d8-fd9f-49da-953f-89894b4ef3a1/ovnkube-controller/3.log" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.734590 4806 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69wls_0fff40d8-fd9f-49da-953f-89894b4ef3a1/ovn-acl-logging/0.log" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.735292 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-69wls_0fff40d8-fd9f-49da-953f-89894b4ef3a1/ovn-controller/0.log" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.735834 4806 generic.go:334] "Generic (PLEG): container finished" podID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerID="ecd3ec59e324990de76ee29bba1040ffd60c6d31c080972f7915e52c9a63770e" exitCode=0 Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.735866 4806 generic.go:334] "Generic (PLEG): container finished" podID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerID="cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327" exitCode=0 Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.735876 4806 generic.go:334] "Generic (PLEG): container finished" podID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerID="3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58" exitCode=0 Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.735887 4806 generic.go:334] "Generic (PLEG): container finished" podID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerID="ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f" exitCode=0 Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.735896 4806 generic.go:334] "Generic (PLEG): container finished" podID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerID="5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89" exitCode=0 Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.735904 4806 generic.go:334] "Generic (PLEG): container finished" podID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerID="97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010" exitCode=0 Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.735913 4806 generic.go:334] "Generic (PLEG): container finished" podID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerID="72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d" exitCode=143 Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.735908 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" event={"ID":"0fff40d8-fd9f-49da-953f-89894b4ef3a1","Type":"ContainerDied","Data":"ecd3ec59e324990de76ee29bba1040ffd60c6d31c080972f7915e52c9a63770e"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.735972 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" event={"ID":"0fff40d8-fd9f-49da-953f-89894b4ef3a1","Type":"ContainerDied","Data":"cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.735985 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.735995 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" event={"ID":"0fff40d8-fd9f-49da-953f-89894b4ef3a1","Type":"ContainerDied","Data":"3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.736115 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" event={"ID":"0fff40d8-fd9f-49da-953f-89894b4ef3a1","Type":"ContainerDied","Data":"ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.736138 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" event={"ID":"0fff40d8-fd9f-49da-953f-89894b4ef3a1","Type":"ContainerDied","Data":"5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.736152 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" event={"ID":"0fff40d8-fd9f-49da-953f-89894b4ef3a1","Type":"ContainerDied","Data":"97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.736164 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ecd3ec59e324990de76ee29bba1040ffd60c6d31c080972f7915e52c9a63770e"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.736180 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ae1b49fe171571509d8fd7d94ba703e20354f20445a4d493b22eb1d6a1649368"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.736188 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.736195 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.736201 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.736207 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.736215 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.736221 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.736227 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.736233 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.736242 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" event={"ID":"0fff40d8-fd9f-49da-953f-89894b4ef3a1","Type":"ContainerDied","Data":"72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.736252 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ecd3ec59e324990de76ee29bba1040ffd60c6d31c080972f7915e52c9a63770e"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.735925 4806 generic.go:334] "Generic (PLEG): container finished" podID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerID="df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8" exitCode=143 Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.736260 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ae1b49fe171571509d8fd7d94ba703e20354f20445a4d493b22eb1d6a1649368"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.736289 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.736300 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.736308 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.736327 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.736334 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.736341 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.736347 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.736354 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.736366 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-69wls" event={"ID":"0fff40d8-fd9f-49da-953f-89894b4ef3a1","Type":"ContainerDied","Data":"df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.736380 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ecd3ec59e324990de76ee29bba1040ffd60c6d31c080972f7915e52c9a63770e"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.736392 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ae1b49fe171571509d8fd7d94ba703e20354f20445a4d493b22eb1d6a1649368"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.736398 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.736405 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.736411 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.736417 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.736423 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.736431 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.736437 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.736444 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.736453 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-69wls" event={"ID":"0fff40d8-fd9f-49da-953f-89894b4ef3a1","Type":"ContainerDied","Data":"7567d270b7844c179392c17fcd71a87791e5604bbb7ea656294cc4e6dcc3d82a"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.736464 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ecd3ec59e324990de76ee29bba1040ffd60c6d31c080972f7915e52c9a63770e"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.736472 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ae1b49fe171571509d8fd7d94ba703e20354f20445a4d493b22eb1d6a1649368"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 
15:05:45.736480 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.736487 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.736495 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.736502 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.736508 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.736515 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.736521 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.736528 4806 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6"} Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.781556 4806 scope.go:117] "RemoveContainer" containerID="ecd3ec59e324990de76ee29bba1040ffd60c6d31c080972f7915e52c9a63770e" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.781891 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-run-openvswitch\") pod \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.781960 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-etc-openvswitch\") pod \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.781994 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-host-run-ovn-kubernetes\") pod \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.782019 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-var-lib-openvswitch\") pod \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\" (UID: 
\"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.782043 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-run-ovn\") pod \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.782074 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0fff40d8-fd9f-49da-953f-89894b4ef3a1-ovnkube-script-lib\") pod \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.782106 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.782133 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-systemd-units\") pod \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.782155 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-run-systemd\") pod \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.782180 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-host-cni-bin\") pod \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.782230 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0fff40d8-fd9f-49da-953f-89894b4ef3a1-ovn-node-metrics-cert\") pod \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.782253 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0fff40d8-fd9f-49da-953f-89894b4ef3a1-ovnkube-config\") pod \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.782274 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-host-kubelet\") pod \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.782304 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-host-slash\") pod 
\"0fff40d8-fd9f-49da-953f-89894b4ef3a1\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.782357 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9lvm\" (UniqueName: \"kubernetes.io/projected/0fff40d8-fd9f-49da-953f-89894b4ef3a1-kube-api-access-r9lvm\") pod \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.782386 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-host-cni-netd\") pod \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.782409 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-host-run-netns\") pod \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.782439 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0fff40d8-fd9f-49da-953f-89894b4ef3a1-env-overrides\") pod \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.782481 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-log-socket\") pod \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.782510 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-node-log\") pod \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\" (UID: \"0fff40d8-fd9f-49da-953f-89894b4ef3a1\") " Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.783528 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "0fff40d8-fd9f-49da-953f-89894b4ef3a1" (UID: "0fff40d8-fd9f-49da-953f-89894b4ef3a1"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.783564 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "0fff40d8-fd9f-49da-953f-89894b4ef3a1" (UID: "0fff40d8-fd9f-49da-953f-89894b4ef3a1"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.783590 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "0fff40d8-fd9f-49da-953f-89894b4ef3a1" (UID: "0fff40d8-fd9f-49da-953f-89894b4ef3a1"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.783614 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "0fff40d8-fd9f-49da-953f-89894b4ef3a1" (UID: "0fff40d8-fd9f-49da-953f-89894b4ef3a1"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.783662 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "0fff40d8-fd9f-49da-953f-89894b4ef3a1" (UID: "0fff40d8-fd9f-49da-953f-89894b4ef3a1"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.784129 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fff40d8-fd9f-49da-953f-89894b4ef3a1-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "0fff40d8-fd9f-49da-953f-89894b4ef3a1" (UID: "0fff40d8-fd9f-49da-953f-89894b4ef3a1"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.784174 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "0fff40d8-fd9f-49da-953f-89894b4ef3a1" (UID: "0fff40d8-fd9f-49da-953f-89894b4ef3a1"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.784201 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "0fff40d8-fd9f-49da-953f-89894b4ef3a1" (UID: "0fff40d8-fd9f-49da-953f-89894b4ef3a1"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.785659 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "0fff40d8-fd9f-49da-953f-89894b4ef3a1" (UID: "0fff40d8-fd9f-49da-953f-89894b4ef3a1"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.785774 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "0fff40d8-fd9f-49da-953f-89894b4ef3a1" (UID: "0fff40d8-fd9f-49da-953f-89894b4ef3a1"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.789502 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "0fff40d8-fd9f-49da-953f-89894b4ef3a1" (UID: "0fff40d8-fd9f-49da-953f-89894b4ef3a1"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.790018 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fff40d8-fd9f-49da-953f-89894b4ef3a1-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "0fff40d8-fd9f-49da-953f-89894b4ef3a1" (UID: "0fff40d8-fd9f-49da-953f-89894b4ef3a1"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.790063 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-node-log" (OuterVolumeSpecName: "node-log") pod "0fff40d8-fd9f-49da-953f-89894b4ef3a1" (UID: "0fff40d8-fd9f-49da-953f-89894b4ef3a1"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.790090 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "0fff40d8-fd9f-49da-953f-89894b4ef3a1" (UID: "0fff40d8-fd9f-49da-953f-89894b4ef3a1"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.790378 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fff40d8-fd9f-49da-953f-89894b4ef3a1-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "0fff40d8-fd9f-49da-953f-89894b4ef3a1" (UID: "0fff40d8-fd9f-49da-953f-89894b4ef3a1"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.790421 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-log-socket" (OuterVolumeSpecName: "log-socket") pod "0fff40d8-fd9f-49da-953f-89894b4ef3a1" (UID: "0fff40d8-fd9f-49da-953f-89894b4ef3a1"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.790461 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-host-slash" (OuterVolumeSpecName: "host-slash") pod "0fff40d8-fd9f-49da-953f-89894b4ef3a1" (UID: "0fff40d8-fd9f-49da-953f-89894b4ef3a1"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.801066 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qpjrq"] Nov 25 15:05:45 crc kubenswrapper[4806]: E1125 15:05:45.801374 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerName="ovnkube-controller" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.801389 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerName="ovnkube-controller" Nov 25 15:05:45 crc kubenswrapper[4806]: E1125 15:05:45.801399 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerName="kubecfg-setup" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.801407 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerName="kubecfg-setup" Nov 25 15:05:45 crc kubenswrapper[4806]: E1125 15:05:45.801420 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerName="ovnkube-controller" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.801429 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerName="ovnkube-controller" Nov 25 15:05:45 crc kubenswrapper[4806]: E1125 15:05:45.801437 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eea848bf-e720-4a8e-bcc4-c3ff44ba44c0" containerName="util" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.801445 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea848bf-e720-4a8e-bcc4-c3ff44ba44c0" containerName="util" Nov 25 15:05:45 crc kubenswrapper[4806]: E1125 15:05:45.801458 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerName="kube-rbac-proxy-node" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.801465 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerName="kube-rbac-proxy-node" Nov 25 15:05:45 crc kubenswrapper[4806]: E1125 15:05:45.801473 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerName="ovn-acl-logging" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.801479 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerName="ovn-acl-logging" Nov 25 15:05:45 crc kubenswrapper[4806]: E1125 15:05:45.801487 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerName="northd" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.801493 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerName="northd" Nov 25 15:05:45 crc kubenswrapper[4806]: E1125 15:05:45.801502 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerName="ovnkube-controller" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.801510 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerName="ovnkube-controller" Nov 25 15:05:45 crc kubenswrapper[4806]: E1125 15:05:45.801520 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" 
containerName="ovn-controller" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.801527 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerName="ovn-controller" Nov 25 15:05:45 crc kubenswrapper[4806]: E1125 15:05:45.801532 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerName="nbdb" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.801538 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerName="nbdb" Nov 25 15:05:45 crc kubenswrapper[4806]: E1125 15:05:45.801547 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eea848bf-e720-4a8e-bcc4-c3ff44ba44c0" containerName="extract" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.801553 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea848bf-e720-4a8e-bcc4-c3ff44ba44c0" containerName="extract" Nov 25 15:05:45 crc kubenswrapper[4806]: E1125 15:05:45.801564 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerName="ovnkube-controller" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.801570 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerName="ovnkube-controller" Nov 25 15:05:45 crc kubenswrapper[4806]: E1125 15:05:45.801577 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerName="sbdb" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.801582 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerName="sbdb" Nov 25 15:05:45 crc kubenswrapper[4806]: E1125 15:05:45.801593 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerName="ovnkube-controller" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.801599 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerName="ovnkube-controller" Nov 25 15:05:45 crc kubenswrapper[4806]: E1125 15:05:45.801608 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eea848bf-e720-4a8e-bcc4-c3ff44ba44c0" containerName="pull" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.801614 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea848bf-e720-4a8e-bcc4-c3ff44ba44c0" containerName="pull" Nov 25 15:05:45 crc kubenswrapper[4806]: E1125 15:05:45.801622 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerName="kube-rbac-proxy-ovn-metrics" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.801631 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerName="kube-rbac-proxy-ovn-metrics" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.801747 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerName="ovn-acl-logging" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.801760 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerName="kube-rbac-proxy-node" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.801770 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" 
containerName="sbdb" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.801779 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="eea848bf-e720-4a8e-bcc4-c3ff44ba44c0" containerName="extract" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.801786 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerName="ovnkube-controller" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.801795 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerName="nbdb" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.801801 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerName="ovnkube-controller" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.801807 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerName="ovnkube-controller" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.801815 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerName="ovnkube-controller" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.801821 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerName="northd" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.801831 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerName="ovn-controller" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.801838 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerName="kube-rbac-proxy-ovn-metrics" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.802004 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" containerName="ovnkube-controller" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.811198 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fff40d8-fd9f-49da-953f-89894b4ef3a1-kube-api-access-r9lvm" (OuterVolumeSpecName: "kube-api-access-r9lvm") pod "0fff40d8-fd9f-49da-953f-89894b4ef3a1" (UID: "0fff40d8-fd9f-49da-953f-89894b4ef3a1"). InnerVolumeSpecName "kube-api-access-r9lvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.811557 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fff40d8-fd9f-49da-953f-89894b4ef3a1-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "0fff40d8-fd9f-49da-953f-89894b4ef3a1" (UID: "0fff40d8-fd9f-49da-953f-89894b4ef3a1"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.842550 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "0fff40d8-fd9f-49da-953f-89894b4ef3a1" (UID: "0fff40d8-fd9f-49da-953f-89894b4ef3a1"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.834820 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.849884 4806 scope.go:117] "RemoveContainer" containerID="ae1b49fe171571509d8fd7d94ba703e20354f20445a4d493b22eb1d6a1649368" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.880980 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2x6cn" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.895278 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dbd029f2-3ca2-42e8-8493-46cee86328bc-log-socket\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.895359 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dbd029f2-3ca2-42e8-8493-46cee86328bc-systemd-units\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.895377 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dbd029f2-3ca2-42e8-8493-46cee86328bc-etc-openvswitch\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.895400 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dbd029f2-3ca2-42e8-8493-46cee86328bc-ovn-node-metrics-cert\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.895428 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dbd029f2-3ca2-42e8-8493-46cee86328bc-host-run-ovn-kubernetes\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.895444 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dbd029f2-3ca2-42e8-8493-46cee86328bc-host-cni-netd\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.895479 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/dbd029f2-3ca2-42e8-8493-46cee86328bc-ovnkube-script-lib\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.895503 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/dbd029f2-3ca2-42e8-8493-46cee86328bc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.895523 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dbd029f2-3ca2-42e8-8493-46cee86328bc-env-overrides\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.895559 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dbd029f2-3ca2-42e8-8493-46cee86328bc-run-openvswitch\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.895611 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnbc5\" (UniqueName: \"kubernetes.io/projected/dbd029f2-3ca2-42e8-8493-46cee86328bc-kube-api-access-lnbc5\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.896239 4806 scope.go:117] "RemoveContainer" containerID="cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.896980 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dbd029f2-3ca2-42e8-8493-46cee86328bc-node-log\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.897019 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dbd029f2-3ca2-42e8-8493-46cee86328bc-host-slash\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.897046 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dbd029f2-3ca2-42e8-8493-46cee86328bc-run-ovn\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.897061 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dbd029f2-3ca2-42e8-8493-46cee86328bc-ovnkube-config\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.897088 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dbd029f2-3ca2-42e8-8493-46cee86328bc-var-lib-openvswitch\") pod \"ovnkube-node-qpjrq\" (UID: 
\"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.897109 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dbd029f2-3ca2-42e8-8493-46cee86328bc-host-kubelet\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.897164 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dbd029f2-3ca2-42e8-8493-46cee86328bc-host-cni-bin\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.897194 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dbd029f2-3ca2-42e8-8493-46cee86328bc-host-run-netns\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.898175 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dbd029f2-3ca2-42e8-8493-46cee86328bc-run-systemd\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.915639 4806 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0fff40d8-fd9f-49da-953f-89894b4ef3a1-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.915696 4806 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.915715 4806 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0fff40d8-fd9f-49da-953f-89894b4ef3a1-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.915731 4806 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-host-slash\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.915744 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9lvm\" (UniqueName: \"kubernetes.io/projected/0fff40d8-fd9f-49da-953f-89894b4ef3a1-kube-api-access-r9lvm\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.915757 4806 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.915769 4806 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-host-cni-netd\") on node 
\"crc\" DevicePath \"\"" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.915781 4806 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0fff40d8-fd9f-49da-953f-89894b4ef3a1-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.915796 4806 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-log-socket\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.915808 4806 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-node-log\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.915823 4806 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.915836 4806 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.915854 4806 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.915868 4806 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.915879 4806 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0fff40d8-fd9f-49da-953f-89894b4ef3a1-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.915891 4806 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.915906 4806 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.915921 4806 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.915933 4806 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.915943 4806 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0fff40d8-fd9f-49da-953f-89894b4ef3a1-host-cni-bin\") on node \"crc\" DevicePath \"\"" 
Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.926516 4806 scope.go:117] "RemoveContainer" containerID="3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58" Nov 25 15:05:45 crc kubenswrapper[4806]: I1125 15:05:45.997562 4806 scope.go:117] "RemoveContainer" containerID="ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.016956 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dbd029f2-3ca2-42e8-8493-46cee86328bc-env-overrides\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.017028 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dbd029f2-3ca2-42e8-8493-46cee86328bc-run-openvswitch\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.017055 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnbc5\" (UniqueName: \"kubernetes.io/projected/dbd029f2-3ca2-42e8-8493-46cee86328bc-kube-api-access-lnbc5\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.017077 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dbd029f2-3ca2-42e8-8493-46cee86328bc-node-log\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.017098 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dbd029f2-3ca2-42e8-8493-46cee86328bc-host-slash\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.017119 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dbd029f2-3ca2-42e8-8493-46cee86328bc-run-ovn\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.017142 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dbd029f2-3ca2-42e8-8493-46cee86328bc-ovnkube-config\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.017165 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dbd029f2-3ca2-42e8-8493-46cee86328bc-var-lib-openvswitch\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.017180 4806 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dbd029f2-3ca2-42e8-8493-46cee86328bc-host-kubelet\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.017193 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dbd029f2-3ca2-42e8-8493-46cee86328bc-host-cni-bin\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.017213 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dbd029f2-3ca2-42e8-8493-46cee86328bc-host-run-netns\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.017233 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dbd029f2-3ca2-42e8-8493-46cee86328bc-run-systemd\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.017261 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dbd029f2-3ca2-42e8-8493-46cee86328bc-log-socket\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.017278 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dbd029f2-3ca2-42e8-8493-46cee86328bc-systemd-units\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.017293 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dbd029f2-3ca2-42e8-8493-46cee86328bc-etc-openvswitch\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.017336 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dbd029f2-3ca2-42e8-8493-46cee86328bc-ovn-node-metrics-cert\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.017365 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dbd029f2-3ca2-42e8-8493-46cee86328bc-host-run-ovn-kubernetes\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.017381 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/dbd029f2-3ca2-42e8-8493-46cee86328bc-host-cni-netd\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.017406 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/dbd029f2-3ca2-42e8-8493-46cee86328bc-ovnkube-script-lib\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.017424 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dbd029f2-3ca2-42e8-8493-46cee86328bc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.017498 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dbd029f2-3ca2-42e8-8493-46cee86328bc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.017543 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dbd029f2-3ca2-42e8-8493-46cee86328bc-host-run-netns\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.017565 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/dbd029f2-3ca2-42e8-8493-46cee86328bc-run-systemd\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.017585 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/dbd029f2-3ca2-42e8-8493-46cee86328bc-log-socket\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.017603 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/dbd029f2-3ca2-42e8-8493-46cee86328bc-systemd-units\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.017622 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dbd029f2-3ca2-42e8-8493-46cee86328bc-etc-openvswitch\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.017640 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/dbd029f2-3ca2-42e8-8493-46cee86328bc-host-cni-bin\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.018404 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dbd029f2-3ca2-42e8-8493-46cee86328bc-host-run-ovn-kubernetes\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.018520 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dbd029f2-3ca2-42e8-8493-46cee86328bc-host-cni-netd\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.018593 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/dbd029f2-3ca2-42e8-8493-46cee86328bc-env-overrides\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.018678 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/dbd029f2-3ca2-42e8-8493-46cee86328bc-host-slash\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.018726 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dbd029f2-3ca2-42e8-8493-46cee86328bc-run-openvswitch\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.019170 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/dbd029f2-3ca2-42e8-8493-46cee86328bc-node-log\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.019199 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/dbd029f2-3ca2-42e8-8493-46cee86328bc-ovnkube-script-lib\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.019261 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/dbd029f2-3ca2-42e8-8493-46cee86328bc-run-ovn\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.019298 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/dbd029f2-3ca2-42e8-8493-46cee86328bc-var-lib-openvswitch\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.019346 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/dbd029f2-3ca2-42e8-8493-46cee86328bc-host-kubelet\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.019908 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/dbd029f2-3ca2-42e8-8493-46cee86328bc-ovnkube-config\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.036084 4806 scope.go:117] "RemoveContainer" containerID="5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.039119 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/dbd029f2-3ca2-42e8-8493-46cee86328bc-ovn-node-metrics-cert\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.075988 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnbc5\" (UniqueName: \"kubernetes.io/projected/dbd029f2-3ca2-42e8-8493-46cee86328bc-kube-api-access-lnbc5\") pod \"ovnkube-node-qpjrq\" (UID: \"dbd029f2-3ca2-42e8-8493-46cee86328bc\") " pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.077634 4806 scope.go:117] "RemoveContainer" containerID="97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.111688 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-69wls"] Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.111950 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-69wls"] Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.119677 4806 scope.go:117] "RemoveContainer" containerID="72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.160585 4806 scope.go:117] "RemoveContainer" containerID="df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.187231 4806 scope.go:117] "RemoveContainer" containerID="99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.212093 4806 scope.go:117] "RemoveContainer" containerID="ecd3ec59e324990de76ee29bba1040ffd60c6d31c080972f7915e52c9a63770e" Nov 25 15:05:46 crc kubenswrapper[4806]: E1125 15:05:46.212976 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecd3ec59e324990de76ee29bba1040ffd60c6d31c080972f7915e52c9a63770e\": container with ID starting with ecd3ec59e324990de76ee29bba1040ffd60c6d31c080972f7915e52c9a63770e not found: ID does not exist" containerID="ecd3ec59e324990de76ee29bba1040ffd60c6d31c080972f7915e52c9a63770e" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.213046 4806 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecd3ec59e324990de76ee29bba1040ffd60c6d31c080972f7915e52c9a63770e"} err="failed to get container status \"ecd3ec59e324990de76ee29bba1040ffd60c6d31c080972f7915e52c9a63770e\": rpc error: code = NotFound desc = could not find container \"ecd3ec59e324990de76ee29bba1040ffd60c6d31c080972f7915e52c9a63770e\": container with ID starting with ecd3ec59e324990de76ee29bba1040ffd60c6d31c080972f7915e52c9a63770e not found: ID does not exist" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.213106 4806 scope.go:117] "RemoveContainer" containerID="ae1b49fe171571509d8fd7d94ba703e20354f20445a4d493b22eb1d6a1649368" Nov 25 15:05:46 crc kubenswrapper[4806]: E1125 15:05:46.213587 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae1b49fe171571509d8fd7d94ba703e20354f20445a4d493b22eb1d6a1649368\": container with ID starting with ae1b49fe171571509d8fd7d94ba703e20354f20445a4d493b22eb1d6a1649368 not found: ID does not exist" containerID="ae1b49fe171571509d8fd7d94ba703e20354f20445a4d493b22eb1d6a1649368" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.213640 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae1b49fe171571509d8fd7d94ba703e20354f20445a4d493b22eb1d6a1649368"} err="failed to get container status \"ae1b49fe171571509d8fd7d94ba703e20354f20445a4d493b22eb1d6a1649368\": rpc error: code = NotFound desc = could not find container \"ae1b49fe171571509d8fd7d94ba703e20354f20445a4d493b22eb1d6a1649368\": container with ID starting with ae1b49fe171571509d8fd7d94ba703e20354f20445a4d493b22eb1d6a1649368 not found: ID does not exist" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.213678 4806 scope.go:117] "RemoveContainer" containerID="cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327" Nov 25 15:05:46 crc kubenswrapper[4806]: E1125 15:05:46.214064 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327\": container with ID starting with cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327 not found: ID does not exist" containerID="cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.214125 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327"} err="failed to get container status \"cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327\": rpc error: code = NotFound desc = could not find container \"cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327\": container with ID starting with cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327 not found: ID does not exist" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.214164 4806 scope.go:117] "RemoveContainer" containerID="3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.214464 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:46 crc kubenswrapper[4806]: E1125 15:05:46.214636 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58\": container with ID starting with 3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58 not found: ID does not exist" containerID="3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.214668 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58"} err="failed to get container status \"3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58\": rpc error: code = NotFound desc = could not find container \"3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58\": container with ID starting with 3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58 not found: ID does not exist" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.214686 4806 scope.go:117] "RemoveContainer" containerID="ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f" Nov 25 15:05:46 crc kubenswrapper[4806]: E1125 15:05:46.215013 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f\": container with ID starting with ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f not found: ID does not exist" containerID="ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.215040 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f"} err="failed to get container status \"ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f\": rpc error: code = NotFound desc = could not find container \"ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f\": container with ID starting with ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f not found: ID does not exist" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.215061 4806 scope.go:117] "RemoveContainer" containerID="5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89" Nov 25 15:05:46 crc kubenswrapper[4806]: E1125 15:05:46.218286 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89\": container with ID starting with 5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89 not found: ID does not exist" containerID="5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.218381 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89"} err="failed to get container status \"5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89\": rpc error: code = NotFound desc = could not find container \"5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89\": container with ID starting with 
5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89 not found: ID does not exist" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.218429 4806 scope.go:117] "RemoveContainer" containerID="97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010" Nov 25 15:05:46 crc kubenswrapper[4806]: E1125 15:05:46.221834 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010\": container with ID starting with 97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010 not found: ID does not exist" containerID="97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.221910 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010"} err="failed to get container status \"97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010\": rpc error: code = NotFound desc = could not find container \"97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010\": container with ID starting with 97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010 not found: ID does not exist" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.221962 4806 scope.go:117] "RemoveContainer" containerID="72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d" Nov 25 15:05:46 crc kubenswrapper[4806]: E1125 15:05:46.222445 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d\": container with ID starting with 72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d not found: ID does not exist" containerID="72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.222512 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d"} err="failed to get container status \"72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d\": rpc error: code = NotFound desc = could not find container \"72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d\": container with ID starting with 72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d not found: ID does not exist" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.222564 4806 scope.go:117] "RemoveContainer" containerID="df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8" Nov 25 15:05:46 crc kubenswrapper[4806]: E1125 15:05:46.225258 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8\": container with ID starting with df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8 not found: ID does not exist" containerID="df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.225302 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8"} err="failed to get container status \"df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8\": rpc 
error: code = NotFound desc = could not find container \"df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8\": container with ID starting with df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8 not found: ID does not exist"
Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.225370 4806 scope.go:117] "RemoveContainer" containerID="99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6"
Nov 25 15:05:46 crc kubenswrapper[4806]: E1125 15:05:46.225802 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\": container with ID starting with 99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6 not found: ID does not exist" containerID="99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6"
Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.225858 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6"} err="failed to get container status \"99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\": rpc error: code = NotFound desc = could not find container \"99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\": container with ID starting with 99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6 not found: ID does not exist"
Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.225897 4806 scope.go:117] "RemoveContainer" containerID="ecd3ec59e324990de76ee29bba1040ffd60c6d31c080972f7915e52c9a63770e"
Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.226265 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecd3ec59e324990de76ee29bba1040ffd60c6d31c080972f7915e52c9a63770e"} err="failed to get container status \"ecd3ec59e324990de76ee29bba1040ffd60c6d31c080972f7915e52c9a63770e\": rpc error: code = NotFound desc = could not find container \"ecd3ec59e324990de76ee29bba1040ffd60c6d31c080972f7915e52c9a63770e\": container with ID starting with ecd3ec59e324990de76ee29bba1040ffd60c6d31c080972f7915e52c9a63770e not found: ID does not exist"
Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.226291 4806 scope.go:117] "RemoveContainer" containerID="ae1b49fe171571509d8fd7d94ba703e20354f20445a4d493b22eb1d6a1649368"
Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.230133 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae1b49fe171571509d8fd7d94ba703e20354f20445a4d493b22eb1d6a1649368"} err="failed to get container status \"ae1b49fe171571509d8fd7d94ba703e20354f20445a4d493b22eb1d6a1649368\": rpc error: code = NotFound desc = could not find container \"ae1b49fe171571509d8fd7d94ba703e20354f20445a4d493b22eb1d6a1649368\": container with ID starting with ae1b49fe171571509d8fd7d94ba703e20354f20445a4d493b22eb1d6a1649368 not found: ID does not exist"
Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.230160 4806 scope.go:117] "RemoveContainer" containerID="cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327"
Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.230553 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327"} err="failed to get container status \"cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327\": rpc error: code = NotFound desc = could not find container \"cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327\": container with ID starting with cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327 not found: ID does not exist"
Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.230581 4806 scope.go:117] "RemoveContainer" containerID="3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58"
Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.230932 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58"} err="failed to get container status \"3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58\": rpc error: code = NotFound desc = could not find container \"3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58\": container with ID starting with 3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58 not found: ID does not exist"
Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.230957 4806 scope.go:117] "RemoveContainer" containerID="ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f"
Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.231216 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f"} err="failed to get container status \"ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f\": rpc error: code = NotFound desc = could not find container \"ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f\": container with ID starting with ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f not found: ID does not exist"
Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.231254 4806 scope.go:117] "RemoveContainer" containerID="5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89"
Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.231494 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89"} err="failed to get container status \"5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89\": rpc error: code = NotFound desc = could not find container \"5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89\": container with ID starting with 5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89 not found: ID does not exist"
Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.231521 4806 scope.go:117] "RemoveContainer" containerID="97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010"
Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.231722 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010"} err="failed to get container status \"97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010\": rpc error: code = NotFound desc = could not find container \"97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010\": container with ID starting with 97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010 not found: ID does not exist"
Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.231745 4806 scope.go:117] "RemoveContainer" containerID="72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d"
Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.232158 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d"} err="failed to get container status \"72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d\": rpc error: code = NotFound desc = could not find container \"72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d\": container with ID starting with 72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d not found: ID does not exist"
Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.232182 4806 scope.go:117] "RemoveContainer" containerID="df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8"
Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.232498 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8"} err="failed to get container status \"df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8\": rpc error: code = NotFound desc = could not find container \"df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8\": container with ID starting with df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8 not found: ID does not exist"
Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.232526 4806 scope.go:117] "RemoveContainer" containerID="99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6"
Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.232711 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6"} err="failed to get container status \"99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\": rpc error: code = NotFound desc = could not find container \"99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\": container with ID starting with 99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6 not found: ID does not exist"
Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.232734 4806 scope.go:117] "RemoveContainer" containerID="ecd3ec59e324990de76ee29bba1040ffd60c6d31c080972f7915e52c9a63770e"
Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.232903 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecd3ec59e324990de76ee29bba1040ffd60c6d31c080972f7915e52c9a63770e"} err="failed to get container status \"ecd3ec59e324990de76ee29bba1040ffd60c6d31c080972f7915e52c9a63770e\": rpc error: code = NotFound desc = could not find container \"ecd3ec59e324990de76ee29bba1040ffd60c6d31c080972f7915e52c9a63770e\": container with ID starting with ecd3ec59e324990de76ee29bba1040ffd60c6d31c080972f7915e52c9a63770e not found: ID does not exist"
Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.232922 4806 scope.go:117] "RemoveContainer" containerID="ae1b49fe171571509d8fd7d94ba703e20354f20445a4d493b22eb1d6a1649368"
Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.233153 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae1b49fe171571509d8fd7d94ba703e20354f20445a4d493b22eb1d6a1649368"} err="failed to get container status \"ae1b49fe171571509d8fd7d94ba703e20354f20445a4d493b22eb1d6a1649368\": rpc error: code = NotFound desc = could not find container \"ae1b49fe171571509d8fd7d94ba703e20354f20445a4d493b22eb1d6a1649368\": container with ID starting with ae1b49fe171571509d8fd7d94ba703e20354f20445a4d493b22eb1d6a1649368 not found: ID does not exist"
Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.233177 4806 scope.go:117] "RemoveContainer" containerID="cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327"
Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.233415 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327"} err="failed to get container status \"cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327\": rpc error: code = NotFound desc = could not find container \"cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327\": container with ID starting with cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327 not found: ID does not exist"
Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.233435 4806 scope.go:117] "RemoveContainer" containerID="3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58"
Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.233601 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58"} err="failed to get container status \"3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58\": rpc error: code = NotFound desc = could not find container \"3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58\": container with ID starting with 3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58 not found: ID does not exist"
Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.233621 4806 scope.go:117] "RemoveContainer" containerID="ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f"
Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.233782 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f"} err="failed to get container status \"ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f\": rpc error: code = NotFound desc = could not find container \"ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f\": container with ID starting with ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f not found: ID does not exist"
Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.233802 4806 scope.go:117] "RemoveContainer" containerID="5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89"
Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.233958 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89"} err="failed to get container status \"5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89\": rpc error: code = NotFound desc = could not find container \"5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89\": container with ID starting with 5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89 not found: ID does not exist"
Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.233978 4806 scope.go:117] "RemoveContainer" containerID="97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010"
Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.234130 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010"} err="failed to get container status \"97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010\": rpc error: code = NotFound desc = could not find container \"97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010\": container with ID starting with 97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010 not found: ID does not exist"
containerID={"Type":"cri-o","ID":"97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010"} err="failed to get container status \"97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010\": rpc error: code = NotFound desc = could not find container \"97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010\": container with ID starting with 97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010 not found: ID does not exist" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.234149 4806 scope.go:117] "RemoveContainer" containerID="72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.234305 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d"} err="failed to get container status \"72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d\": rpc error: code = NotFound desc = could not find container \"72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d\": container with ID starting with 72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d not found: ID does not exist" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.234340 4806 scope.go:117] "RemoveContainer" containerID="df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.238777 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8"} err="failed to get container status \"df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8\": rpc error: code = NotFound desc = could not find container \"df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8\": container with ID starting with df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8 not found: ID does not exist" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.238809 4806 scope.go:117] "RemoveContainer" containerID="99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.242515 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6"} err="failed to get container status \"99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\": rpc error: code = NotFound desc = could not find container \"99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\": container with ID starting with 99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6 not found: ID does not exist" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.242581 4806 scope.go:117] "RemoveContainer" containerID="ecd3ec59e324990de76ee29bba1040ffd60c6d31c080972f7915e52c9a63770e" Nov 25 15:05:46 crc kubenswrapper[4806]: W1125 15:05:46.244724 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbd029f2_3ca2_42e8_8493_46cee86328bc.slice/crio-b2b06a3b5eae932bcf66d57436de424832cd8f141ed511baa6ad47760021d4e6 WatchSource:0}: Error finding container b2b06a3b5eae932bcf66d57436de424832cd8f141ed511baa6ad47760021d4e6: Status 404 returned error can't find the container with id b2b06a3b5eae932bcf66d57436de424832cd8f141ed511baa6ad47760021d4e6 Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 
15:05:46.245307 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecd3ec59e324990de76ee29bba1040ffd60c6d31c080972f7915e52c9a63770e"} err="failed to get container status \"ecd3ec59e324990de76ee29bba1040ffd60c6d31c080972f7915e52c9a63770e\": rpc error: code = NotFound desc = could not find container \"ecd3ec59e324990de76ee29bba1040ffd60c6d31c080972f7915e52c9a63770e\": container with ID starting with ecd3ec59e324990de76ee29bba1040ffd60c6d31c080972f7915e52c9a63770e not found: ID does not exist" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.245428 4806 scope.go:117] "RemoveContainer" containerID="ae1b49fe171571509d8fd7d94ba703e20354f20445a4d493b22eb1d6a1649368" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.249670 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae1b49fe171571509d8fd7d94ba703e20354f20445a4d493b22eb1d6a1649368"} err="failed to get container status \"ae1b49fe171571509d8fd7d94ba703e20354f20445a4d493b22eb1d6a1649368\": rpc error: code = NotFound desc = could not find container \"ae1b49fe171571509d8fd7d94ba703e20354f20445a4d493b22eb1d6a1649368\": container with ID starting with ae1b49fe171571509d8fd7d94ba703e20354f20445a4d493b22eb1d6a1649368 not found: ID does not exist" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.249722 4806 scope.go:117] "RemoveContainer" containerID="cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.255675 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327"} err="failed to get container status \"cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327\": rpc error: code = NotFound desc = could not find container \"cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327\": container with ID starting with cba2be3e26d5cb0dc52245ef75ebd2cf9772efdb362afc704fc5856a794f3327 not found: ID does not exist" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.255741 4806 scope.go:117] "RemoveContainer" containerID="3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.259249 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58"} err="failed to get container status \"3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58\": rpc error: code = NotFound desc = could not find container \"3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58\": container with ID starting with 3beb94be42f675759c0006d245ad0206dee42f2769fb0acba67309a95cdddb58 not found: ID does not exist" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.259290 4806 scope.go:117] "RemoveContainer" containerID="ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.262141 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f"} err="failed to get container status \"ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f\": rpc error: code = NotFound desc = could not find container \"ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f\": container with ID starting with 
ad12879dbf73d93d2bf32218087b52911a0aaae98d4f46fb0324eefe8129021f not found: ID does not exist" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.262194 4806 scope.go:117] "RemoveContainer" containerID="5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.263935 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89"} err="failed to get container status \"5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89\": rpc error: code = NotFound desc = could not find container \"5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89\": container with ID starting with 5ca689f896f66b4e01921f300896b5a49ae882b2fa281ac21c8737d288f1bb89 not found: ID does not exist" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.263996 4806 scope.go:117] "RemoveContainer" containerID="97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.264289 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010"} err="failed to get container status \"97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010\": rpc error: code = NotFound desc = could not find container \"97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010\": container with ID starting with 97ebd13ae5624605ad79d3644e89ddd5e10d6e63d1e9e7868f41dcf75356f010 not found: ID does not exist" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.264345 4806 scope.go:117] "RemoveContainer" containerID="72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.265974 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d"} err="failed to get container status \"72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d\": rpc error: code = NotFound desc = could not find container \"72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d\": container with ID starting with 72be47cbf1453a785f78bdcf9e1d08377dd6a1eb7ddcf59e0247f18f3a9e418d not found: ID does not exist" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.266001 4806 scope.go:117] "RemoveContainer" containerID="df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.266207 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8"} err="failed to get container status \"df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8\": rpc error: code = NotFound desc = could not find container \"df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8\": container with ID starting with df6e8810d779aa2db9bc490cf4b29894e436416a2f50495bd90fc94b3a1223a8 not found: ID does not exist" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.266226 4806 scope.go:117] "RemoveContainer" containerID="99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.266499 4806 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6"} err="failed to get container status \"99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\": rpc error: code = NotFound desc = could not find container \"99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6\": container with ID starting with 99fbee630e6774d7a656603b36c2977a0e6903b8323ee58babc8ce957002a9e6 not found: ID does not exist" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.745194 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-mwdqt_8b7ddd20-62b7-4687-9982-83cf1cbac3ab/kube-multus/2.log" Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.745333 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-mwdqt" event={"ID":"8b7ddd20-62b7-4687-9982-83cf1cbac3ab","Type":"ContainerStarted","Data":"472badfbf2bb00834d0e6c137d6ee9254bef2610d1abd450b2a9474bef543ba9"} Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.747870 4806 generic.go:334] "Generic (PLEG): container finished" podID="dbd029f2-3ca2-42e8-8493-46cee86328bc" containerID="07df83a2eba99e998e8450062edd2064bfb2915a9780cd7ef0882a4bdbe59e19" exitCode=0 Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.747946 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" event={"ID":"dbd029f2-3ca2-42e8-8493-46cee86328bc","Type":"ContainerDied","Data":"07df83a2eba99e998e8450062edd2064bfb2915a9780cd7ef0882a4bdbe59e19"} Nov 25 15:05:46 crc kubenswrapper[4806]: I1125 15:05:46.747993 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" event={"ID":"dbd029f2-3ca2-42e8-8493-46cee86328bc","Type":"ContainerStarted","Data":"b2b06a3b5eae932bcf66d57436de424832cd8f141ed511baa6ad47760021d4e6"} Nov 25 15:05:47 crc kubenswrapper[4806]: I1125 15:05:47.323365 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2x6cn"] Nov 25 15:05:47 crc kubenswrapper[4806]: I1125 15:05:47.759852 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" event={"ID":"dbd029f2-3ca2-42e8-8493-46cee86328bc","Type":"ContainerStarted","Data":"e5d4e05bffe4d6f7ff9d10726638c173b53454a871734a5d5facb73a65c5e299"} Nov 25 15:05:47 crc kubenswrapper[4806]: I1125 15:05:47.760087 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" event={"ID":"dbd029f2-3ca2-42e8-8493-46cee86328bc","Type":"ContainerStarted","Data":"913fa11a0c68f737e26d3a46340c653197f2cc1b88bb9a278d694426f76494ff"} Nov 25 15:05:47 crc kubenswrapper[4806]: I1125 15:05:47.760106 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" event={"ID":"dbd029f2-3ca2-42e8-8493-46cee86328bc","Type":"ContainerStarted","Data":"391a43f8262e69a45461a2ac5334366fe082279b693517895a2ec71c6a660d73"} Nov 25 15:05:47 crc kubenswrapper[4806]: I1125 15:05:47.760096 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2x6cn" podUID="2541bf92-f78f-4d3a-8000-1a8ca4e90593" containerName="registry-server" containerID="cri-o://8c84b40b1b5f5bb4b794b124d35cc04ec6f7582babf395fb541ecc27727052b6" gracePeriod=2 Nov 25 15:05:47 crc kubenswrapper[4806]: I1125 15:05:47.760118 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" 
event={"ID":"dbd029f2-3ca2-42e8-8493-46cee86328bc","Type":"ContainerStarted","Data":"4c1e3c0398151047732b70d135f2901b669dbe3d97dd0a602ebf96bbb0c4423b"} Nov 25 15:05:47 crc kubenswrapper[4806]: I1125 15:05:47.760222 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" event={"ID":"dbd029f2-3ca2-42e8-8493-46cee86328bc","Type":"ContainerStarted","Data":"285992e0ea95a10a7b144e6a2af060c28bedd14a8ef01746e9929f2883358e85"} Nov 25 15:05:48 crc kubenswrapper[4806]: I1125 15:05:48.208833 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fff40d8-fd9f-49da-953f-89894b4ef3a1" path="/var/lib/kubelet/pods/0fff40d8-fd9f-49da-953f-89894b4ef3a1/volumes" Nov 25 15:05:48 crc kubenswrapper[4806]: I1125 15:05:48.220088 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2x6cn" Nov 25 15:05:48 crc kubenswrapper[4806]: I1125 15:05:48.257628 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2541bf92-f78f-4d3a-8000-1a8ca4e90593-catalog-content\") pod \"2541bf92-f78f-4d3a-8000-1a8ca4e90593\" (UID: \"2541bf92-f78f-4d3a-8000-1a8ca4e90593\") " Nov 25 15:05:48 crc kubenswrapper[4806]: I1125 15:05:48.257754 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2541bf92-f78f-4d3a-8000-1a8ca4e90593-utilities\") pod \"2541bf92-f78f-4d3a-8000-1a8ca4e90593\" (UID: \"2541bf92-f78f-4d3a-8000-1a8ca4e90593\") " Nov 25 15:05:48 crc kubenswrapper[4806]: I1125 15:05:48.257810 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnrfw\" (UniqueName: \"kubernetes.io/projected/2541bf92-f78f-4d3a-8000-1a8ca4e90593-kube-api-access-tnrfw\") pod \"2541bf92-f78f-4d3a-8000-1a8ca4e90593\" (UID: \"2541bf92-f78f-4d3a-8000-1a8ca4e90593\") " Nov 25 15:05:48 crc kubenswrapper[4806]: I1125 15:05:48.260430 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2541bf92-f78f-4d3a-8000-1a8ca4e90593-utilities" (OuterVolumeSpecName: "utilities") pod "2541bf92-f78f-4d3a-8000-1a8ca4e90593" (UID: "2541bf92-f78f-4d3a-8000-1a8ca4e90593"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:05:48 crc kubenswrapper[4806]: I1125 15:05:48.266591 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2541bf92-f78f-4d3a-8000-1a8ca4e90593-kube-api-access-tnrfw" (OuterVolumeSpecName: "kube-api-access-tnrfw") pod "2541bf92-f78f-4d3a-8000-1a8ca4e90593" (UID: "2541bf92-f78f-4d3a-8000-1a8ca4e90593"). InnerVolumeSpecName "kube-api-access-tnrfw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:05:48 crc kubenswrapper[4806]: I1125 15:05:48.358937 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2541bf92-f78f-4d3a-8000-1a8ca4e90593-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:48 crc kubenswrapper[4806]: I1125 15:05:48.358984 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnrfw\" (UniqueName: \"kubernetes.io/projected/2541bf92-f78f-4d3a-8000-1a8ca4e90593-kube-api-access-tnrfw\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:48 crc kubenswrapper[4806]: I1125 15:05:48.467490 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2541bf92-f78f-4d3a-8000-1a8ca4e90593-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2541bf92-f78f-4d3a-8000-1a8ca4e90593" (UID: "2541bf92-f78f-4d3a-8000-1a8ca4e90593"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:05:48 crc kubenswrapper[4806]: I1125 15:05:48.564020 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2541bf92-f78f-4d3a-8000-1a8ca4e90593-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:48 crc kubenswrapper[4806]: I1125 15:05:48.775704 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" event={"ID":"dbd029f2-3ca2-42e8-8493-46cee86328bc","Type":"ContainerStarted","Data":"9cd931533e06312b9bef707f2e08a2190dbc77e0138e6f89bc4a670dce4b7dc1"} Nov 25 15:05:48 crc kubenswrapper[4806]: I1125 15:05:48.784282 4806 generic.go:334] "Generic (PLEG): container finished" podID="2541bf92-f78f-4d3a-8000-1a8ca4e90593" containerID="8c84b40b1b5f5bb4b794b124d35cc04ec6f7582babf395fb541ecc27727052b6" exitCode=0 Nov 25 15:05:48 crc kubenswrapper[4806]: I1125 15:05:48.784362 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2x6cn" event={"ID":"2541bf92-f78f-4d3a-8000-1a8ca4e90593","Type":"ContainerDied","Data":"8c84b40b1b5f5bb4b794b124d35cc04ec6f7582babf395fb541ecc27727052b6"} Nov 25 15:05:48 crc kubenswrapper[4806]: I1125 15:05:48.784406 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2x6cn" event={"ID":"2541bf92-f78f-4d3a-8000-1a8ca4e90593","Type":"ContainerDied","Data":"ffe1981ab803b691c8a3d019dce4fe929a19d51bb0210faacbda697ff34e3aa4"} Nov 25 15:05:48 crc kubenswrapper[4806]: I1125 15:05:48.784437 4806 scope.go:117] "RemoveContainer" containerID="8c84b40b1b5f5bb4b794b124d35cc04ec6f7582babf395fb541ecc27727052b6" Nov 25 15:05:48 crc kubenswrapper[4806]: I1125 15:05:48.784608 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2x6cn" Nov 25 15:05:48 crc kubenswrapper[4806]: I1125 15:05:48.818576 4806 scope.go:117] "RemoveContainer" containerID="f455c3736eebd159669204b591eaa14b70c2a03a6f02ea59c47bd117d79eb020" Nov 25 15:05:48 crc kubenswrapper[4806]: I1125 15:05:48.844141 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2x6cn"] Nov 25 15:05:48 crc kubenswrapper[4806]: I1125 15:05:48.844223 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2x6cn"] Nov 25 15:05:48 crc kubenswrapper[4806]: I1125 15:05:48.851059 4806 scope.go:117] "RemoveContainer" containerID="6fb737d181672bc079b9c1e35efad99fa6efdca1b59491891cb8f91bb556f00b" Nov 25 15:05:48 crc kubenswrapper[4806]: I1125 15:05:48.877940 4806 scope.go:117] "RemoveContainer" containerID="8c84b40b1b5f5bb4b794b124d35cc04ec6f7582babf395fb541ecc27727052b6" Nov 25 15:05:48 crc kubenswrapper[4806]: E1125 15:05:48.878934 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c84b40b1b5f5bb4b794b124d35cc04ec6f7582babf395fb541ecc27727052b6\": container with ID starting with 8c84b40b1b5f5bb4b794b124d35cc04ec6f7582babf395fb541ecc27727052b6 not found: ID does not exist" containerID="8c84b40b1b5f5bb4b794b124d35cc04ec6f7582babf395fb541ecc27727052b6" Nov 25 15:05:48 crc kubenswrapper[4806]: I1125 15:05:48.879031 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c84b40b1b5f5bb4b794b124d35cc04ec6f7582babf395fb541ecc27727052b6"} err="failed to get container status \"8c84b40b1b5f5bb4b794b124d35cc04ec6f7582babf395fb541ecc27727052b6\": rpc error: code = NotFound desc = could not find container \"8c84b40b1b5f5bb4b794b124d35cc04ec6f7582babf395fb541ecc27727052b6\": container with ID starting with 8c84b40b1b5f5bb4b794b124d35cc04ec6f7582babf395fb541ecc27727052b6 not found: ID does not exist" Nov 25 15:05:48 crc kubenswrapper[4806]: I1125 15:05:48.879325 4806 scope.go:117] "RemoveContainer" containerID="f455c3736eebd159669204b591eaa14b70c2a03a6f02ea59c47bd117d79eb020" Nov 25 15:05:48 crc kubenswrapper[4806]: E1125 15:05:48.879910 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f455c3736eebd159669204b591eaa14b70c2a03a6f02ea59c47bd117d79eb020\": container with ID starting with f455c3736eebd159669204b591eaa14b70c2a03a6f02ea59c47bd117d79eb020 not found: ID does not exist" containerID="f455c3736eebd159669204b591eaa14b70c2a03a6f02ea59c47bd117d79eb020" Nov 25 15:05:48 crc kubenswrapper[4806]: I1125 15:05:48.879949 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f455c3736eebd159669204b591eaa14b70c2a03a6f02ea59c47bd117d79eb020"} err="failed to get container status \"f455c3736eebd159669204b591eaa14b70c2a03a6f02ea59c47bd117d79eb020\": rpc error: code = NotFound desc = could not find container \"f455c3736eebd159669204b591eaa14b70c2a03a6f02ea59c47bd117d79eb020\": container with ID starting with f455c3736eebd159669204b591eaa14b70c2a03a6f02ea59c47bd117d79eb020 not found: ID does not exist" Nov 25 15:05:48 crc kubenswrapper[4806]: I1125 15:05:48.879966 4806 scope.go:117] "RemoveContainer" containerID="6fb737d181672bc079b9c1e35efad99fa6efdca1b59491891cb8f91bb556f00b" Nov 25 15:05:48 crc kubenswrapper[4806]: E1125 15:05:48.880308 4806 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"6fb737d181672bc079b9c1e35efad99fa6efdca1b59491891cb8f91bb556f00b\": container with ID starting with 6fb737d181672bc079b9c1e35efad99fa6efdca1b59491891cb8f91bb556f00b not found: ID does not exist" containerID="6fb737d181672bc079b9c1e35efad99fa6efdca1b59491891cb8f91bb556f00b" Nov 25 15:05:48 crc kubenswrapper[4806]: I1125 15:05:48.880419 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fb737d181672bc079b9c1e35efad99fa6efdca1b59491891cb8f91bb556f00b"} err="failed to get container status \"6fb737d181672bc079b9c1e35efad99fa6efdca1b59491891cb8f91bb556f00b\": rpc error: code = NotFound desc = could not find container \"6fb737d181672bc079b9c1e35efad99fa6efdca1b59491891cb8f91bb556f00b\": container with ID starting with 6fb737d181672bc079b9c1e35efad99fa6efdca1b59491891cb8f91bb556f00b not found: ID does not exist" Nov 25 15:05:49 crc kubenswrapper[4806]: I1125 15:05:49.590123 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-2s9fq"] Nov 25 15:05:49 crc kubenswrapper[4806]: E1125 15:05:49.590899 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2541bf92-f78f-4d3a-8000-1a8ca4e90593" containerName="extract-utilities" Nov 25 15:05:49 crc kubenswrapper[4806]: I1125 15:05:49.590916 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="2541bf92-f78f-4d3a-8000-1a8ca4e90593" containerName="extract-utilities" Nov 25 15:05:49 crc kubenswrapper[4806]: E1125 15:05:49.590923 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2541bf92-f78f-4d3a-8000-1a8ca4e90593" containerName="extract-content" Nov 25 15:05:49 crc kubenswrapper[4806]: I1125 15:05:49.590930 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="2541bf92-f78f-4d3a-8000-1a8ca4e90593" containerName="extract-content" Nov 25 15:05:49 crc kubenswrapper[4806]: E1125 15:05:49.590941 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2541bf92-f78f-4d3a-8000-1a8ca4e90593" containerName="registry-server" Nov 25 15:05:49 crc kubenswrapper[4806]: I1125 15:05:49.590947 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="2541bf92-f78f-4d3a-8000-1a8ca4e90593" containerName="registry-server" Nov 25 15:05:49 crc kubenswrapper[4806]: I1125 15:05:49.591060 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="2541bf92-f78f-4d3a-8000-1a8ca4e90593" containerName="registry-server" Nov 25 15:05:49 crc kubenswrapper[4806]: I1125 15:05:49.591624 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2s9fq" Nov 25 15:05:49 crc kubenswrapper[4806]: I1125 15:05:49.594909 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Nov 25 15:05:49 crc kubenswrapper[4806]: I1125 15:05:49.594964 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-snqq2" Nov 25 15:05:49 crc kubenswrapper[4806]: I1125 15:05:49.599970 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Nov 25 15:05:49 crc kubenswrapper[4806]: I1125 15:05:49.690843 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4xdk\" (UniqueName: \"kubernetes.io/projected/380a6ec0-8579-4cf8-bd81-52186962d2ed-kube-api-access-g4xdk\") pod \"obo-prometheus-operator-668cf9dfbb-2s9fq\" (UID: \"380a6ec0-8579-4cf8-bd81-52186962d2ed\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2s9fq" Nov 25 15:05:49 crc kubenswrapper[4806]: I1125 15:05:49.722292 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-q4k7j"] Nov 25 15:05:49 crc kubenswrapper[4806]: I1125 15:05:49.723203 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-q4k7j" Nov 25 15:05:49 crc kubenswrapper[4806]: W1125 15:05:49.727742 4806 reflector.go:561] object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert": failed to list *v1.Secret: secrets "obo-prometheus-operator-admission-webhook-service-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-operators": no relationship found between node 'crc' and this object Nov 25 15:05:49 crc kubenswrapper[4806]: E1125 15:05:49.727815 4806 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"obo-prometheus-operator-admission-webhook-service-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-operators\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 15:05:49 crc kubenswrapper[4806]: W1125 15:05:49.727826 4806 reflector.go:561] object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-fjrfv": failed to list *v1.Secret: secrets "obo-prometheus-operator-admission-webhook-dockercfg-fjrfv" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-operators": no relationship found between node 'crc' and this object Nov 25 15:05:49 crc kubenswrapper[4806]: E1125 15:05:49.727917 4806 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-fjrfv\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"obo-prometheus-operator-admission-webhook-dockercfg-fjrfv\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-operators\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 15:05:49 crc kubenswrapper[4806]: I1125 
15:05:49.740216 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr"] Nov 25 15:05:49 crc kubenswrapper[4806]: I1125 15:05:49.741163 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr" Nov 25 15:05:49 crc kubenswrapper[4806]: I1125 15:05:49.791896 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/028bbcd6-a8e8-470e-b603-6f7a1a68152d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr\" (UID: \"028bbcd6-a8e8-470e-b603-6f7a1a68152d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr" Nov 25 15:05:49 crc kubenswrapper[4806]: I1125 15:05:49.791952 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4xdk\" (UniqueName: \"kubernetes.io/projected/380a6ec0-8579-4cf8-bd81-52186962d2ed-kube-api-access-g4xdk\") pod \"obo-prometheus-operator-668cf9dfbb-2s9fq\" (UID: \"380a6ec0-8579-4cf8-bd81-52186962d2ed\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2s9fq" Nov 25 15:05:49 crc kubenswrapper[4806]: I1125 15:05:49.792064 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/028bbcd6-a8e8-470e-b603-6f7a1a68152d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr\" (UID: \"028bbcd6-a8e8-470e-b603-6f7a1a68152d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr" Nov 25 15:05:49 crc kubenswrapper[4806]: I1125 15:05:49.792087 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/85f5c34a-cdb4-41c5-8a01-766f57f85a0a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5f86c9868-q4k7j\" (UID: \"85f5c34a-cdb4-41c5-8a01-766f57f85a0a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-q4k7j" Nov 25 15:05:49 crc kubenswrapper[4806]: I1125 15:05:49.792118 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/85f5c34a-cdb4-41c5-8a01-766f57f85a0a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5f86c9868-q4k7j\" (UID: \"85f5c34a-cdb4-41c5-8a01-766f57f85a0a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-q4k7j" Nov 25 15:05:49 crc kubenswrapper[4806]: I1125 15:05:49.826905 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4xdk\" (UniqueName: \"kubernetes.io/projected/380a6ec0-8579-4cf8-bd81-52186962d2ed-kube-api-access-g4xdk\") pod \"obo-prometheus-operator-668cf9dfbb-2s9fq\" (UID: \"380a6ec0-8579-4cf8-bd81-52186962d2ed\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2s9fq" Nov 25 15:05:49 crc kubenswrapper[4806]: I1125 15:05:49.893235 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/028bbcd6-a8e8-470e-b603-6f7a1a68152d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr\" (UID: \"028bbcd6-a8e8-470e-b603-6f7a1a68152d\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr" Nov 25 15:05:49 crc kubenswrapper[4806]: I1125 15:05:49.893301 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/85f5c34a-cdb4-41c5-8a01-766f57f85a0a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5f86c9868-q4k7j\" (UID: \"85f5c34a-cdb4-41c5-8a01-766f57f85a0a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-q4k7j" Nov 25 15:05:49 crc kubenswrapper[4806]: I1125 15:05:49.893353 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/85f5c34a-cdb4-41c5-8a01-766f57f85a0a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5f86c9868-q4k7j\" (UID: \"85f5c34a-cdb4-41c5-8a01-766f57f85a0a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-q4k7j" Nov 25 15:05:49 crc kubenswrapper[4806]: I1125 15:05:49.893383 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/028bbcd6-a8e8-470e-b603-6f7a1a68152d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr\" (UID: \"028bbcd6-a8e8-470e-b603-6f7a1a68152d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr" Nov 25 15:05:49 crc kubenswrapper[4806]: I1125 15:05:49.908861 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2s9fq" Nov 25 15:05:49 crc kubenswrapper[4806]: I1125 15:05:49.913547 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-rtq62"] Nov 25 15:05:49 crc kubenswrapper[4806]: I1125 15:05:49.914650 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-rtq62" Nov 25 15:05:49 crc kubenswrapper[4806]: I1125 15:05:49.920494 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-kcgrl" Nov 25 15:05:49 crc kubenswrapper[4806]: I1125 15:05:49.921153 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Nov 25 15:05:49 crc kubenswrapper[4806]: E1125 15:05:49.961781 4806 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-2s9fq_openshift-operators_380a6ec0-8579-4cf8-bd81-52186962d2ed_0(76f1b157e887fa5a061f4d3ca8a4ce391ac7be6aac778673d7d19cfdb14f593f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 25 15:05:49 crc kubenswrapper[4806]: E1125 15:05:49.961888 4806 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-2s9fq_openshift-operators_380a6ec0-8579-4cf8-bd81-52186962d2ed_0(76f1b157e887fa5a061f4d3ca8a4ce391ac7be6aac778673d7d19cfdb14f593f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2s9fq" Nov 25 15:05:49 crc kubenswrapper[4806]: E1125 15:05:49.961927 4806 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-2s9fq_openshift-operators_380a6ec0-8579-4cf8-bd81-52186962d2ed_0(76f1b157e887fa5a061f4d3ca8a4ce391ac7be6aac778673d7d19cfdb14f593f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2s9fq" Nov 25 15:05:49 crc kubenswrapper[4806]: E1125 15:05:49.961993 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-2s9fq_openshift-operators(380a6ec0-8579-4cf8-bd81-52186962d2ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-2s9fq_openshift-operators(380a6ec0-8579-4cf8-bd81-52186962d2ed)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-2s9fq_openshift-operators_380a6ec0-8579-4cf8-bd81-52186962d2ed_0(76f1b157e887fa5a061f4d3ca8a4ce391ac7be6aac778673d7d19cfdb14f593f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2s9fq" podUID="380a6ec0-8579-4cf8-bd81-52186962d2ed" Nov 25 15:05:49 crc kubenswrapper[4806]: I1125 15:05:49.994488 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz9kr\" (UniqueName: \"kubernetes.io/projected/2f9ca963-6005-48e0-9d0b-7e1c3dc7103e-kube-api-access-nz9kr\") pod \"observability-operator-d8bb48f5d-rtq62\" (UID: \"2f9ca963-6005-48e0-9d0b-7e1c3dc7103e\") " pod="openshift-operators/observability-operator-d8bb48f5d-rtq62" Nov 25 15:05:49 crc kubenswrapper[4806]: I1125 15:05:49.994644 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/2f9ca963-6005-48e0-9d0b-7e1c3dc7103e-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-rtq62\" (UID: \"2f9ca963-6005-48e0-9d0b-7e1c3dc7103e\") " pod="openshift-operators/observability-operator-d8bb48f5d-rtq62" Nov 25 15:05:50 crc kubenswrapper[4806]: I1125 15:05:50.096021 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nz9kr\" (UniqueName: \"kubernetes.io/projected/2f9ca963-6005-48e0-9d0b-7e1c3dc7103e-kube-api-access-nz9kr\") pod \"observability-operator-d8bb48f5d-rtq62\" (UID: \"2f9ca963-6005-48e0-9d0b-7e1c3dc7103e\") " pod="openshift-operators/observability-operator-d8bb48f5d-rtq62" Nov 25 15:05:50 crc kubenswrapper[4806]: I1125 15:05:50.096191 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/2f9ca963-6005-48e0-9d0b-7e1c3dc7103e-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-rtq62\" (UID: \"2f9ca963-6005-48e0-9d0b-7e1c3dc7103e\") " pod="openshift-operators/observability-operator-d8bb48f5d-rtq62" Nov 25 15:05:50 crc kubenswrapper[4806]: I1125 15:05:50.099890 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2541bf92-f78f-4d3a-8000-1a8ca4e90593" path="/var/lib/kubelet/pods/2541bf92-f78f-4d3a-8000-1a8ca4e90593/volumes" Nov 25 15:05:50 crc 
Nov 25 15:05:50 crc kubenswrapper[4806]: I1125 15:05:50.116359 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nz9kr\" (UniqueName: \"kubernetes.io/projected/2f9ca963-6005-48e0-9d0b-7e1c3dc7103e-kube-api-access-nz9kr\") pod \"observability-operator-d8bb48f5d-rtq62\" (UID: \"2f9ca963-6005-48e0-9d0b-7e1c3dc7103e\") " pod="openshift-operators/observability-operator-d8bb48f5d-rtq62"
Nov 25 15:05:50 crc kubenswrapper[4806]: I1125 15:05:50.135857 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-dklz7"]
Nov 25 15:05:50 crc kubenswrapper[4806]: I1125 15:05:50.137716 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-dklz7"
Nov 25 15:05:50 crc kubenswrapper[4806]: I1125 15:05:50.140399 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-jmnzj"
Nov 25 15:05:50 crc kubenswrapper[4806]: I1125 15:05:50.197184 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zslkk\" (UniqueName: \"kubernetes.io/projected/7fb6f239-ec10-48bd-bd37-c1afa567e809-kube-api-access-zslkk\") pod \"perses-operator-5446b9c989-dklz7\" (UID: \"7fb6f239-ec10-48bd-bd37-c1afa567e809\") " pod="openshift-operators/perses-operator-5446b9c989-dklz7"
Nov 25 15:05:50 crc kubenswrapper[4806]: I1125 15:05:50.197344 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/7fb6f239-ec10-48bd-bd37-c1afa567e809-openshift-service-ca\") pod \"perses-operator-5446b9c989-dklz7\" (UID: \"7fb6f239-ec10-48bd-bd37-c1afa567e809\") " pod="openshift-operators/perses-operator-5446b9c989-dklz7"
Nov 25 15:05:50 crc kubenswrapper[4806]: I1125 15:05:50.295694 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-rtq62"
Nov 25 15:05:50 crc kubenswrapper[4806]: I1125 15:05:50.298823 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/7fb6f239-ec10-48bd-bd37-c1afa567e809-openshift-service-ca\") pod \"perses-operator-5446b9c989-dklz7\" (UID: \"7fb6f239-ec10-48bd-bd37-c1afa567e809\") " pod="openshift-operators/perses-operator-5446b9c989-dklz7"
Nov 25 15:05:50 crc kubenswrapper[4806]: I1125 15:05:50.298908 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zslkk\" (UniqueName: \"kubernetes.io/projected/7fb6f239-ec10-48bd-bd37-c1afa567e809-kube-api-access-zslkk\") pod \"perses-operator-5446b9c989-dklz7\" (UID: \"7fb6f239-ec10-48bd-bd37-c1afa567e809\") " pod="openshift-operators/perses-operator-5446b9c989-dklz7"
Nov 25 15:05:50 crc kubenswrapper[4806]: I1125 15:05:50.299934 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/7fb6f239-ec10-48bd-bd37-c1afa567e809-openshift-service-ca\") pod \"perses-operator-5446b9c989-dklz7\" (UID: \"7fb6f239-ec10-48bd-bd37-c1afa567e809\") " pod="openshift-operators/perses-operator-5446b9c989-dklz7"
Nov 25 15:05:50 crc kubenswrapper[4806]: I1125 15:05:50.317293 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zslkk\" (UniqueName: \"kubernetes.io/projected/7fb6f239-ec10-48bd-bd37-c1afa567e809-kube-api-access-zslkk\") pod \"perses-operator-5446b9c989-dklz7\" (UID: \"7fb6f239-ec10-48bd-bd37-c1afa567e809\") " pod="openshift-operators/perses-operator-5446b9c989-dklz7"
Nov 25 15:05:50 crc kubenswrapper[4806]: E1125 15:05:50.329279 4806 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-rtq62_openshift-operators_2f9ca963-6005-48e0-9d0b-7e1c3dc7103e_0(96136a99d48678c04322d6784341e7f00dffe2bbae00173252a0191297d238b8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Nov 25 15:05:50 crc kubenswrapper[4806]: E1125 15:05:50.329393 4806 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-rtq62_openshift-operators_2f9ca963-6005-48e0-9d0b-7e1c3dc7103e_0(96136a99d48678c04322d6784341e7f00dffe2bbae00173252a0191297d238b8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-rtq62"
Nov 25 15:05:50 crc kubenswrapper[4806]: E1125 15:05:50.329427 4806 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-rtq62_openshift-operators_2f9ca963-6005-48e0-9d0b-7e1c3dc7103e_0(96136a99d48678c04322d6784341e7f00dffe2bbae00173252a0191297d238b8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-rtq62"
Nov 25 15:05:50 crc kubenswrapper[4806]: E1125 15:05:50.329502 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-d8bb48f5d-rtq62_openshift-operators(2f9ca963-6005-48e0-9d0b-7e1c3dc7103e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-d8bb48f5d-rtq62_openshift-operators(2f9ca963-6005-48e0-9d0b-7e1c3dc7103e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-rtq62_openshift-operators_2f9ca963-6005-48e0-9d0b-7e1c3dc7103e_0(96136a99d48678c04322d6784341e7f00dffe2bbae00173252a0191297d238b8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-d8bb48f5d-rtq62" podUID="2f9ca963-6005-48e0-9d0b-7e1c3dc7103e"
Nov 25 15:05:50 crc kubenswrapper[4806]: I1125 15:05:50.463560 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-dklz7"
Nov 25 15:05:50 crc kubenswrapper[4806]: E1125 15:05:50.491153 4806 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-dklz7_openshift-operators_7fb6f239-ec10-48bd-bd37-c1afa567e809_0(5ad576d477e5c3361285149b84c0f785868fa2d748f68c6693d251db9ba1e504): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Nov 25 15:05:50 crc kubenswrapper[4806]: E1125 15:05:50.491551 4806 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-dklz7_openshift-operators_7fb6f239-ec10-48bd-bd37-c1afa567e809_0(5ad576d477e5c3361285149b84c0f785868fa2d748f68c6693d251db9ba1e504): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-dklz7"
Nov 25 15:05:50 crc kubenswrapper[4806]: E1125 15:05:50.491580 4806 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-dklz7_openshift-operators_7fb6f239-ec10-48bd-bd37-c1afa567e809_0(5ad576d477e5c3361285149b84c0f785868fa2d748f68c6693d251db9ba1e504): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-dklz7"
Nov 25 15:05:50 crc kubenswrapper[4806]: E1125 15:05:50.491638 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5446b9c989-dklz7_openshift-operators(7fb6f239-ec10-48bd-bd37-c1afa567e809)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5446b9c989-dklz7_openshift-operators(7fb6f239-ec10-48bd-bd37-c1afa567e809)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-dklz7_openshift-operators_7fb6f239-ec10-48bd-bd37-c1afa567e809_0(5ad576d477e5c3361285149b84c0f785868fa2d748f68c6693d251db9ba1e504): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5446b9c989-dklz7" podUID="7fb6f239-ec10-48bd-bd37-c1afa567e809"
Nov 25 15:05:50 crc kubenswrapper[4806]: I1125 15:05:50.801893 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" event={"ID":"dbd029f2-3ca2-42e8-8493-46cee86328bc","Type":"ContainerStarted","Data":"802bafc0a2b2a066b3df2c567423d7adc2ebfcdbda3dc6d603df6a24790d5ced"}
Nov 25 15:05:50 crc kubenswrapper[4806]: E1125 15:05:50.893971 4806 secret.go:188] Couldn't get secret openshift-operators/obo-prometheus-operator-admission-webhook-service-cert: failed to sync secret cache: timed out waiting for the condition
Nov 25 15:05:50 crc kubenswrapper[4806]: E1125 15:05:50.894041 4806 secret.go:188] Couldn't get secret openshift-operators/obo-prometheus-operator-admission-webhook-service-cert: failed to sync secret cache: timed out waiting for the condition
Nov 25 15:05:50 crc kubenswrapper[4806]: E1125 15:05:50.894078 4806 secret.go:188] Couldn't get secret openshift-operators/obo-prometheus-operator-admission-webhook-service-cert: failed to sync secret cache: timed out waiting for the condition
Nov 25 15:05:50 crc kubenswrapper[4806]: E1125 15:05:50.894099 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/028bbcd6-a8e8-470e-b603-6f7a1a68152d-webhook-cert podName:028bbcd6-a8e8-470e-b603-6f7a1a68152d nodeName:}" failed. No retries permitted until 2025-11-25 15:05:51.394074859 +0000 UTC m=+784.046217270 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/028bbcd6-a8e8-470e-b603-6f7a1a68152d-webhook-cert") pod "obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr" (UID: "028bbcd6-a8e8-470e-b603-6f7a1a68152d") : failed to sync secret cache: timed out waiting for the condition
Nov 25 15:05:50 crc kubenswrapper[4806]: E1125 15:05:50.894110 4806 secret.go:188] Couldn't get secret openshift-operators/obo-prometheus-operator-admission-webhook-service-cert: failed to sync secret cache: timed out waiting for the condition
Nov 25 15:05:50 crc kubenswrapper[4806]: E1125 15:05:50.894274 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85f5c34a-cdb4-41c5-8a01-766f57f85a0a-apiservice-cert podName:85f5c34a-cdb4-41c5-8a01-766f57f85a0a nodeName:}" failed. No retries permitted until 2025-11-25 15:05:51.394214163 +0000 UTC m=+784.046356724 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/85f5c34a-cdb4-41c5-8a01-766f57f85a0a-apiservice-cert") pod "obo-prometheus-operator-admission-webhook-5f86c9868-q4k7j" (UID: "85f5c34a-cdb4-41c5-8a01-766f57f85a0a") : failed to sync secret cache: timed out waiting for the condition
Nov 25 15:05:50 crc kubenswrapper[4806]: E1125 15:05:50.894306 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/028bbcd6-a8e8-470e-b603-6f7a1a68152d-apiservice-cert podName:028bbcd6-a8e8-470e-b603-6f7a1a68152d nodeName:}" failed. No retries permitted until 2025-11-25 15:05:51.394291495 +0000 UTC m=+784.046434126 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/028bbcd6-a8e8-470e-b603-6f7a1a68152d-apiservice-cert") pod "obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr" (UID: "028bbcd6-a8e8-470e-b603-6f7a1a68152d") : failed to sync secret cache: timed out waiting for the condition
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/028bbcd6-a8e8-470e-b603-6f7a1a68152d-apiservice-cert") pod "obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr" (UID: "028bbcd6-a8e8-470e-b603-6f7a1a68152d") : failed to sync secret cache: timed out waiting for the condition Nov 25 15:05:50 crc kubenswrapper[4806]: E1125 15:05:50.894360 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85f5c34a-cdb4-41c5-8a01-766f57f85a0a-webhook-cert podName:85f5c34a-cdb4-41c5-8a01-766f57f85a0a nodeName:}" failed. No retries permitted until 2025-11-25 15:05:51.394350637 +0000 UTC m=+784.046493228 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/85f5c34a-cdb4-41c5-8a01-766f57f85a0a-webhook-cert") pod "obo-prometheus-operator-admission-webhook-5f86c9868-q4k7j" (UID: "85f5c34a-cdb4-41c5-8a01-766f57f85a0a") : failed to sync secret cache: timed out waiting for the condition Nov 25 15:05:51 crc kubenswrapper[4806]: I1125 15:05:51.054620 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-fjrfv" Nov 25 15:05:51 crc kubenswrapper[4806]: I1125 15:05:51.143094 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Nov 25 15:05:51 crc kubenswrapper[4806]: I1125 15:05:51.425094 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/028bbcd6-a8e8-470e-b603-6f7a1a68152d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr\" (UID: \"028bbcd6-a8e8-470e-b603-6f7a1a68152d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr" Nov 25 15:05:51 crc kubenswrapper[4806]: I1125 15:05:51.425226 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/028bbcd6-a8e8-470e-b603-6f7a1a68152d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr\" (UID: \"028bbcd6-a8e8-470e-b603-6f7a1a68152d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr" Nov 25 15:05:51 crc kubenswrapper[4806]: I1125 15:05:51.425262 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/85f5c34a-cdb4-41c5-8a01-766f57f85a0a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5f86c9868-q4k7j\" (UID: \"85f5c34a-cdb4-41c5-8a01-766f57f85a0a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-q4k7j" Nov 25 15:05:51 crc kubenswrapper[4806]: I1125 15:05:51.425293 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/85f5c34a-cdb4-41c5-8a01-766f57f85a0a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5f86c9868-q4k7j\" (UID: \"85f5c34a-cdb4-41c5-8a01-766f57f85a0a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-q4k7j" Nov 25 15:05:51 crc kubenswrapper[4806]: I1125 15:05:51.431676 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/028bbcd6-a8e8-470e-b603-6f7a1a68152d-apiservice-cert\") pod 
\"obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr\" (UID: \"028bbcd6-a8e8-470e-b603-6f7a1a68152d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr" Nov 25 15:05:51 crc kubenswrapper[4806]: I1125 15:05:51.432614 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/028bbcd6-a8e8-470e-b603-6f7a1a68152d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr\" (UID: \"028bbcd6-a8e8-470e-b603-6f7a1a68152d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr" Nov 25 15:05:51 crc kubenswrapper[4806]: I1125 15:05:51.446774 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/85f5c34a-cdb4-41c5-8a01-766f57f85a0a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5f86c9868-q4k7j\" (UID: \"85f5c34a-cdb4-41c5-8a01-766f57f85a0a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-q4k7j" Nov 25 15:05:51 crc kubenswrapper[4806]: I1125 15:05:51.447719 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/85f5c34a-cdb4-41c5-8a01-766f57f85a0a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5f86c9868-q4k7j\" (UID: \"85f5c34a-cdb4-41c5-8a01-766f57f85a0a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-q4k7j" Nov 25 15:05:51 crc kubenswrapper[4806]: I1125 15:05:51.542975 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-q4k7j" Nov 25 15:05:51 crc kubenswrapper[4806]: I1125 15:05:51.567824 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr" Nov 25 15:05:51 crc kubenswrapper[4806]: E1125 15:05:51.639132 4806 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5f86c9868-q4k7j_openshift-operators_85f5c34a-cdb4-41c5-8a01-766f57f85a0a_0(4043690d543b87e3b87cab45c2eede4704f9c28f5648c8c42a76e0b1de796711): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 25 15:05:51 crc kubenswrapper[4806]: E1125 15:05:51.639229 4806 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5f86c9868-q4k7j_openshift-operators_85f5c34a-cdb4-41c5-8a01-766f57f85a0a_0(4043690d543b87e3b87cab45c2eede4704f9c28f5648c8c42a76e0b1de796711): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-q4k7j" Nov 25 15:05:51 crc kubenswrapper[4806]: E1125 15:05:51.639262 4806 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5f86c9868-q4k7j_openshift-operators_85f5c34a-cdb4-41c5-8a01-766f57f85a0a_0(4043690d543b87e3b87cab45c2eede4704f9c28f5648c8c42a76e0b1de796711): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-q4k7j" Nov 25 15:05:51 crc kubenswrapper[4806]: E1125 15:05:51.639343 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5f86c9868-q4k7j_openshift-operators(85f5c34a-cdb4-41c5-8a01-766f57f85a0a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5f86c9868-q4k7j_openshift-operators(85f5c34a-cdb4-41c5-8a01-766f57f85a0a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5f86c9868-q4k7j_openshift-operators_85f5c34a-cdb4-41c5-8a01-766f57f85a0a_0(4043690d543b87e3b87cab45c2eede4704f9c28f5648c8c42a76e0b1de796711): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-q4k7j" podUID="85f5c34a-cdb4-41c5-8a01-766f57f85a0a" Nov 25 15:05:51 crc kubenswrapper[4806]: E1125 15:05:51.657281 4806 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr_openshift-operators_028bbcd6-a8e8-470e-b603-6f7a1a68152d_0(dfddb2d821c9ff66adcd3b5421b4e4642d80e15e0bb62b3655902661ef229821): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 25 15:05:51 crc kubenswrapper[4806]: E1125 15:05:51.657415 4806 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr_openshift-operators_028bbcd6-a8e8-470e-b603-6f7a1a68152d_0(dfddb2d821c9ff66adcd3b5421b4e4642d80e15e0bb62b3655902661ef229821): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr" Nov 25 15:05:51 crc kubenswrapper[4806]: E1125 15:05:51.657450 4806 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr_openshift-operators_028bbcd6-a8e8-470e-b603-6f7a1a68152d_0(dfddb2d821c9ff66adcd3b5421b4e4642d80e15e0bb62b3655902661ef229821): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr" Nov 25 15:05:51 crc kubenswrapper[4806]: E1125 15:05:51.657523 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr_openshift-operators(028bbcd6-a8e8-470e-b603-6f7a1a68152d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr_openshift-operators(028bbcd6-a8e8-470e-b603-6f7a1a68152d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr_openshift-operators_028bbcd6-a8e8-470e-b603-6f7a1a68152d_0(dfddb2d821c9ff66adcd3b5421b4e4642d80e15e0bb62b3655902661ef229821): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr" podUID="028bbcd6-a8e8-470e-b603-6f7a1a68152d" Nov 25 15:05:54 crc kubenswrapper[4806]: I1125 15:05:54.832112 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" event={"ID":"dbd029f2-3ca2-42e8-8493-46cee86328bc","Type":"ContainerStarted","Data":"e2a2d8c03e6404c2b20e393ac8ae70aec9617dff834c128d80b718156340ff63"} Nov 25 15:05:54 crc kubenswrapper[4806]: I1125 15:05:54.833288 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:54 crc kubenswrapper[4806]: I1125 15:05:54.833426 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:54 crc kubenswrapper[4806]: I1125 15:05:54.833576 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:54 crc kubenswrapper[4806]: I1125 15:05:54.853843 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-q4k7j"] Nov 25 15:05:54 crc kubenswrapper[4806]: I1125 15:05:54.854051 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-q4k7j" Nov 25 15:05:54 crc kubenswrapper[4806]: I1125 15:05:54.854744 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-q4k7j" Nov 25 15:05:54 crc kubenswrapper[4806]: I1125 15:05:54.865868 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-2s9fq"] Nov 25 15:05:54 crc kubenswrapper[4806]: I1125 15:05:54.866068 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2s9fq" Nov 25 15:05:54 crc kubenswrapper[4806]: I1125 15:05:54.866715 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2s9fq" Nov 25 15:05:54 crc kubenswrapper[4806]: I1125 15:05:54.872867 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-rtq62"] Nov 25 15:05:54 crc kubenswrapper[4806]: I1125 15:05:54.873017 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-rtq62" Nov 25 15:05:54 crc kubenswrapper[4806]: I1125 15:05:54.875085 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-rtq62" Nov 25 15:05:54 crc kubenswrapper[4806]: I1125 15:05:54.884533 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" podStartSLOduration=9.884502924 podStartE2EDuration="9.884502924s" podCreationTimestamp="2025-11-25 15:05:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:05:54.875348436 +0000 UTC m=+787.527490867" watchObservedRunningTime="2025-11-25 15:05:54.884502924 +0000 UTC m=+787.536645335" Nov 25 15:05:54 crc kubenswrapper[4806]: I1125 15:05:54.886597 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr"] Nov 25 15:05:54 crc kubenswrapper[4806]: I1125 15:05:54.886793 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr" Nov 25 15:05:54 crc kubenswrapper[4806]: I1125 15:05:54.887446 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr" Nov 25 15:05:54 crc kubenswrapper[4806]: I1125 15:05:54.904625 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:54 crc kubenswrapper[4806]: I1125 15:05:54.905700 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:05:54 crc kubenswrapper[4806]: I1125 15:05:54.911525 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-dklz7"] Nov 25 15:05:54 crc kubenswrapper[4806]: I1125 15:05:54.911690 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-dklz7" Nov 25 15:05:54 crc kubenswrapper[4806]: I1125 15:05:54.912346 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-dklz7" Nov 25 15:05:54 crc kubenswrapper[4806]: E1125 15:05:54.977900 4806 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5f86c9868-q4k7j_openshift-operators_85f5c34a-cdb4-41c5-8a01-766f57f85a0a_0(58c1772e57f4f455f1a5693c13e5ee3f34f50f46f16b63d0c970b436bd013392): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 25 15:05:54 crc kubenswrapper[4806]: E1125 15:05:54.977977 4806 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5f86c9868-q4k7j_openshift-operators_85f5c34a-cdb4-41c5-8a01-766f57f85a0a_0(58c1772e57f4f455f1a5693c13e5ee3f34f50f46f16b63d0c970b436bd013392): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-q4k7j" Nov 25 15:05:54 crc kubenswrapper[4806]: E1125 15:05:54.978011 4806 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5f86c9868-q4k7j_openshift-operators_85f5c34a-cdb4-41c5-8a01-766f57f85a0a_0(58c1772e57f4f455f1a5693c13e5ee3f34f50f46f16b63d0c970b436bd013392): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-q4k7j" Nov 25 15:05:54 crc kubenswrapper[4806]: E1125 15:05:54.978068 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5f86c9868-q4k7j_openshift-operators(85f5c34a-cdb4-41c5-8a01-766f57f85a0a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5f86c9868-q4k7j_openshift-operators(85f5c34a-cdb4-41c5-8a01-766f57f85a0a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5f86c9868-q4k7j_openshift-operators_85f5c34a-cdb4-41c5-8a01-766f57f85a0a_0(58c1772e57f4f455f1a5693c13e5ee3f34f50f46f16b63d0c970b436bd013392): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-q4k7j" podUID="85f5c34a-cdb4-41c5-8a01-766f57f85a0a" Nov 25 15:05:54 crc kubenswrapper[4806]: E1125 15:05:54.989325 4806 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-2s9fq_openshift-operators_380a6ec0-8579-4cf8-bd81-52186962d2ed_0(d8e91eb0b25e7361cc88e0630496b63445d13c27e32c10d58f938bb82904e9d6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 25 15:05:54 crc kubenswrapper[4806]: E1125 15:05:54.989421 4806 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-2s9fq_openshift-operators_380a6ec0-8579-4cf8-bd81-52186962d2ed_0(d8e91eb0b25e7361cc88e0630496b63445d13c27e32c10d58f938bb82904e9d6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2s9fq" Nov 25 15:05:54 crc kubenswrapper[4806]: E1125 15:05:54.989459 4806 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-2s9fq_openshift-operators_380a6ec0-8579-4cf8-bd81-52186962d2ed_0(d8e91eb0b25e7361cc88e0630496b63445d13c27e32c10d58f938bb82904e9d6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2s9fq" Nov 25 15:05:54 crc kubenswrapper[4806]: E1125 15:05:54.989525 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-668cf9dfbb-2s9fq_openshift-operators(380a6ec0-8579-4cf8-bd81-52186962d2ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-668cf9dfbb-2s9fq_openshift-operators(380a6ec0-8579-4cf8-bd81-52186962d2ed)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-668cf9dfbb-2s9fq_openshift-operators_380a6ec0-8579-4cf8-bd81-52186962d2ed_0(d8e91eb0b25e7361cc88e0630496b63445d13c27e32c10d58f938bb82904e9d6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2s9fq" podUID="380a6ec0-8579-4cf8-bd81-52186962d2ed" Nov 25 15:05:55 crc kubenswrapper[4806]: E1125 15:05:55.010178 4806 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-rtq62_openshift-operators_2f9ca963-6005-48e0-9d0b-7e1c3dc7103e_0(013c8c28584989ea283cf81bed8eef2071ba63b6bed8a049c692a4fccef23f7d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 25 15:05:55 crc kubenswrapper[4806]: E1125 15:05:55.010277 4806 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-rtq62_openshift-operators_2f9ca963-6005-48e0-9d0b-7e1c3dc7103e_0(013c8c28584989ea283cf81bed8eef2071ba63b6bed8a049c692a4fccef23f7d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-rtq62" Nov 25 15:05:55 crc kubenswrapper[4806]: E1125 15:05:55.010303 4806 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-rtq62_openshift-operators_2f9ca963-6005-48e0-9d0b-7e1c3dc7103e_0(013c8c28584989ea283cf81bed8eef2071ba63b6bed8a049c692a4fccef23f7d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-d8bb48f5d-rtq62" Nov 25 15:05:55 crc kubenswrapper[4806]: E1125 15:05:55.010374 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-d8bb48f5d-rtq62_openshift-operators(2f9ca963-6005-48e0-9d0b-7e1c3dc7103e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-d8bb48f5d-rtq62_openshift-operators(2f9ca963-6005-48e0-9d0b-7e1c3dc7103e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-d8bb48f5d-rtq62_openshift-operators_2f9ca963-6005-48e0-9d0b-7e1c3dc7103e_0(013c8c28584989ea283cf81bed8eef2071ba63b6bed8a049c692a4fccef23f7d): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-d8bb48f5d-rtq62" podUID="2f9ca963-6005-48e0-9d0b-7e1c3dc7103e" Nov 25 15:05:55 crc kubenswrapper[4806]: E1125 15:05:55.010716 4806 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr_openshift-operators_028bbcd6-a8e8-470e-b603-6f7a1a68152d_0(5d5b2d85bf1a6dd90536db96dfd14d0bd122a6e3d4022fe676b9b3c1fc998637): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 25 15:05:55 crc kubenswrapper[4806]: E1125 15:05:55.010750 4806 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr_openshift-operators_028bbcd6-a8e8-470e-b603-6f7a1a68152d_0(5d5b2d85bf1a6dd90536db96dfd14d0bd122a6e3d4022fe676b9b3c1fc998637): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr" Nov 25 15:05:55 crc kubenswrapper[4806]: E1125 15:05:55.010771 4806 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr_openshift-operators_028bbcd6-a8e8-470e-b603-6f7a1a68152d_0(5d5b2d85bf1a6dd90536db96dfd14d0bd122a6e3d4022fe676b9b3c1fc998637): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr" Nov 25 15:05:55 crc kubenswrapper[4806]: E1125 15:05:55.010821 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr_openshift-operators(028bbcd6-a8e8-470e-b603-6f7a1a68152d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr_openshift-operators(028bbcd6-a8e8-470e-b603-6f7a1a68152d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr_openshift-operators_028bbcd6-a8e8-470e-b603-6f7a1a68152d_0(5d5b2d85bf1a6dd90536db96dfd14d0bd122a6e3d4022fe676b9b3c1fc998637): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr" podUID="028bbcd6-a8e8-470e-b603-6f7a1a68152d" Nov 25 15:05:55 crc kubenswrapper[4806]: E1125 15:05:55.016390 4806 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-dklz7_openshift-operators_7fb6f239-ec10-48bd-bd37-c1afa567e809_0(d8248df1bc5444a8a6fbc3152c988ad75f3147bcdaec741b8bf6cce1cfb6f54b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
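Every CreatePodSandboxError in the stretch above has the same root cause: the pod network provider had not yet written a CNI configuration into /etc/kubernetes/cni/net.d/, so CRI-O could not attach a network to any new sandbox. Each failed attempt produces four records — log.go:32, kuberuntime_sandbox.go:72, kuberuntime_manager.go:1170, then pod_workers.go:1301 — which is one error propagating up the kubelet's call chain, not four separate failures. Consistent with that, ovnkube-node's readiness probes flip to "ready" at 15:05:54, and from 15:06:06 onward the same pods get sandboxes. A minimal triage sketch, assuming shell access to the node, an available oc client, and that the kubelet runs as the kubelet systemd unit (none of which this log states directly):

    # Empty while the network provider is still starting; OVN-Kubernetes
    # writes its config here once ovnkube-node is up
    $ ls /etc/kubernetes/cni/net.d/
    # Sandbox creation should begin succeeding once ovnkube-node is Ready
    $ oc -n openshift-ovn-kubernetes get pods -o wide
    # Tally how many sandbox attempts failed for this reason
    $ journalctl -u kubelet | grep -c 'no CNI configuration file'

In this log the recovery is visible without any intervention: for example, perses-operator-5446b9c989-dklz7 receives sandbox faac804e… as a ContainerStarted event at 15:06:06.911731, immediately after the network provider comes up.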
Nov 25 15:05:55 crc kubenswrapper[4806]: E1125 15:05:55.016445 4806 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-dklz7_openshift-operators_7fb6f239-ec10-48bd-bd37-c1afa567e809_0(d8248df1bc5444a8a6fbc3152c988ad75f3147bcdaec741b8bf6cce1cfb6f54b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-dklz7" Nov 25 15:05:55 crc kubenswrapper[4806]: E1125 15:05:55.016466 4806 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-dklz7_openshift-operators_7fb6f239-ec10-48bd-bd37-c1afa567e809_0(d8248df1bc5444a8a6fbc3152c988ad75f3147bcdaec741b8bf6cce1cfb6f54b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5446b9c989-dklz7" Nov 25 15:05:55 crc kubenswrapper[4806]: E1125 15:05:55.016514 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5446b9c989-dklz7_openshift-operators(7fb6f239-ec10-48bd-bd37-c1afa567e809)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5446b9c989-dklz7_openshift-operators(7fb6f239-ec10-48bd-bd37-c1afa567e809)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5446b9c989-dklz7_openshift-operators_7fb6f239-ec10-48bd-bd37-c1afa567e809_0(d8248df1bc5444a8a6fbc3152c988ad75f3147bcdaec741b8bf6cce1cfb6f54b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5446b9c989-dklz7" podUID="7fb6f239-ec10-48bd-bd37-c1afa567e809" Nov 25 15:06:06 crc kubenswrapper[4806]: I1125 15:06:06.089342 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-q4k7j" Nov 25 15:06:06 crc kubenswrapper[4806]: I1125 15:06:06.089424 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-dklz7" Nov 25 15:06:06 crc kubenswrapper[4806]: I1125 15:06:06.090924 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-q4k7j" Nov 25 15:06:06 crc kubenswrapper[4806]: I1125 15:06:06.091073 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-dklz7" Nov 25 15:06:06 crc kubenswrapper[4806]: I1125 15:06:06.579360 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-dklz7"] Nov 25 15:06:06 crc kubenswrapper[4806]: I1125 15:06:06.582248 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-q4k7j"] Nov 25 15:06:06 crc kubenswrapper[4806]: I1125 15:06:06.910475 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-q4k7j" event={"ID":"85f5c34a-cdb4-41c5-8a01-766f57f85a0a","Type":"ContainerStarted","Data":"4c465776b01a40bc2d7fc78fecf9622dac45fd69af2a5dd6111939a5f0d924b7"} Nov 25 15:06:06 crc kubenswrapper[4806]: I1125 15:06:06.911731 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-dklz7" event={"ID":"7fb6f239-ec10-48bd-bd37-c1afa567e809","Type":"ContainerStarted","Data":"faac804e45ec0ad7807d0195ca03134a02ceb05fd9c3f7400aab41f65b998799"} Nov 25 15:06:07 crc kubenswrapper[4806]: I1125 15:06:07.088941 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-rtq62" Nov 25 15:06:07 crc kubenswrapper[4806]: I1125 15:06:07.089005 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2s9fq" Nov 25 15:06:07 crc kubenswrapper[4806]: I1125 15:06:07.089626 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2s9fq" Nov 25 15:06:07 crc kubenswrapper[4806]: I1125 15:06:07.089626 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-rtq62" Nov 25 15:06:07 crc kubenswrapper[4806]: I1125 15:06:07.481188 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-rtq62"] Nov 25 15:06:07 crc kubenswrapper[4806]: I1125 15:06:07.527885 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-2s9fq"] Nov 25 15:06:07 crc kubenswrapper[4806]: W1125 15:06:07.549655 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod380a6ec0_8579_4cf8_bd81_52186962d2ed.slice/crio-15cb3da7c2d82f5151e3a250f00f3edcb3871e58101b6ffe2ca2024fac352f69 WatchSource:0}: Error finding container 15cb3da7c2d82f5151e3a250f00f3edcb3871e58101b6ffe2ca2024fac352f69: Status 404 returned error can't find the container with id 15cb3da7c2d82f5151e3a250f00f3edcb3871e58101b6ffe2ca2024fac352f69 Nov 25 15:06:07 crc kubenswrapper[4806]: I1125 15:06:07.921368 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-rtq62" event={"ID":"2f9ca963-6005-48e0-9d0b-7e1c3dc7103e","Type":"ContainerStarted","Data":"ffd152fbc2030f092bd9d665513682a034321ccb605bffed506b9c5e1567057a"} Nov 25 15:06:07 crc kubenswrapper[4806]: I1125 15:06:07.923134 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2s9fq" event={"ID":"380a6ec0-8579-4cf8-bd81-52186962d2ed","Type":"ContainerStarted","Data":"15cb3da7c2d82f5151e3a250f00f3edcb3871e58101b6ffe2ca2024fac352f69"} Nov 25 15:06:09 crc kubenswrapper[4806]: I1125 15:06:09.090639 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr" Nov 25 15:06:09 crc kubenswrapper[4806]: I1125 15:06:09.091434 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr" Nov 25 15:06:09 crc kubenswrapper[4806]: I1125 15:06:09.603909 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr"] Nov 25 15:06:09 crc kubenswrapper[4806]: I1125 15:06:09.944166 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr" event={"ID":"028bbcd6-a8e8-470e-b603-6f7a1a68152d","Type":"ContainerStarted","Data":"bfbd547312e4b69a0e6ca02ff9353a617e1b6bc04e62fa13ed090f756bf72539"} Nov 25 15:06:16 crc kubenswrapper[4806]: I1125 15:06:16.240354 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qpjrq" Nov 25 15:06:18 crc kubenswrapper[4806]: I1125 15:06:18.935722 4806 patch_prober.go:28] interesting pod/machine-config-daemon-kclf8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:06:18 crc kubenswrapper[4806]: I1125 15:06:18.936368 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:06:24 crc kubenswrapper[4806]: E1125 15:06:24.591187 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:203cf5b9dc1460f09e75f58d8b5cf7df5e57c18c8c6a41c14b5e8977d83263f3" Nov 25 15:06:24 crc kubenswrapper[4806]: E1125 15:06:24.591763 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:203cf5b9dc1460f09e75f58d8b5cf7df5e57c18c8c6a41c14b5e8977d83263f3,Command:[],Args:[--prometheus-config-reloader=$(RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER) --prometheus-instance-selector=app.kubernetes.io/managed-by=observability-operator --alertmanager-instance-selector=app.kubernetes.io/managed-by=observability-operator --thanos-ruler-instance-selector=app.kubernetes.io/managed-by=observability-operator],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOGC,Value:30,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER,Value:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:1133c973c7472c665f910a722e19c8e2e27accb34b90fab67f14548627ce9c62,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{157286400 0} {} 150Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g4xdk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-668cf9dfbb-2s9fq_openshift-operators(380a6ec0-8579-4cf8-bd81-52186962d2ed): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 15:06:24 crc kubenswrapper[4806]: E1125 15:06:24.593179 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2s9fq" podUID="380a6ec0-8579-4cf8-bd81-52186962d2ed" Nov 25 15:06:25 crc kubenswrapper[4806]: E1125 15:06:25.103922 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:203cf5b9dc1460f09e75f58d8b5cf7df5e57c18c8c6a41c14b5e8977d83263f3\\\"\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2s9fq" podUID="380a6ec0-8579-4cf8-bd81-52186962d2ed" Nov 25 15:06:26 crc kubenswrapper[4806]: I1125 15:06:26.078502 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-dklz7" event={"ID":"7fb6f239-ec10-48bd-bd37-c1afa567e809","Type":"ContainerStarted","Data":"a846e39d431ba8d79b168db39a7f52a1710411815bd0cd2f068aa224b7fa335e"} Nov 25 15:06:26 crc kubenswrapper[4806]: I1125 15:06:26.078682 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-dklz7" Nov 25 15:06:26 crc kubenswrapper[4806]: I1125 15:06:26.080148 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-rtq62" event={"ID":"2f9ca963-6005-48e0-9d0b-7e1c3dc7103e","Type":"ContainerStarted","Data":"135a229eb279ec3ba2bf6df3653bb02f55634f325fbd8438ce6522ca1ee3b94f"} Nov 25 15:06:26 crc kubenswrapper[4806]: I1125 15:06:26.082930 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-rtq62" Nov 25 15:06:26 crc kubenswrapper[4806]: I1125 15:06:26.084924 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-q4k7j" event={"ID":"85f5c34a-cdb4-41c5-8a01-766f57f85a0a","Type":"ContainerStarted","Data":"832b4b8555d7dfdbaee416b22b1f3d796aa1a1c9b7a204980c6592569b90080b"} Nov 25 15:06:26 crc kubenswrapper[4806]: I1125 15:06:26.086948 4806 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr" event={"ID":"028bbcd6-a8e8-470e-b603-6f7a1a68152d","Type":"ContainerStarted","Data":"d90b8a10c55587e2effacee13a43decb2c8288d90e6dfb4ec865ce2eed17615e"} Nov 25 15:06:26 crc kubenswrapper[4806]: I1125 15:06:26.100165 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-dklz7" podStartSLOduration=17.547146161 podStartE2EDuration="36.100136917s" podCreationTimestamp="2025-11-25 15:05:50 +0000 UTC" firstStartedPulling="2025-11-25 15:06:06.592068484 +0000 UTC m=+799.244210895" lastFinishedPulling="2025-11-25 15:06:25.14505924 +0000 UTC m=+817.797201651" observedRunningTime="2025-11-25 15:06:26.096959877 +0000 UTC m=+818.749102298" watchObservedRunningTime="2025-11-25 15:06:26.100136917 +0000 UTC m=+818.752279328" Nov 25 15:06:26 crc kubenswrapper[4806]: I1125 15:06:26.123477 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-d8bb48f5d-rtq62" podStartSLOduration=19.41634749 podStartE2EDuration="37.123447703s" podCreationTimestamp="2025-11-25 15:05:49 +0000 UTC" firstStartedPulling="2025-11-25 15:06:07.499633123 +0000 UTC m=+800.151775534" lastFinishedPulling="2025-11-25 15:06:25.206733336 +0000 UTC m=+817.858875747" observedRunningTime="2025-11-25 15:06:26.120299714 +0000 UTC m=+818.772442145" watchObservedRunningTime="2025-11-25 15:06:26.123447703 +0000 UTC m=+818.775590114" Nov 25 15:06:26 crc kubenswrapper[4806]: I1125 15:06:26.143126 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-q4k7j" podStartSLOduration=18.565044094 podStartE2EDuration="37.143097916s" podCreationTimestamp="2025-11-25 15:05:49 +0000 UTC" firstStartedPulling="2025-11-25 15:06:06.592124285 +0000 UTC m=+799.244266696" lastFinishedPulling="2025-11-25 15:06:25.170178117 +0000 UTC m=+817.822320518" observedRunningTime="2025-11-25 15:06:26.140803711 +0000 UTC m=+818.792946132" watchObservedRunningTime="2025-11-25 15:06:26.143097916 +0000 UTC m=+818.795240327" Nov 25 15:06:26 crc kubenswrapper[4806]: I1125 15:06:26.153525 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-rtq62" Nov 25 15:06:26 crc kubenswrapper[4806]: I1125 15:06:26.208449 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr" podStartSLOduration=21.718453346 podStartE2EDuration="37.208420405s" podCreationTimestamp="2025-11-25 15:05:49 +0000 UTC" firstStartedPulling="2025-11-25 15:06:09.660504494 +0000 UTC m=+802.312646905" lastFinishedPulling="2025-11-25 15:06:25.150471553 +0000 UTC m=+817.802613964" observedRunningTime="2025-11-25 15:06:26.205887694 +0000 UTC m=+818.858030105" watchObservedRunningTime="2025-11-25 15:06:26.208420405 +0000 UTC m=+818.860562816" Nov 25 15:06:30 crc kubenswrapper[4806]: I1125 15:06:30.466869 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-dklz7" Nov 25 15:06:35 crc kubenswrapper[4806]: I1125 15:06:35.552135 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-mw4xn"] Nov 25 15:06:35 crc kubenswrapper[4806]: I1125 15:06:35.553682 4806 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-mw4xn" Nov 25 15:06:35 crc kubenswrapper[4806]: I1125 15:06:35.556983 4806 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-zj8g8" Nov 25 15:06:35 crc kubenswrapper[4806]: I1125 15:06:35.561821 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-mw4xn"] Nov 25 15:06:35 crc kubenswrapper[4806]: I1125 15:06:35.563111 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 25 15:06:35 crc kubenswrapper[4806]: I1125 15:06:35.563490 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 25 15:06:35 crc kubenswrapper[4806]: I1125 15:06:35.576365 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-2nhx4"] Nov 25 15:06:35 crc kubenswrapper[4806]: I1125 15:06:35.577563 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-2nhx4" Nov 25 15:06:35 crc kubenswrapper[4806]: I1125 15:06:35.581194 4806 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-fd226" Nov 25 15:06:35 crc kubenswrapper[4806]: I1125 15:06:35.617680 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-jssct"] Nov 25 15:06:35 crc kubenswrapper[4806]: I1125 15:06:35.619018 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-jssct" Nov 25 15:06:35 crc kubenswrapper[4806]: I1125 15:06:35.624711 4806 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-d678x" Nov 25 15:06:35 crc kubenswrapper[4806]: I1125 15:06:35.625425 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-2nhx4"] Nov 25 15:06:35 crc kubenswrapper[4806]: I1125 15:06:35.649804 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-jssct"] Nov 25 15:06:35 crc kubenswrapper[4806]: I1125 15:06:35.734131 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgbkz\" (UniqueName: \"kubernetes.io/projected/95b3b0c2-b552-4f25-803e-f2ae9d53add8-kube-api-access-zgbkz\") pod \"cert-manager-5b446d88c5-2nhx4\" (UID: \"95b3b0c2-b552-4f25-803e-f2ae9d53add8\") " pod="cert-manager/cert-manager-5b446d88c5-2nhx4" Nov 25 15:06:35 crc kubenswrapper[4806]: I1125 15:06:35.734458 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpjdc\" (UniqueName: \"kubernetes.io/projected/672c5c0d-1d2d-4e3e-bccf-6f8fd25f98ae-kube-api-access-qpjdc\") pod \"cert-manager-webhook-5655c58dd6-jssct\" (UID: \"672c5c0d-1d2d-4e3e-bccf-6f8fd25f98ae\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-jssct" Nov 25 15:06:35 crc kubenswrapper[4806]: I1125 15:06:35.734551 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v29ht\" (UniqueName: \"kubernetes.io/projected/9914c048-9845-4535-97d5-2833b53b84d3-kube-api-access-v29ht\") pod \"cert-manager-cainjector-7f985d654d-mw4xn\" (UID: \"9914c048-9845-4535-97d5-2833b53b84d3\") " 
pod="cert-manager/cert-manager-cainjector-7f985d654d-mw4xn" Nov 25 15:06:35 crc kubenswrapper[4806]: I1125 15:06:35.835936 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgbkz\" (UniqueName: \"kubernetes.io/projected/95b3b0c2-b552-4f25-803e-f2ae9d53add8-kube-api-access-zgbkz\") pod \"cert-manager-5b446d88c5-2nhx4\" (UID: \"95b3b0c2-b552-4f25-803e-f2ae9d53add8\") " pod="cert-manager/cert-manager-5b446d88c5-2nhx4" Nov 25 15:06:35 crc kubenswrapper[4806]: I1125 15:06:35.836043 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpjdc\" (UniqueName: \"kubernetes.io/projected/672c5c0d-1d2d-4e3e-bccf-6f8fd25f98ae-kube-api-access-qpjdc\") pod \"cert-manager-webhook-5655c58dd6-jssct\" (UID: \"672c5c0d-1d2d-4e3e-bccf-6f8fd25f98ae\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-jssct" Nov 25 15:06:35 crc kubenswrapper[4806]: I1125 15:06:35.836075 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v29ht\" (UniqueName: \"kubernetes.io/projected/9914c048-9845-4535-97d5-2833b53b84d3-kube-api-access-v29ht\") pod \"cert-manager-cainjector-7f985d654d-mw4xn\" (UID: \"9914c048-9845-4535-97d5-2833b53b84d3\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-mw4xn" Nov 25 15:06:35 crc kubenswrapper[4806]: I1125 15:06:35.858602 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpjdc\" (UniqueName: \"kubernetes.io/projected/672c5c0d-1d2d-4e3e-bccf-6f8fd25f98ae-kube-api-access-qpjdc\") pod \"cert-manager-webhook-5655c58dd6-jssct\" (UID: \"672c5c0d-1d2d-4e3e-bccf-6f8fd25f98ae\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-jssct" Nov 25 15:06:35 crc kubenswrapper[4806]: I1125 15:06:35.858682 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v29ht\" (UniqueName: \"kubernetes.io/projected/9914c048-9845-4535-97d5-2833b53b84d3-kube-api-access-v29ht\") pod \"cert-manager-cainjector-7f985d654d-mw4xn\" (UID: \"9914c048-9845-4535-97d5-2833b53b84d3\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-mw4xn" Nov 25 15:06:35 crc kubenswrapper[4806]: I1125 15:06:35.862434 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgbkz\" (UniqueName: \"kubernetes.io/projected/95b3b0c2-b552-4f25-803e-f2ae9d53add8-kube-api-access-zgbkz\") pod \"cert-manager-5b446d88c5-2nhx4\" (UID: \"95b3b0c2-b552-4f25-803e-f2ae9d53add8\") " pod="cert-manager/cert-manager-5b446d88c5-2nhx4" Nov 25 15:06:35 crc kubenswrapper[4806]: I1125 15:06:35.885256 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-mw4xn" Nov 25 15:06:35 crc kubenswrapper[4806]: I1125 15:06:35.923923 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-2nhx4" Nov 25 15:06:35 crc kubenswrapper[4806]: I1125 15:06:35.949756 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-jssct" Nov 25 15:06:36 crc kubenswrapper[4806]: I1125 15:06:36.410503 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-mw4xn"] Nov 25 15:06:36 crc kubenswrapper[4806]: I1125 15:06:36.427408 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-jssct"] Nov 25 15:06:36 crc kubenswrapper[4806]: W1125 15:06:36.430666 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9914c048_9845_4535_97d5_2833b53b84d3.slice/crio-4a08218f7e6cdc710d2c01447d7e1189512465cd51d99258b43f99d85cf8c38d WatchSource:0}: Error finding container 4a08218f7e6cdc710d2c01447d7e1189512465cd51d99258b43f99d85cf8c38d: Status 404 returned error can't find the container with id 4a08218f7e6cdc710d2c01447d7e1189512465cd51d99258b43f99d85cf8c38d Nov 25 15:06:36 crc kubenswrapper[4806]: W1125 15:06:36.433299 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod672c5c0d_1d2d_4e3e_bccf_6f8fd25f98ae.slice/crio-c7065724a88e3fe2df083e8e8bc92a14c4df701302746406ee75bdd794218fce WatchSource:0}: Error finding container c7065724a88e3fe2df083e8e8bc92a14c4df701302746406ee75bdd794218fce: Status 404 returned error can't find the container with id c7065724a88e3fe2df083e8e8bc92a14c4df701302746406ee75bdd794218fce Nov 25 15:06:36 crc kubenswrapper[4806]: I1125 15:06:36.592156 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-2nhx4"] Nov 25 15:06:37 crc kubenswrapper[4806]: I1125 15:06:37.160338 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-mw4xn" event={"ID":"9914c048-9845-4535-97d5-2833b53b84d3","Type":"ContainerStarted","Data":"4a08218f7e6cdc710d2c01447d7e1189512465cd51d99258b43f99d85cf8c38d"} Nov 25 15:06:37 crc kubenswrapper[4806]: I1125 15:06:37.161893 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-2nhx4" event={"ID":"95b3b0c2-b552-4f25-803e-f2ae9d53add8","Type":"ContainerStarted","Data":"f185cb3bc4798261faa983a13ae523589c3a578d0bbdfd7161fb8b293162b661"} Nov 25 15:06:37 crc kubenswrapper[4806]: I1125 15:06:37.162965 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-jssct" event={"ID":"672c5c0d-1d2d-4e3e-bccf-6f8fd25f98ae","Type":"ContainerStarted","Data":"c7065724a88e3fe2df083e8e8bc92a14c4df701302746406ee75bdd794218fce"} Nov 25 15:06:42 crc kubenswrapper[4806]: I1125 15:06:42.210690 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2s9fq" event={"ID":"380a6ec0-8579-4cf8-bd81-52186962d2ed","Type":"ContainerStarted","Data":"2ca3fe9d5fbd640b10c56f8110eda5a63fca7d80feb462f338898c0068517f9f"} Nov 25 15:06:42 crc kubenswrapper[4806]: I1125 15:06:42.212149 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-jssct" event={"ID":"672c5c0d-1d2d-4e3e-bccf-6f8fd25f98ae","Type":"ContainerStarted","Data":"bb3c7a04069ed138e2aa1ae409c0cd366ad8e38ae564246e5688657be15d1a8a"} Nov 25 15:06:42 crc kubenswrapper[4806]: I1125 15:06:42.212302 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-jssct" Nov 25 15:06:42 crc 
kubenswrapper[4806]: I1125 15:06:42.214101 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-mw4xn" event={"ID":"9914c048-9845-4535-97d5-2833b53b84d3","Type":"ContainerStarted","Data":"b6366c328fb72ba81226ffaaa32d3deb490f80917c1b462b1ac898cc51c44b71"} Nov 25 15:06:42 crc kubenswrapper[4806]: I1125 15:06:42.216987 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-2nhx4" event={"ID":"95b3b0c2-b552-4f25-803e-f2ae9d53add8","Type":"ContainerStarted","Data":"bdbdfaca6eed81ac9dce8cb120b37747934ca2b43695af56c882cbcdf9ee0b96"} Nov 25 15:06:42 crc kubenswrapper[4806]: I1125 15:06:42.243675 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2s9fq" podStartSLOduration=19.505736865 podStartE2EDuration="53.243640274s" podCreationTimestamp="2025-11-25 15:05:49 +0000 UTC" firstStartedPulling="2025-11-25 15:06:07.557553293 +0000 UTC m=+800.209695704" lastFinishedPulling="2025-11-25 15:06:41.295456702 +0000 UTC m=+833.947599113" observedRunningTime="2025-11-25 15:06:42.233366675 +0000 UTC m=+834.885509086" watchObservedRunningTime="2025-11-25 15:06:42.243640274 +0000 UTC m=+834.895782685" Nov 25 15:06:42 crc kubenswrapper[4806]: I1125 15:06:42.261060 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-mw4xn" podStartSLOduration=2.41776543 podStartE2EDuration="7.261028423s" podCreationTimestamp="2025-11-25 15:06:35 +0000 UTC" firstStartedPulling="2025-11-25 15:06:36.433540714 +0000 UTC m=+829.085683125" lastFinishedPulling="2025-11-25 15:06:41.276803707 +0000 UTC m=+833.928946118" observedRunningTime="2025-11-25 15:06:42.257579346 +0000 UTC m=+834.909721767" watchObservedRunningTime="2025-11-25 15:06:42.261028423 +0000 UTC m=+834.913170835" Nov 25 15:06:42 crc kubenswrapper[4806]: I1125 15:06:42.288930 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-2nhx4" podStartSLOduration=2.601384139 podStartE2EDuration="7.288898358s" podCreationTimestamp="2025-11-25 15:06:35 +0000 UTC" firstStartedPulling="2025-11-25 15:06:36.607085549 +0000 UTC m=+829.259227960" lastFinishedPulling="2025-11-25 15:06:41.294599758 +0000 UTC m=+833.946742179" observedRunningTime="2025-11-25 15:06:42.285232705 +0000 UTC m=+834.937375116" watchObservedRunningTime="2025-11-25 15:06:42.288898358 +0000 UTC m=+834.941040769" Nov 25 15:06:42 crc kubenswrapper[4806]: I1125 15:06:42.328616 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-jssct" podStartSLOduration=2.453114835 podStartE2EDuration="7.328595496s" podCreationTimestamp="2025-11-25 15:06:35 +0000 UTC" firstStartedPulling="2025-11-25 15:06:36.435688054 +0000 UTC m=+829.087830465" lastFinishedPulling="2025-11-25 15:06:41.311168715 +0000 UTC m=+833.963311126" observedRunningTime="2025-11-25 15:06:42.323475621 +0000 UTC m=+834.975618052" watchObservedRunningTime="2025-11-25 15:06:42.328595496 +0000 UTC m=+834.980737907" Nov 25 15:06:48 crc kubenswrapper[4806]: I1125 15:06:48.935636 4806 patch_prober.go:28] interesting pod/machine-config-daemon-kclf8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:06:48 crc 
kubenswrapper[4806]: I1125 15:06:48.936470 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:06:50 crc kubenswrapper[4806]: I1125 15:06:50.953298 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-jssct" Nov 25 15:07:18 crc kubenswrapper[4806]: I1125 15:07:18.934965 4806 patch_prober.go:28] interesting pod/machine-config-daemon-kclf8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:07:18 crc kubenswrapper[4806]: I1125 15:07:18.935870 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:07:18 crc kubenswrapper[4806]: I1125 15:07:18.935949 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" Nov 25 15:07:18 crc kubenswrapper[4806]: I1125 15:07:18.936836 4806 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"86d8b6d9b2cb5c32be187803dad37de53c56e8b8e0993ab0429e9374ef8c5d27"} pod="openshift-machine-config-operator/machine-config-daemon-kclf8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 15:07:18 crc kubenswrapper[4806]: I1125 15:07:18.936912 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" containerID="cri-o://86d8b6d9b2cb5c32be187803dad37de53c56e8b8e0993ab0429e9374ef8c5d27" gracePeriod=600 Nov 25 15:07:19 crc kubenswrapper[4806]: I1125 15:07:19.477067 4806 generic.go:334] "Generic (PLEG): container finished" podID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerID="86d8b6d9b2cb5c32be187803dad37de53c56e8b8e0993ab0429e9374ef8c5d27" exitCode=0 Nov 25 15:07:19 crc kubenswrapper[4806]: I1125 15:07:19.477141 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" event={"ID":"39baff20-1e9a-48b1-8872-155c5ad5931d","Type":"ContainerDied","Data":"86d8b6d9b2cb5c32be187803dad37de53c56e8b8e0993ab0429e9374ef8c5d27"} Nov 25 15:07:19 crc kubenswrapper[4806]: I1125 15:07:19.477535 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" event={"ID":"39baff20-1e9a-48b1-8872-155c5ad5931d","Type":"ContainerStarted","Data":"83d1d99b89679065a33ab9c018ccbf4f6cc67e15cf7be7b0e62af90abdf246e5"} Nov 25 15:07:19 crc kubenswrapper[4806]: I1125 15:07:19.477558 4806 scope.go:117] "RemoveContainer" containerID="842f56c6e5e9f53ffe1d13b6e4c7354c36b5d058d4d84710d6bfcc9d586f8553" Nov 25 15:07:22 crc kubenswrapper[4806]: I1125 15:07:22.727929 4806 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-marketplace/142e5edc705b0443a978f15b9d74db4e11d2db1d26a61e7f8c9e49e3038wvfb"] Nov 25 15:07:22 crc kubenswrapper[4806]: I1125 15:07:22.734079 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/142e5edc705b0443a978f15b9d74db4e11d2db1d26a61e7f8c9e49e3038wvfb" Nov 25 15:07:22 crc kubenswrapper[4806]: I1125 15:07:22.739556 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/142e5edc705b0443a978f15b9d74db4e11d2db1d26a61e7f8c9e49e3038wvfb"] Nov 25 15:07:22 crc kubenswrapper[4806]: I1125 15:07:22.739923 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 25 15:07:22 crc kubenswrapper[4806]: I1125 15:07:22.859238 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2-util\") pod \"142e5edc705b0443a978f15b9d74db4e11d2db1d26a61e7f8c9e49e3038wvfb\" (UID: \"a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2\") " pod="openshift-marketplace/142e5edc705b0443a978f15b9d74db4e11d2db1d26a61e7f8c9e49e3038wvfb" Nov 25 15:07:22 crc kubenswrapper[4806]: I1125 15:07:22.859337 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xntf7\" (UniqueName: \"kubernetes.io/projected/a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2-kube-api-access-xntf7\") pod \"142e5edc705b0443a978f15b9d74db4e11d2db1d26a61e7f8c9e49e3038wvfb\" (UID: \"a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2\") " pod="openshift-marketplace/142e5edc705b0443a978f15b9d74db4e11d2db1d26a61e7f8c9e49e3038wvfb" Nov 25 15:07:22 crc kubenswrapper[4806]: I1125 15:07:22.859390 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2-bundle\") pod \"142e5edc705b0443a978f15b9d74db4e11d2db1d26a61e7f8c9e49e3038wvfb\" (UID: \"a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2\") " pod="openshift-marketplace/142e5edc705b0443a978f15b9d74db4e11d2db1d26a61e7f8c9e49e3038wvfb" Nov 25 15:07:22 crc kubenswrapper[4806]: I1125 15:07:22.960781 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2-util\") pod \"142e5edc705b0443a978f15b9d74db4e11d2db1d26a61e7f8c9e49e3038wvfb\" (UID: \"a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2\") " pod="openshift-marketplace/142e5edc705b0443a978f15b9d74db4e11d2db1d26a61e7f8c9e49e3038wvfb" Nov 25 15:07:22 crc kubenswrapper[4806]: I1125 15:07:22.960879 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xntf7\" (UniqueName: \"kubernetes.io/projected/a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2-kube-api-access-xntf7\") pod \"142e5edc705b0443a978f15b9d74db4e11d2db1d26a61e7f8c9e49e3038wvfb\" (UID: \"a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2\") " pod="openshift-marketplace/142e5edc705b0443a978f15b9d74db4e11d2db1d26a61e7f8c9e49e3038wvfb" Nov 25 15:07:22 crc kubenswrapper[4806]: I1125 15:07:22.960926 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2-bundle\") pod \"142e5edc705b0443a978f15b9d74db4e11d2db1d26a61e7f8c9e49e3038wvfb\" (UID: \"a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2\") " 
pod="openshift-marketplace/142e5edc705b0443a978f15b9d74db4e11d2db1d26a61e7f8c9e49e3038wvfb" Nov 25 15:07:22 crc kubenswrapper[4806]: I1125 15:07:22.961447 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2-util\") pod \"142e5edc705b0443a978f15b9d74db4e11d2db1d26a61e7f8c9e49e3038wvfb\" (UID: \"a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2\") " pod="openshift-marketplace/142e5edc705b0443a978f15b9d74db4e11d2db1d26a61e7f8c9e49e3038wvfb" Nov 25 15:07:22 crc kubenswrapper[4806]: I1125 15:07:22.961477 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2-bundle\") pod \"142e5edc705b0443a978f15b9d74db4e11d2db1d26a61e7f8c9e49e3038wvfb\" (UID: \"a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2\") " pod="openshift-marketplace/142e5edc705b0443a978f15b9d74db4e11d2db1d26a61e7f8c9e49e3038wvfb" Nov 25 15:07:22 crc kubenswrapper[4806]: I1125 15:07:22.984918 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xntf7\" (UniqueName: \"kubernetes.io/projected/a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2-kube-api-access-xntf7\") pod \"142e5edc705b0443a978f15b9d74db4e11d2db1d26a61e7f8c9e49e3038wvfb\" (UID: \"a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2\") " pod="openshift-marketplace/142e5edc705b0443a978f15b9d74db4e11d2db1d26a61e7f8c9e49e3038wvfb" Nov 25 15:07:23 crc kubenswrapper[4806]: I1125 15:07:23.056021 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/142e5edc705b0443a978f15b9d74db4e11d2db1d26a61e7f8c9e49e3038wvfb" Nov 25 15:07:23 crc kubenswrapper[4806]: I1125 15:07:23.533815 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/142e5edc705b0443a978f15b9d74db4e11d2db1d26a61e7f8c9e49e3038wvfb"] Nov 25 15:07:23 crc kubenswrapper[4806]: W1125 15:07:23.550615 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda78134fd_b0fb_4f66_8d2c_a7e0d8cba9d2.slice/crio-362b2d3b362f48df7c3dfb596a8c9ecfb6c382d87a535e70ffa5f229bbc93849 WatchSource:0}: Error finding container 362b2d3b362f48df7c3dfb596a8c9ecfb6c382d87a535e70ffa5f229bbc93849: Status 404 returned error can't find the container with id 362b2d3b362f48df7c3dfb596a8c9ecfb6c382d87a535e70ffa5f229bbc93849 Nov 25 15:07:24 crc kubenswrapper[4806]: I1125 15:07:24.530598 4806 generic.go:334] "Generic (PLEG): container finished" podID="a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2" containerID="9aaffd80582cef208b6c646e014452f53cfbbde90ab574ccb1871d1d2217769c" exitCode=0 Nov 25 15:07:24 crc kubenswrapper[4806]: I1125 15:07:24.530875 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/142e5edc705b0443a978f15b9d74db4e11d2db1d26a61e7f8c9e49e3038wvfb" event={"ID":"a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2","Type":"ContainerDied","Data":"9aaffd80582cef208b6c646e014452f53cfbbde90ab574ccb1871d1d2217769c"} Nov 25 15:07:24 crc kubenswrapper[4806]: I1125 15:07:24.531150 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/142e5edc705b0443a978f15b9d74db4e11d2db1d26a61e7f8c9e49e3038wvfb" event={"ID":"a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2","Type":"ContainerStarted","Data":"362b2d3b362f48df7c3dfb596a8c9ecfb6c382d87a535e70ffa5f229bbc93849"} Nov 25 15:07:26 crc kubenswrapper[4806]: I1125 15:07:26.006504 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] 
Nov 25 15:07:26 crc kubenswrapper[4806]: I1125 15:07:26.008385 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Nov 25 15:07:26 crc kubenswrapper[4806]: I1125 15:07:26.010430 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Nov 25 15:07:26 crc kubenswrapper[4806]: I1125 15:07:26.010451 4806 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-s7h4x" Nov 25 15:07:26 crc kubenswrapper[4806]: I1125 15:07:26.011402 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Nov 25 15:07:26 crc kubenswrapper[4806]: I1125 15:07:26.025024 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Nov 25 15:07:26 crc kubenswrapper[4806]: I1125 15:07:26.209786 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccn2b\" (UniqueName: \"kubernetes.io/projected/b9d5707d-0270-4ec3-9e31-18b7ce617a3c-kube-api-access-ccn2b\") pod \"minio\" (UID: \"b9d5707d-0270-4ec3-9e31-18b7ce617a3c\") " pod="minio-dev/minio" Nov 25 15:07:26 crc kubenswrapper[4806]: I1125 15:07:26.209864 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b52f35bc-1e19-4723-887d-52d507ef5a3d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b52f35bc-1e19-4723-887d-52d507ef5a3d\") pod \"minio\" (UID: \"b9d5707d-0270-4ec3-9e31-18b7ce617a3c\") " pod="minio-dev/minio" Nov 25 15:07:26 crc kubenswrapper[4806]: I1125 15:07:26.311622 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccn2b\" (UniqueName: \"kubernetes.io/projected/b9d5707d-0270-4ec3-9e31-18b7ce617a3c-kube-api-access-ccn2b\") pod \"minio\" (UID: \"b9d5707d-0270-4ec3-9e31-18b7ce617a3c\") " pod="minio-dev/minio" Nov 25 15:07:26 crc kubenswrapper[4806]: I1125 15:07:26.311990 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b52f35bc-1e19-4723-887d-52d507ef5a3d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b52f35bc-1e19-4723-887d-52d507ef5a3d\") pod \"minio\" (UID: \"b9d5707d-0270-4ec3-9e31-18b7ce617a3c\") " pod="minio-dev/minio" Nov 25 15:07:26 crc kubenswrapper[4806]: I1125 15:07:26.316481 4806 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
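
The csi_attacher line above ("attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...") means the kubevirt.io.hostpath-provisioner node plugin does not advertise the STAGE_UNSTAGE_VOLUME capability, so the kubelet skips NodeStageVolume (the per-node "MountDevice" step into the globalmount path) and proceeds directly to NodePublishVolume — the "MountVolume.SetUp succeeded" entries that follow. A minimal sketch of how a node plugin ends up in that state, assuming the generated Go bindings from github.com/container-storage-interface/spec (illustrative only, not the hostpath-provisioner's actual code):

    package main

    import (
        "context"
        "fmt"

        "github.com/container-storage-interface/spec/lib/go/csi"
    )

    // nodeServer is a stub standing in for a CSI node plugin.
    type nodeServer struct {
        csi.UnimplementedNodeServer
    }

    // Returning an empty capability list tells the kubelet that
    // NodeStageVolume/NodeUnstageVolume are unsupported, so volumes go
    // straight from attach to NodePublishVolume and the kubelet logs
    // "Skipping MountDevice..." as seen above.
    func (s *nodeServer) NodeGetCapabilities(ctx context.Context, _ *csi.NodeGetCapabilitiesRequest) (*csi.NodeGetCapabilitiesResponse, error) {
        return &csi.NodeGetCapabilitiesResponse{Capabilities: nil}, nil
    }

    func main() {
        resp, _ := (&nodeServer{}).NodeGetCapabilities(context.Background(), &csi.NodeGetCapabilitiesRequest{})
        fmt.Printf("advertised node capabilities: %v\n", resp.Capabilities) // prints: []
    }

A driver that did support staging would instead return a NodeServiceCapability carrying NodeServiceCapability_RPC_STAGE_UNSTAGE_VOLUME, and the kubelet would then call NodeStageVolume against the globalmount path logged above before publishing into the pod.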
Nov 25 15:07:26 crc kubenswrapper[4806]: I1125 15:07:26.316523 4806 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b52f35bc-1e19-4723-887d-52d507ef5a3d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b52f35bc-1e19-4723-887d-52d507ef5a3d\") pod \"minio\" (UID: \"b9d5707d-0270-4ec3-9e31-18b7ce617a3c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/37b0d54aff67e6a9e79c53576685477fdf6408d19c7814d7e9dc1e2d335b6cd4/globalmount\"" pod="minio-dev/minio" Nov 25 15:07:26 crc kubenswrapper[4806]: I1125 15:07:26.336427 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccn2b\" (UniqueName: \"kubernetes.io/projected/b9d5707d-0270-4ec3-9e31-18b7ce617a3c-kube-api-access-ccn2b\") pod \"minio\" (UID: \"b9d5707d-0270-4ec3-9e31-18b7ce617a3c\") " pod="minio-dev/minio" Nov 25 15:07:26 crc kubenswrapper[4806]: I1125 15:07:26.353525 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b52f35bc-1e19-4723-887d-52d507ef5a3d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b52f35bc-1e19-4723-887d-52d507ef5a3d\") pod \"minio\" (UID: \"b9d5707d-0270-4ec3-9e31-18b7ce617a3c\") " pod="minio-dev/minio" Nov 25 15:07:26 crc kubenswrapper[4806]: I1125 15:07:26.547487 4806 generic.go:334] "Generic (PLEG): container finished" podID="a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2" containerID="19117bc5f03a4619685b4a8abb9064af48baf7a815cc33350730d7e22edb5e12" exitCode=0 Nov 25 15:07:26 crc kubenswrapper[4806]: I1125 15:07:26.547545 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/142e5edc705b0443a978f15b9d74db4e11d2db1d26a61e7f8c9e49e3038wvfb" event={"ID":"a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2","Type":"ContainerDied","Data":"19117bc5f03a4619685b4a8abb9064af48baf7a815cc33350730d7e22edb5e12"} Nov 25 15:07:26 crc kubenswrapper[4806]: I1125 15:07:26.627835 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Nov 25 15:07:26 crc kubenswrapper[4806]: I1125 15:07:26.863719 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Nov 25 15:07:27 crc kubenswrapper[4806]: I1125 15:07:27.559255 4806 generic.go:334] "Generic (PLEG): container finished" podID="a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2" containerID="5502f9308be12f1165a73ffd50c34aa6f133bb4fac9c3f145031872ca3da2313" exitCode=0 Nov 25 15:07:27 crc kubenswrapper[4806]: I1125 15:07:27.559350 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/142e5edc705b0443a978f15b9d74db4e11d2db1d26a61e7f8c9e49e3038wvfb" event={"ID":"a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2","Type":"ContainerDied","Data":"5502f9308be12f1165a73ffd50c34aa6f133bb4fac9c3f145031872ca3da2313"} Nov 25 15:07:27 crc kubenswrapper[4806]: I1125 15:07:27.561506 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"b9d5707d-0270-4ec3-9e31-18b7ce617a3c","Type":"ContainerStarted","Data":"c92e9e0bb6c725af4702895ae28f4a70dbe0b810ed3982775b44db940d56f1ba"} Nov 25 15:07:28 crc kubenswrapper[4806]: I1125 15:07:28.835683 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-98xbk"] Nov 25 15:07:28 crc kubenswrapper[4806]: I1125 15:07:28.841328 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-98xbk" Nov 25 15:07:28 crc kubenswrapper[4806]: I1125 15:07:28.849716 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-98xbk"] Nov 25 15:07:28 crc kubenswrapper[4806]: I1125 15:07:28.955964 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e7bcbea-1eb0-4658-a091-5b6eb3c85814-catalog-content\") pod \"certified-operators-98xbk\" (UID: \"5e7bcbea-1eb0-4658-a091-5b6eb3c85814\") " pod="openshift-marketplace/certified-operators-98xbk" Nov 25 15:07:28 crc kubenswrapper[4806]: I1125 15:07:28.956076 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5lht\" (UniqueName: \"kubernetes.io/projected/5e7bcbea-1eb0-4658-a091-5b6eb3c85814-kube-api-access-v5lht\") pod \"certified-operators-98xbk\" (UID: \"5e7bcbea-1eb0-4658-a091-5b6eb3c85814\") " pod="openshift-marketplace/certified-operators-98xbk" Nov 25 15:07:28 crc kubenswrapper[4806]: I1125 15:07:28.956138 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e7bcbea-1eb0-4658-a091-5b6eb3c85814-utilities\") pod \"certified-operators-98xbk\" (UID: \"5e7bcbea-1eb0-4658-a091-5b6eb3c85814\") " pod="openshift-marketplace/certified-operators-98xbk" Nov 25 15:07:29 crc kubenswrapper[4806]: I1125 15:07:29.057666 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e7bcbea-1eb0-4658-a091-5b6eb3c85814-catalog-content\") pod \"certified-operators-98xbk\" (UID: \"5e7bcbea-1eb0-4658-a091-5b6eb3c85814\") " pod="openshift-marketplace/certified-operators-98xbk" Nov 25 15:07:29 crc kubenswrapper[4806]: I1125 15:07:29.057762 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5lht\" (UniqueName: \"kubernetes.io/projected/5e7bcbea-1eb0-4658-a091-5b6eb3c85814-kube-api-access-v5lht\") pod \"certified-operators-98xbk\" (UID: \"5e7bcbea-1eb0-4658-a091-5b6eb3c85814\") " pod="openshift-marketplace/certified-operators-98xbk" Nov 25 15:07:29 crc kubenswrapper[4806]: I1125 15:07:29.057826 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e7bcbea-1eb0-4658-a091-5b6eb3c85814-utilities\") pod \"certified-operators-98xbk\" (UID: \"5e7bcbea-1eb0-4658-a091-5b6eb3c85814\") " pod="openshift-marketplace/certified-operators-98xbk" Nov 25 15:07:29 crc kubenswrapper[4806]: I1125 15:07:29.058558 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e7bcbea-1eb0-4658-a091-5b6eb3c85814-catalog-content\") pod \"certified-operators-98xbk\" (UID: \"5e7bcbea-1eb0-4658-a091-5b6eb3c85814\") " pod="openshift-marketplace/certified-operators-98xbk" Nov 25 15:07:29 crc kubenswrapper[4806]: I1125 15:07:29.058618 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e7bcbea-1eb0-4658-a091-5b6eb3c85814-utilities\") pod \"certified-operators-98xbk\" (UID: \"5e7bcbea-1eb0-4658-a091-5b6eb3c85814\") " pod="openshift-marketplace/certified-operators-98xbk" Nov 25 15:07:29 crc kubenswrapper[4806]: I1125 15:07:29.093337 4806 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-v5lht\" (UniqueName: \"kubernetes.io/projected/5e7bcbea-1eb0-4658-a091-5b6eb3c85814-kube-api-access-v5lht\") pod \"certified-operators-98xbk\" (UID: \"5e7bcbea-1eb0-4658-a091-5b6eb3c85814\") " pod="openshift-marketplace/certified-operators-98xbk" Nov 25 15:07:29 crc kubenswrapper[4806]: I1125 15:07:29.174050 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-98xbk" Nov 25 15:07:29 crc kubenswrapper[4806]: I1125 15:07:29.873764 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/142e5edc705b0443a978f15b9d74db4e11d2db1d26a61e7f8c9e49e3038wvfb" Nov 25 15:07:30 crc kubenswrapper[4806]: I1125 15:07:30.072203 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xntf7\" (UniqueName: \"kubernetes.io/projected/a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2-kube-api-access-xntf7\") pod \"a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2\" (UID: \"a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2\") " Nov 25 15:07:30 crc kubenswrapper[4806]: I1125 15:07:30.072402 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2-util\") pod \"a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2\" (UID: \"a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2\") " Nov 25 15:07:30 crc kubenswrapper[4806]: I1125 15:07:30.072442 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2-bundle\") pod \"a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2\" (UID: \"a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2\") " Nov 25 15:07:30 crc kubenswrapper[4806]: I1125 15:07:30.073935 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2-bundle" (OuterVolumeSpecName: "bundle") pod "a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2" (UID: "a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:07:30 crc kubenswrapper[4806]: I1125 15:07:30.085682 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2-util" (OuterVolumeSpecName: "util") pod "a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2" (UID: "a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:07:30 crc kubenswrapper[4806]: I1125 15:07:30.087065 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2-kube-api-access-xntf7" (OuterVolumeSpecName: "kube-api-access-xntf7") pod "a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2" (UID: "a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2"). InnerVolumeSpecName "kube-api-access-xntf7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:07:30 crc kubenswrapper[4806]: I1125 15:07:30.174236 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xntf7\" (UniqueName: \"kubernetes.io/projected/a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2-kube-api-access-xntf7\") on node \"crc\" DevicePath \"\"" Nov 25 15:07:30 crc kubenswrapper[4806]: I1125 15:07:30.174294 4806 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2-util\") on node \"crc\" DevicePath \"\"" Nov 25 15:07:30 crc kubenswrapper[4806]: I1125 15:07:30.174308 4806 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:07:30 crc kubenswrapper[4806]: I1125 15:07:30.578825 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-98xbk"] Nov 25 15:07:30 crc kubenswrapper[4806]: I1125 15:07:30.608741 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/142e5edc705b0443a978f15b9d74db4e11d2db1d26a61e7f8c9e49e3038wvfb" event={"ID":"a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2","Type":"ContainerDied","Data":"362b2d3b362f48df7c3dfb596a8c9ecfb6c382d87a535e70ffa5f229bbc93849"} Nov 25 15:07:30 crc kubenswrapper[4806]: I1125 15:07:30.608795 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="362b2d3b362f48df7c3dfb596a8c9ecfb6c382d87a535e70ffa5f229bbc93849" Nov 25 15:07:30 crc kubenswrapper[4806]: I1125 15:07:30.608902 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/142e5edc705b0443a978f15b9d74db4e11d2db1d26a61e7f8c9e49e3038wvfb" Nov 25 15:07:31 crc kubenswrapper[4806]: I1125 15:07:31.615533 4806 generic.go:334] "Generic (PLEG): container finished" podID="5e7bcbea-1eb0-4658-a091-5b6eb3c85814" containerID="3646ac16fbf9f230d5b182661d3cee751d7ca4c708ec7a9ae98d356e2fa697fc" exitCode=0 Nov 25 15:07:31 crc kubenswrapper[4806]: I1125 15:07:31.615648 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-98xbk" event={"ID":"5e7bcbea-1eb0-4658-a091-5b6eb3c85814","Type":"ContainerDied","Data":"3646ac16fbf9f230d5b182661d3cee751d7ca4c708ec7a9ae98d356e2fa697fc"} Nov 25 15:07:31 crc kubenswrapper[4806]: I1125 15:07:31.616202 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-98xbk" event={"ID":"5e7bcbea-1eb0-4658-a091-5b6eb3c85814","Type":"ContainerStarted","Data":"a72e4583502405a735cbe1fb7b38380ee4cc0d0294a428a305caf71689cd7e53"} Nov 25 15:07:31 crc kubenswrapper[4806]: I1125 15:07:31.619364 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"b9d5707d-0270-4ec3-9e31-18b7ce617a3c","Type":"ContainerStarted","Data":"d4779489107bc3974bf303d238b28abd5b0ae29021aafd3c80e73841e67ed9bc"} Nov 25 15:07:31 crc kubenswrapper[4806]: I1125 15:07:31.655489 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=5.005003049 podStartE2EDuration="8.655461773s" podCreationTimestamp="2025-11-25 15:07:23 +0000 UTC" firstStartedPulling="2025-11-25 15:07:26.88110294 +0000 UTC m=+879.533245351" lastFinishedPulling="2025-11-25 15:07:30.531561654 +0000 UTC m=+883.183704075" observedRunningTime="2025-11-25 15:07:31.653723514 +0000 UTC m=+884.305865945" 
watchObservedRunningTime="2025-11-25 15:07:31.655461773 +0000 UTC m=+884.307604174" Nov 25 15:07:32 crc kubenswrapper[4806]: I1125 15:07:32.628540 4806 generic.go:334] "Generic (PLEG): container finished" podID="5e7bcbea-1eb0-4658-a091-5b6eb3c85814" containerID="5636634d45118ea5e267be7f2f8f7072e4ad99b132194693da517e165064d275" exitCode=0 Nov 25 15:07:32 crc kubenswrapper[4806]: I1125 15:07:32.628598 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-98xbk" event={"ID":"5e7bcbea-1eb0-4658-a091-5b6eb3c85814","Type":"ContainerDied","Data":"5636634d45118ea5e267be7f2f8f7072e4ad99b132194693da517e165064d275"} Nov 25 15:07:33 crc kubenswrapper[4806]: I1125 15:07:33.638424 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-98xbk" event={"ID":"5e7bcbea-1eb0-4658-a091-5b6eb3c85814","Type":"ContainerStarted","Data":"58054c6f09107d71ff74e37e1b78c417fce30fa9ae8956dc8c565f776bba57b2"} Nov 25 15:07:33 crc kubenswrapper[4806]: I1125 15:07:33.664137 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-98xbk" podStartSLOduration=4.22014473 podStartE2EDuration="5.664104949s" podCreationTimestamp="2025-11-25 15:07:28 +0000 UTC" firstStartedPulling="2025-11-25 15:07:31.61945255 +0000 UTC m=+884.271594961" lastFinishedPulling="2025-11-25 15:07:33.063412769 +0000 UTC m=+885.715555180" observedRunningTime="2025-11-25 15:07:33.655698822 +0000 UTC m=+886.307841233" watchObservedRunningTime="2025-11-25 15:07:33.664104949 +0000 UTC m=+886.316247380" Nov 25 15:07:35 crc kubenswrapper[4806]: I1125 15:07:35.908887 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/03c6e0f8bd928fdcaaf530d547155f7eef49635d3e29724a094c0ab69494r6p"] Nov 25 15:07:35 crc kubenswrapper[4806]: E1125 15:07:35.909610 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2" containerName="pull" Nov 25 15:07:35 crc kubenswrapper[4806]: I1125 15:07:35.909627 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2" containerName="pull" Nov 25 15:07:35 crc kubenswrapper[4806]: E1125 15:07:35.909647 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2" containerName="util" Nov 25 15:07:35 crc kubenswrapper[4806]: I1125 15:07:35.909653 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2" containerName="util" Nov 25 15:07:35 crc kubenswrapper[4806]: E1125 15:07:35.909663 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2" containerName="extract" Nov 25 15:07:35 crc kubenswrapper[4806]: I1125 15:07:35.909670 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2" containerName="extract" Nov 25 15:07:35 crc kubenswrapper[4806]: I1125 15:07:35.909790 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2" containerName="extract" Nov 25 15:07:35 crc kubenswrapper[4806]: I1125 15:07:35.910744 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/03c6e0f8bd928fdcaaf530d547155f7eef49635d3e29724a094c0ab69494r6p" Nov 25 15:07:35 crc kubenswrapper[4806]: I1125 15:07:35.915684 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 25 15:07:35 crc kubenswrapper[4806]: I1125 15:07:35.921088 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/03c6e0f8bd928fdcaaf530d547155f7eef49635d3e29724a094c0ab69494r6p"] Nov 25 15:07:36 crc kubenswrapper[4806]: I1125 15:07:36.075021 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95jn6\" (UniqueName: \"kubernetes.io/projected/93f1ff8c-0309-4dc7-b711-20157db2f5f3-kube-api-access-95jn6\") pod \"03c6e0f8bd928fdcaaf530d547155f7eef49635d3e29724a094c0ab69494r6p\" (UID: \"93f1ff8c-0309-4dc7-b711-20157db2f5f3\") " pod="openshift-marketplace/03c6e0f8bd928fdcaaf530d547155f7eef49635d3e29724a094c0ab69494r6p" Nov 25 15:07:36 crc kubenswrapper[4806]: I1125 15:07:36.075125 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/93f1ff8c-0309-4dc7-b711-20157db2f5f3-util\") pod \"03c6e0f8bd928fdcaaf530d547155f7eef49635d3e29724a094c0ab69494r6p\" (UID: \"93f1ff8c-0309-4dc7-b711-20157db2f5f3\") " pod="openshift-marketplace/03c6e0f8bd928fdcaaf530d547155f7eef49635d3e29724a094c0ab69494r6p" Nov 25 15:07:36 crc kubenswrapper[4806]: I1125 15:07:36.075187 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/93f1ff8c-0309-4dc7-b711-20157db2f5f3-bundle\") pod \"03c6e0f8bd928fdcaaf530d547155f7eef49635d3e29724a094c0ab69494r6p\" (UID: \"93f1ff8c-0309-4dc7-b711-20157db2f5f3\") " pod="openshift-marketplace/03c6e0f8bd928fdcaaf530d547155f7eef49635d3e29724a094c0ab69494r6p" Nov 25 15:07:36 crc kubenswrapper[4806]: I1125 15:07:36.177171 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95jn6\" (UniqueName: \"kubernetes.io/projected/93f1ff8c-0309-4dc7-b711-20157db2f5f3-kube-api-access-95jn6\") pod \"03c6e0f8bd928fdcaaf530d547155f7eef49635d3e29724a094c0ab69494r6p\" (UID: \"93f1ff8c-0309-4dc7-b711-20157db2f5f3\") " pod="openshift-marketplace/03c6e0f8bd928fdcaaf530d547155f7eef49635d3e29724a094c0ab69494r6p" Nov 25 15:07:36 crc kubenswrapper[4806]: I1125 15:07:36.177271 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/93f1ff8c-0309-4dc7-b711-20157db2f5f3-util\") pod \"03c6e0f8bd928fdcaaf530d547155f7eef49635d3e29724a094c0ab69494r6p\" (UID: \"93f1ff8c-0309-4dc7-b711-20157db2f5f3\") " pod="openshift-marketplace/03c6e0f8bd928fdcaaf530d547155f7eef49635d3e29724a094c0ab69494r6p" Nov 25 15:07:36 crc kubenswrapper[4806]: I1125 15:07:36.177299 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/93f1ff8c-0309-4dc7-b711-20157db2f5f3-bundle\") pod \"03c6e0f8bd928fdcaaf530d547155f7eef49635d3e29724a094c0ab69494r6p\" (UID: \"93f1ff8c-0309-4dc7-b711-20157db2f5f3\") " pod="openshift-marketplace/03c6e0f8bd928fdcaaf530d547155f7eef49635d3e29724a094c0ab69494r6p" Nov 25 15:07:36 crc kubenswrapper[4806]: I1125 15:07:36.178046 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/93f1ff8c-0309-4dc7-b711-20157db2f5f3-bundle\") pod \"03c6e0f8bd928fdcaaf530d547155f7eef49635d3e29724a094c0ab69494r6p\" (UID: \"93f1ff8c-0309-4dc7-b711-20157db2f5f3\") " pod="openshift-marketplace/03c6e0f8bd928fdcaaf530d547155f7eef49635d3e29724a094c0ab69494r6p" Nov 25 15:07:36 crc kubenswrapper[4806]: I1125 15:07:36.178133 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/93f1ff8c-0309-4dc7-b711-20157db2f5f3-util\") pod \"03c6e0f8bd928fdcaaf530d547155f7eef49635d3e29724a094c0ab69494r6p\" (UID: \"93f1ff8c-0309-4dc7-b711-20157db2f5f3\") " pod="openshift-marketplace/03c6e0f8bd928fdcaaf530d547155f7eef49635d3e29724a094c0ab69494r6p" Nov 25 15:07:36 crc kubenswrapper[4806]: I1125 15:07:36.204486 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95jn6\" (UniqueName: \"kubernetes.io/projected/93f1ff8c-0309-4dc7-b711-20157db2f5f3-kube-api-access-95jn6\") pod \"03c6e0f8bd928fdcaaf530d547155f7eef49635d3e29724a094c0ab69494r6p\" (UID: \"93f1ff8c-0309-4dc7-b711-20157db2f5f3\") " pod="openshift-marketplace/03c6e0f8bd928fdcaaf530d547155f7eef49635d3e29724a094c0ab69494r6p" Nov 25 15:07:36 crc kubenswrapper[4806]: I1125 15:07:36.230745 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/03c6e0f8bd928fdcaaf530d547155f7eef49635d3e29724a094c0ab69494r6p" Nov 25 15:07:36 crc kubenswrapper[4806]: I1125 15:07:36.737788 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/03c6e0f8bd928fdcaaf530d547155f7eef49635d3e29724a094c0ab69494r6p"] Nov 25 15:07:36 crc kubenswrapper[4806]: I1125 15:07:36.969813 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-8b74fc76b-wflwn"] Nov 25 15:07:36 crc kubenswrapper[4806]: I1125 15:07:36.971422 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-8b74fc76b-wflwn" Nov 25 15:07:36 crc kubenswrapper[4806]: I1125 15:07:36.977741 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-ckwfj" Nov 25 15:07:36 crc kubenswrapper[4806]: I1125 15:07:36.978097 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Nov 25 15:07:36 crc kubenswrapper[4806]: I1125 15:07:36.978234 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Nov 25 15:07:36 crc kubenswrapper[4806]: I1125 15:07:36.978564 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Nov 25 15:07:36 crc kubenswrapper[4806]: I1125 15:07:36.978598 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Nov 25 15:07:36 crc kubenswrapper[4806]: I1125 15:07:36.978758 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Nov 25 15:07:37 crc kubenswrapper[4806]: I1125 15:07:37.009168 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-8b74fc76b-wflwn"] Nov 25 15:07:37 crc kubenswrapper[4806]: I1125 15:07:37.092965 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/2942b82c-e706-4f3e-ad7d-cef384dbcfba-manager-config\") pod \"loki-operator-controller-manager-8b74fc76b-wflwn\" (UID: \"2942b82c-e706-4f3e-ad7d-cef384dbcfba\") " pod="openshift-operators-redhat/loki-operator-controller-manager-8b74fc76b-wflwn" Nov 25 15:07:37 crc kubenswrapper[4806]: I1125 15:07:37.093018 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2942b82c-e706-4f3e-ad7d-cef384dbcfba-apiservice-cert\") pod \"loki-operator-controller-manager-8b74fc76b-wflwn\" (UID: \"2942b82c-e706-4f3e-ad7d-cef384dbcfba\") " pod="openshift-operators-redhat/loki-operator-controller-manager-8b74fc76b-wflwn" Nov 25 15:07:37 crc kubenswrapper[4806]: I1125 15:07:37.093051 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qk64\" (UniqueName: \"kubernetes.io/projected/2942b82c-e706-4f3e-ad7d-cef384dbcfba-kube-api-access-6qk64\") pod \"loki-operator-controller-manager-8b74fc76b-wflwn\" (UID: \"2942b82c-e706-4f3e-ad7d-cef384dbcfba\") " pod="openshift-operators-redhat/loki-operator-controller-manager-8b74fc76b-wflwn" Nov 25 15:07:37 crc kubenswrapper[4806]: I1125 15:07:37.093151 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2942b82c-e706-4f3e-ad7d-cef384dbcfba-webhook-cert\") pod \"loki-operator-controller-manager-8b74fc76b-wflwn\" (UID: \"2942b82c-e706-4f3e-ad7d-cef384dbcfba\") " pod="openshift-operators-redhat/loki-operator-controller-manager-8b74fc76b-wflwn" Nov 25 15:07:37 crc kubenswrapper[4806]: I1125 15:07:37.093171 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/2942b82c-e706-4f3e-ad7d-cef384dbcfba-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-8b74fc76b-wflwn\" (UID: \"2942b82c-e706-4f3e-ad7d-cef384dbcfba\") " pod="openshift-operators-redhat/loki-operator-controller-manager-8b74fc76b-wflwn" Nov 25 15:07:37 crc kubenswrapper[4806]: I1125 15:07:37.195367 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/2942b82c-e706-4f3e-ad7d-cef384dbcfba-manager-config\") pod \"loki-operator-controller-manager-8b74fc76b-wflwn\" (UID: \"2942b82c-e706-4f3e-ad7d-cef384dbcfba\") " pod="openshift-operators-redhat/loki-operator-controller-manager-8b74fc76b-wflwn" Nov 25 15:07:37 crc kubenswrapper[4806]: I1125 15:07:37.195894 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2942b82c-e706-4f3e-ad7d-cef384dbcfba-apiservice-cert\") pod \"loki-operator-controller-manager-8b74fc76b-wflwn\" (UID: \"2942b82c-e706-4f3e-ad7d-cef384dbcfba\") " pod="openshift-operators-redhat/loki-operator-controller-manager-8b74fc76b-wflwn" Nov 25 15:07:37 crc kubenswrapper[4806]: I1125 15:07:37.195919 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qk64\" (UniqueName: \"kubernetes.io/projected/2942b82c-e706-4f3e-ad7d-cef384dbcfba-kube-api-access-6qk64\") pod \"loki-operator-controller-manager-8b74fc76b-wflwn\" (UID: \"2942b82c-e706-4f3e-ad7d-cef384dbcfba\") " pod="openshift-operators-redhat/loki-operator-controller-manager-8b74fc76b-wflwn" Nov 25 15:07:37 crc kubenswrapper[4806]: I1125 15:07:37.195971 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2942b82c-e706-4f3e-ad7d-cef384dbcfba-webhook-cert\") pod \"loki-operator-controller-manager-8b74fc76b-wflwn\" (UID: \"2942b82c-e706-4f3e-ad7d-cef384dbcfba\") " pod="openshift-operators-redhat/loki-operator-controller-manager-8b74fc76b-wflwn" Nov 25 15:07:37 crc kubenswrapper[4806]: I1125 15:07:37.195998 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2942b82c-e706-4f3e-ad7d-cef384dbcfba-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-8b74fc76b-wflwn\" (UID: \"2942b82c-e706-4f3e-ad7d-cef384dbcfba\") " pod="openshift-operators-redhat/loki-operator-controller-manager-8b74fc76b-wflwn" Nov 25 15:07:37 crc kubenswrapper[4806]: I1125 15:07:37.197394 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/2942b82c-e706-4f3e-ad7d-cef384dbcfba-manager-config\") pod \"loki-operator-controller-manager-8b74fc76b-wflwn\" (UID: \"2942b82c-e706-4f3e-ad7d-cef384dbcfba\") " pod="openshift-operators-redhat/loki-operator-controller-manager-8b74fc76b-wflwn" Nov 25 15:07:37 crc kubenswrapper[4806]: I1125 15:07:37.204840 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2942b82c-e706-4f3e-ad7d-cef384dbcfba-webhook-cert\") pod \"loki-operator-controller-manager-8b74fc76b-wflwn\" (UID: \"2942b82c-e706-4f3e-ad7d-cef384dbcfba\") " pod="openshift-operators-redhat/loki-operator-controller-manager-8b74fc76b-wflwn" Nov 25 15:07:37 crc kubenswrapper[4806]: I1125 15:07:37.205382 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2942b82c-e706-4f3e-ad7d-cef384dbcfba-apiservice-cert\") pod \"loki-operator-controller-manager-8b74fc76b-wflwn\" (UID: \"2942b82c-e706-4f3e-ad7d-cef384dbcfba\") " pod="openshift-operators-redhat/loki-operator-controller-manager-8b74fc76b-wflwn" Nov 25 15:07:37 crc kubenswrapper[4806]: I1125 15:07:37.211230 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2942b82c-e706-4f3e-ad7d-cef384dbcfba-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-8b74fc76b-wflwn\" (UID: \"2942b82c-e706-4f3e-ad7d-cef384dbcfba\") " pod="openshift-operators-redhat/loki-operator-controller-manager-8b74fc76b-wflwn" Nov 25 15:07:37 crc kubenswrapper[4806]: I1125 15:07:37.224410 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qk64\" (UniqueName: \"kubernetes.io/projected/2942b82c-e706-4f3e-ad7d-cef384dbcfba-kube-api-access-6qk64\") pod \"loki-operator-controller-manager-8b74fc76b-wflwn\" (UID: \"2942b82c-e706-4f3e-ad7d-cef384dbcfba\") " pod="openshift-operators-redhat/loki-operator-controller-manager-8b74fc76b-wflwn" Nov 25 15:07:37 crc kubenswrapper[4806]: I1125 15:07:37.295561 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-8b74fc76b-wflwn" Nov 25 15:07:37 crc kubenswrapper[4806]: I1125 15:07:37.577762 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-8b74fc76b-wflwn"] Nov 25 15:07:37 crc kubenswrapper[4806]: W1125 15:07:37.584750 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2942b82c_e706_4f3e_ad7d_cef384dbcfba.slice/crio-4d77bf48cdb486f1fd9d48d73161570354a9eb893bc2821a3ff7f51ffa852172 WatchSource:0}: Error finding container 4d77bf48cdb486f1fd9d48d73161570354a9eb893bc2821a3ff7f51ffa852172: Status 404 returned error can't find the container with id 4d77bf48cdb486f1fd9d48d73161570354a9eb893bc2821a3ff7f51ffa852172 Nov 25 15:07:37 crc kubenswrapper[4806]: I1125 15:07:37.667892 4806 generic.go:334] "Generic (PLEG): container finished" podID="93f1ff8c-0309-4dc7-b711-20157db2f5f3" containerID="87a889542720afe0f36d52b5746dc4494c5695b57886dc0a1f0d21af48290d03" exitCode=0 Nov 25 15:07:37 crc kubenswrapper[4806]: I1125 15:07:37.667953 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/03c6e0f8bd928fdcaaf530d547155f7eef49635d3e29724a094c0ab69494r6p" event={"ID":"93f1ff8c-0309-4dc7-b711-20157db2f5f3","Type":"ContainerDied","Data":"87a889542720afe0f36d52b5746dc4494c5695b57886dc0a1f0d21af48290d03"} Nov 25 15:07:37 crc kubenswrapper[4806]: I1125 15:07:37.668014 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/03c6e0f8bd928fdcaaf530d547155f7eef49635d3e29724a094c0ab69494r6p" event={"ID":"93f1ff8c-0309-4dc7-b711-20157db2f5f3","Type":"ContainerStarted","Data":"392a58c382a0f69bdfea4c986049dabbe90218e78932b8a76270ed99b6e5b782"} Nov 25 15:07:37 crc kubenswrapper[4806]: I1125 15:07:37.670912 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-8b74fc76b-wflwn" event={"ID":"2942b82c-e706-4f3e-ad7d-cef384dbcfba","Type":"ContainerStarted","Data":"4d77bf48cdb486f1fd9d48d73161570354a9eb893bc2821a3ff7f51ffa852172"} Nov 25 15:07:39 crc kubenswrapper[4806]: I1125 
15:07:39.178201 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-98xbk" Nov 25 15:07:39 crc kubenswrapper[4806]: I1125 15:07:39.178845 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-98xbk" Nov 25 15:07:39 crc kubenswrapper[4806]: I1125 15:07:39.227559 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-98xbk" Nov 25 15:07:39 crc kubenswrapper[4806]: I1125 15:07:39.735817 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-98xbk" Nov 25 15:07:40 crc kubenswrapper[4806]: I1125 15:07:40.697953 4806 generic.go:334] "Generic (PLEG): container finished" podID="93f1ff8c-0309-4dc7-b711-20157db2f5f3" containerID="472997443ed2fd8a1c8d4a24cb447c3292bdf3d5168b1facaf3768b84d1a314a" exitCode=0 Nov 25 15:07:40 crc kubenswrapper[4806]: I1125 15:07:40.698063 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/03c6e0f8bd928fdcaaf530d547155f7eef49635d3e29724a094c0ab69494r6p" event={"ID":"93f1ff8c-0309-4dc7-b711-20157db2f5f3","Type":"ContainerDied","Data":"472997443ed2fd8a1c8d4a24cb447c3292bdf3d5168b1facaf3768b84d1a314a"} Nov 25 15:07:41 crc kubenswrapper[4806]: I1125 15:07:41.619795 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-98xbk"] Nov 25 15:07:41 crc kubenswrapper[4806]: I1125 15:07:41.708209 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-98xbk" podUID="5e7bcbea-1eb0-4658-a091-5b6eb3c85814" containerName="registry-server" containerID="cri-o://58054c6f09107d71ff74e37e1b78c417fce30fa9ae8956dc8c565f776bba57b2" gracePeriod=2 Nov 25 15:07:42 crc kubenswrapper[4806]: I1125 15:07:42.493003 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-98xbk" Nov 25 15:07:42 crc kubenswrapper[4806]: I1125 15:07:42.690668 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5lht\" (UniqueName: \"kubernetes.io/projected/5e7bcbea-1eb0-4658-a091-5b6eb3c85814-kube-api-access-v5lht\") pod \"5e7bcbea-1eb0-4658-a091-5b6eb3c85814\" (UID: \"5e7bcbea-1eb0-4658-a091-5b6eb3c85814\") " Nov 25 15:07:42 crc kubenswrapper[4806]: I1125 15:07:42.690785 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e7bcbea-1eb0-4658-a091-5b6eb3c85814-catalog-content\") pod \"5e7bcbea-1eb0-4658-a091-5b6eb3c85814\" (UID: \"5e7bcbea-1eb0-4658-a091-5b6eb3c85814\") " Nov 25 15:07:42 crc kubenswrapper[4806]: I1125 15:07:42.690834 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e7bcbea-1eb0-4658-a091-5b6eb3c85814-utilities\") pod \"5e7bcbea-1eb0-4658-a091-5b6eb3c85814\" (UID: \"5e7bcbea-1eb0-4658-a091-5b6eb3c85814\") " Nov 25 15:07:42 crc kubenswrapper[4806]: I1125 15:07:42.692236 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e7bcbea-1eb0-4658-a091-5b6eb3c85814-utilities" (OuterVolumeSpecName: "utilities") pod "5e7bcbea-1eb0-4658-a091-5b6eb3c85814" (UID: "5e7bcbea-1eb0-4658-a091-5b6eb3c85814"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:07:42 crc kubenswrapper[4806]: I1125 15:07:42.696162 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e7bcbea-1eb0-4658-a091-5b6eb3c85814-kube-api-access-v5lht" (OuterVolumeSpecName: "kube-api-access-v5lht") pod "5e7bcbea-1eb0-4658-a091-5b6eb3c85814" (UID: "5e7bcbea-1eb0-4658-a091-5b6eb3c85814"). InnerVolumeSpecName "kube-api-access-v5lht". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:07:42 crc kubenswrapper[4806]: I1125 15:07:42.723896 4806 generic.go:334] "Generic (PLEG): container finished" podID="5e7bcbea-1eb0-4658-a091-5b6eb3c85814" containerID="58054c6f09107d71ff74e37e1b78c417fce30fa9ae8956dc8c565f776bba57b2" exitCode=0 Nov 25 15:07:42 crc kubenswrapper[4806]: I1125 15:07:42.723983 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-98xbk" Nov 25 15:07:42 crc kubenswrapper[4806]: I1125 15:07:42.724025 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-98xbk" event={"ID":"5e7bcbea-1eb0-4658-a091-5b6eb3c85814","Type":"ContainerDied","Data":"58054c6f09107d71ff74e37e1b78c417fce30fa9ae8956dc8c565f776bba57b2"} Nov 25 15:07:42 crc kubenswrapper[4806]: I1125 15:07:42.724063 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-98xbk" event={"ID":"5e7bcbea-1eb0-4658-a091-5b6eb3c85814","Type":"ContainerDied","Data":"a72e4583502405a735cbe1fb7b38380ee4cc0d0294a428a305caf71689cd7e53"} Nov 25 15:07:42 crc kubenswrapper[4806]: I1125 15:07:42.724087 4806 scope.go:117] "RemoveContainer" containerID="58054c6f09107d71ff74e37e1b78c417fce30fa9ae8956dc8c565f776bba57b2" Nov 25 15:07:42 crc kubenswrapper[4806]: I1125 15:07:42.728097 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-8b74fc76b-wflwn" event={"ID":"2942b82c-e706-4f3e-ad7d-cef384dbcfba","Type":"ContainerStarted","Data":"4c1fe9b300a2e9b48e618190aa65a845fa8989130facea3c5502c99b1f61ddbc"} Nov 25 15:07:42 crc kubenswrapper[4806]: I1125 15:07:42.731017 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/03c6e0f8bd928fdcaaf530d547155f7eef49635d3e29724a094c0ab69494r6p" event={"ID":"93f1ff8c-0309-4dc7-b711-20157db2f5f3","Type":"ContainerStarted","Data":"65deb75123d0854a5fb55d675f2e17902d5c9b307eaff2d263b9703249037f05"} Nov 25 15:07:42 crc kubenswrapper[4806]: I1125 15:07:42.741721 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e7bcbea-1eb0-4658-a091-5b6eb3c85814-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5e7bcbea-1eb0-4658-a091-5b6eb3c85814" (UID: "5e7bcbea-1eb0-4658-a091-5b6eb3c85814"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:07:42 crc kubenswrapper[4806]: I1125 15:07:42.747935 4806 scope.go:117] "RemoveContainer" containerID="5636634d45118ea5e267be7f2f8f7072e4ad99b132194693da517e165064d275" Nov 25 15:07:42 crc kubenswrapper[4806]: I1125 15:07:42.756627 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/03c6e0f8bd928fdcaaf530d547155f7eef49635d3e29724a094c0ab69494r6p" podStartSLOduration=5.897467247 podStartE2EDuration="7.756592943s" podCreationTimestamp="2025-11-25 15:07:35 +0000 UTC" firstStartedPulling="2025-11-25 15:07:37.669541647 +0000 UTC m=+890.321684058" lastFinishedPulling="2025-11-25 15:07:39.528667343 +0000 UTC m=+892.180809754" observedRunningTime="2025-11-25 15:07:42.756206172 +0000 UTC m=+895.408348573" watchObservedRunningTime="2025-11-25 15:07:42.756592943 +0000 UTC m=+895.408735354" Nov 25 15:07:42 crc kubenswrapper[4806]: I1125 15:07:42.784636 4806 scope.go:117] "RemoveContainer" containerID="3646ac16fbf9f230d5b182661d3cee751d7ca4c708ec7a9ae98d356e2fa697fc" Nov 25 15:07:42 crc kubenswrapper[4806]: I1125 15:07:42.792307 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e7bcbea-1eb0-4658-a091-5b6eb3c85814-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 15:07:42 crc kubenswrapper[4806]: I1125 15:07:42.792523 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e7bcbea-1eb0-4658-a091-5b6eb3c85814-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 15:07:42 crc kubenswrapper[4806]: I1125 15:07:42.792594 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5lht\" (UniqueName: \"kubernetes.io/projected/5e7bcbea-1eb0-4658-a091-5b6eb3c85814-kube-api-access-v5lht\") on node \"crc\" DevicePath \"\"" Nov 25 15:07:42 crc kubenswrapper[4806]: I1125 15:07:42.811216 4806 scope.go:117] "RemoveContainer" containerID="58054c6f09107d71ff74e37e1b78c417fce30fa9ae8956dc8c565f776bba57b2" Nov 25 15:07:42 crc kubenswrapper[4806]: E1125 15:07:42.811966 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58054c6f09107d71ff74e37e1b78c417fce30fa9ae8956dc8c565f776bba57b2\": container with ID starting with 58054c6f09107d71ff74e37e1b78c417fce30fa9ae8956dc8c565f776bba57b2 not found: ID does not exist" containerID="58054c6f09107d71ff74e37e1b78c417fce30fa9ae8956dc8c565f776bba57b2" Nov 25 15:07:42 crc kubenswrapper[4806]: I1125 15:07:42.812004 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58054c6f09107d71ff74e37e1b78c417fce30fa9ae8956dc8c565f776bba57b2"} err="failed to get container status \"58054c6f09107d71ff74e37e1b78c417fce30fa9ae8956dc8c565f776bba57b2\": rpc error: code = NotFound desc = could not find container \"58054c6f09107d71ff74e37e1b78c417fce30fa9ae8956dc8c565f776bba57b2\": container with ID starting with 58054c6f09107d71ff74e37e1b78c417fce30fa9ae8956dc8c565f776bba57b2 not found: ID does not exist" Nov 25 15:07:42 crc kubenswrapper[4806]: I1125 15:07:42.812035 4806 scope.go:117] "RemoveContainer" containerID="5636634d45118ea5e267be7f2f8f7072e4ad99b132194693da517e165064d275" Nov 25 15:07:42 crc kubenswrapper[4806]: E1125 15:07:42.812941 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"5636634d45118ea5e267be7f2f8f7072e4ad99b132194693da517e165064d275\": container with ID starting with 5636634d45118ea5e267be7f2f8f7072e4ad99b132194693da517e165064d275 not found: ID does not exist" containerID="5636634d45118ea5e267be7f2f8f7072e4ad99b132194693da517e165064d275" Nov 25 15:07:42 crc kubenswrapper[4806]: I1125 15:07:42.812971 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5636634d45118ea5e267be7f2f8f7072e4ad99b132194693da517e165064d275"} err="failed to get container status \"5636634d45118ea5e267be7f2f8f7072e4ad99b132194693da517e165064d275\": rpc error: code = NotFound desc = could not find container \"5636634d45118ea5e267be7f2f8f7072e4ad99b132194693da517e165064d275\": container with ID starting with 5636634d45118ea5e267be7f2f8f7072e4ad99b132194693da517e165064d275 not found: ID does not exist" Nov 25 15:07:42 crc kubenswrapper[4806]: I1125 15:07:42.812997 4806 scope.go:117] "RemoveContainer" containerID="3646ac16fbf9f230d5b182661d3cee751d7ca4c708ec7a9ae98d356e2fa697fc" Nov 25 15:07:42 crc kubenswrapper[4806]: E1125 15:07:42.813445 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3646ac16fbf9f230d5b182661d3cee751d7ca4c708ec7a9ae98d356e2fa697fc\": container with ID starting with 3646ac16fbf9f230d5b182661d3cee751d7ca4c708ec7a9ae98d356e2fa697fc not found: ID does not exist" containerID="3646ac16fbf9f230d5b182661d3cee751d7ca4c708ec7a9ae98d356e2fa697fc" Nov 25 15:07:42 crc kubenswrapper[4806]: I1125 15:07:42.813470 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3646ac16fbf9f230d5b182661d3cee751d7ca4c708ec7a9ae98d356e2fa697fc"} err="failed to get container status \"3646ac16fbf9f230d5b182661d3cee751d7ca4c708ec7a9ae98d356e2fa697fc\": rpc error: code = NotFound desc = could not find container \"3646ac16fbf9f230d5b182661d3cee751d7ca4c708ec7a9ae98d356e2fa697fc\": container with ID starting with 3646ac16fbf9f230d5b182661d3cee751d7ca4c708ec7a9ae98d356e2fa697fc not found: ID does not exist" Nov 25 15:07:43 crc kubenswrapper[4806]: I1125 15:07:43.067761 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-98xbk"] Nov 25 15:07:43 crc kubenswrapper[4806]: I1125 15:07:43.073340 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-98xbk"] Nov 25 15:07:43 crc kubenswrapper[4806]: I1125 15:07:43.741915 4806 generic.go:334] "Generic (PLEG): container finished" podID="93f1ff8c-0309-4dc7-b711-20157db2f5f3" containerID="65deb75123d0854a5fb55d675f2e17902d5c9b307eaff2d263b9703249037f05" exitCode=0 Nov 25 15:07:43 crc kubenswrapper[4806]: I1125 15:07:43.741991 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/03c6e0f8bd928fdcaaf530d547155f7eef49635d3e29724a094c0ab69494r6p" event={"ID":"93f1ff8c-0309-4dc7-b711-20157db2f5f3","Type":"ContainerDied","Data":"65deb75123d0854a5fb55d675f2e17902d5c9b307eaff2d263b9703249037f05"} Nov 25 15:07:44 crc kubenswrapper[4806]: I1125 15:07:44.097726 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e7bcbea-1eb0-4658-a091-5b6eb3c85814" path="/var/lib/kubelet/pods/5e7bcbea-1eb0-4658-a091-5b6eb3c85814/volumes" Nov 25 15:07:45 crc kubenswrapper[4806]: I1125 15:07:45.022653 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/03c6e0f8bd928fdcaaf530d547155f7eef49635d3e29724a094c0ab69494r6p" Nov 25 15:07:45 crc kubenswrapper[4806]: I1125 15:07:45.127309 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/93f1ff8c-0309-4dc7-b711-20157db2f5f3-util\") pod \"93f1ff8c-0309-4dc7-b711-20157db2f5f3\" (UID: \"93f1ff8c-0309-4dc7-b711-20157db2f5f3\") " Nov 25 15:07:45 crc kubenswrapper[4806]: I1125 15:07:45.127415 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95jn6\" (UniqueName: \"kubernetes.io/projected/93f1ff8c-0309-4dc7-b711-20157db2f5f3-kube-api-access-95jn6\") pod \"93f1ff8c-0309-4dc7-b711-20157db2f5f3\" (UID: \"93f1ff8c-0309-4dc7-b711-20157db2f5f3\") " Nov 25 15:07:45 crc kubenswrapper[4806]: I1125 15:07:45.127517 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/93f1ff8c-0309-4dc7-b711-20157db2f5f3-bundle\") pod \"93f1ff8c-0309-4dc7-b711-20157db2f5f3\" (UID: \"93f1ff8c-0309-4dc7-b711-20157db2f5f3\") " Nov 25 15:07:45 crc kubenswrapper[4806]: I1125 15:07:45.134217 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93f1ff8c-0309-4dc7-b711-20157db2f5f3-bundle" (OuterVolumeSpecName: "bundle") pod "93f1ff8c-0309-4dc7-b711-20157db2f5f3" (UID: "93f1ff8c-0309-4dc7-b711-20157db2f5f3"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:07:45 crc kubenswrapper[4806]: I1125 15:07:45.137750 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93f1ff8c-0309-4dc7-b711-20157db2f5f3-kube-api-access-95jn6" (OuterVolumeSpecName: "kube-api-access-95jn6") pod "93f1ff8c-0309-4dc7-b711-20157db2f5f3" (UID: "93f1ff8c-0309-4dc7-b711-20157db2f5f3"). InnerVolumeSpecName "kube-api-access-95jn6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:07:45 crc kubenswrapper[4806]: I1125 15:07:45.146876 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93f1ff8c-0309-4dc7-b711-20157db2f5f3-util" (OuterVolumeSpecName: "util") pod "93f1ff8c-0309-4dc7-b711-20157db2f5f3" (UID: "93f1ff8c-0309-4dc7-b711-20157db2f5f3"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:07:45 crc kubenswrapper[4806]: I1125 15:07:45.229353 4806 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/93f1ff8c-0309-4dc7-b711-20157db2f5f3-util\") on node \"crc\" DevicePath \"\"" Nov 25 15:07:45 crc kubenswrapper[4806]: I1125 15:07:45.229405 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95jn6\" (UniqueName: \"kubernetes.io/projected/93f1ff8c-0309-4dc7-b711-20157db2f5f3-kube-api-access-95jn6\") on node \"crc\" DevicePath \"\"" Nov 25 15:07:45 crc kubenswrapper[4806]: I1125 15:07:45.229424 4806 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/93f1ff8c-0309-4dc7-b711-20157db2f5f3-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:07:45 crc kubenswrapper[4806]: I1125 15:07:45.765526 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/03c6e0f8bd928fdcaaf530d547155f7eef49635d3e29724a094c0ab69494r6p" event={"ID":"93f1ff8c-0309-4dc7-b711-20157db2f5f3","Type":"ContainerDied","Data":"392a58c382a0f69bdfea4c986049dabbe90218e78932b8a76270ed99b6e5b782"} Nov 25 15:07:45 crc kubenswrapper[4806]: I1125 15:07:45.765592 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="392a58c382a0f69bdfea4c986049dabbe90218e78932b8a76270ed99b6e5b782" Nov 25 15:07:45 crc kubenswrapper[4806]: I1125 15:07:45.765714 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/03c6e0f8bd928fdcaaf530d547155f7eef49635d3e29724a094c0ab69494r6p" Nov 25 15:07:47 crc kubenswrapper[4806]: I1125 15:07:47.026946 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2vg5j"] Nov 25 15:07:47 crc kubenswrapper[4806]: E1125 15:07:47.027293 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e7bcbea-1eb0-4658-a091-5b6eb3c85814" containerName="extract-utilities" Nov 25 15:07:47 crc kubenswrapper[4806]: I1125 15:07:47.027310 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e7bcbea-1eb0-4658-a091-5b6eb3c85814" containerName="extract-utilities" Nov 25 15:07:47 crc kubenswrapper[4806]: E1125 15:07:47.027348 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93f1ff8c-0309-4dc7-b711-20157db2f5f3" containerName="util" Nov 25 15:07:47 crc kubenswrapper[4806]: I1125 15:07:47.027356 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="93f1ff8c-0309-4dc7-b711-20157db2f5f3" containerName="util" Nov 25 15:07:47 crc kubenswrapper[4806]: E1125 15:07:47.027376 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e7bcbea-1eb0-4658-a091-5b6eb3c85814" containerName="extract-content" Nov 25 15:07:47 crc kubenswrapper[4806]: I1125 15:07:47.027385 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e7bcbea-1eb0-4658-a091-5b6eb3c85814" containerName="extract-content" Nov 25 15:07:47 crc kubenswrapper[4806]: E1125 15:07:47.027398 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93f1ff8c-0309-4dc7-b711-20157db2f5f3" containerName="extract" Nov 25 15:07:47 crc kubenswrapper[4806]: I1125 15:07:47.027406 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="93f1ff8c-0309-4dc7-b711-20157db2f5f3" containerName="extract" Nov 25 15:07:47 crc kubenswrapper[4806]: E1125 15:07:47.027420 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93f1ff8c-0309-4dc7-b711-20157db2f5f3" 
containerName="pull" Nov 25 15:07:47 crc kubenswrapper[4806]: I1125 15:07:47.027430 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="93f1ff8c-0309-4dc7-b711-20157db2f5f3" containerName="pull" Nov 25 15:07:47 crc kubenswrapper[4806]: E1125 15:07:47.027443 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e7bcbea-1eb0-4658-a091-5b6eb3c85814" containerName="registry-server" Nov 25 15:07:47 crc kubenswrapper[4806]: I1125 15:07:47.027451 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e7bcbea-1eb0-4658-a091-5b6eb3c85814" containerName="registry-server" Nov 25 15:07:47 crc kubenswrapper[4806]: I1125 15:07:47.027584 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="93f1ff8c-0309-4dc7-b711-20157db2f5f3" containerName="extract" Nov 25 15:07:47 crc kubenswrapper[4806]: I1125 15:07:47.027610 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e7bcbea-1eb0-4658-a091-5b6eb3c85814" containerName="registry-server" Nov 25 15:07:47 crc kubenswrapper[4806]: I1125 15:07:47.029006 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2vg5j" Nov 25 15:07:47 crc kubenswrapper[4806]: I1125 15:07:47.054296 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2vg5j"] Nov 25 15:07:47 crc kubenswrapper[4806]: I1125 15:07:47.161702 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/837f1056-8b30-4619-ba6c-d2654657ce9e-catalog-content\") pod \"redhat-marketplace-2vg5j\" (UID: \"837f1056-8b30-4619-ba6c-d2654657ce9e\") " pod="openshift-marketplace/redhat-marketplace-2vg5j" Nov 25 15:07:47 crc kubenswrapper[4806]: I1125 15:07:47.161774 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8pgt\" (UniqueName: \"kubernetes.io/projected/837f1056-8b30-4619-ba6c-d2654657ce9e-kube-api-access-t8pgt\") pod \"redhat-marketplace-2vg5j\" (UID: \"837f1056-8b30-4619-ba6c-d2654657ce9e\") " pod="openshift-marketplace/redhat-marketplace-2vg5j" Nov 25 15:07:47 crc kubenswrapper[4806]: I1125 15:07:47.162012 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/837f1056-8b30-4619-ba6c-d2654657ce9e-utilities\") pod \"redhat-marketplace-2vg5j\" (UID: \"837f1056-8b30-4619-ba6c-d2654657ce9e\") " pod="openshift-marketplace/redhat-marketplace-2vg5j" Nov 25 15:07:47 crc kubenswrapper[4806]: I1125 15:07:47.263341 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/837f1056-8b30-4619-ba6c-d2654657ce9e-utilities\") pod \"redhat-marketplace-2vg5j\" (UID: \"837f1056-8b30-4619-ba6c-d2654657ce9e\") " pod="openshift-marketplace/redhat-marketplace-2vg5j" Nov 25 15:07:47 crc kubenswrapper[4806]: I1125 15:07:47.263461 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/837f1056-8b30-4619-ba6c-d2654657ce9e-catalog-content\") pod \"redhat-marketplace-2vg5j\" (UID: \"837f1056-8b30-4619-ba6c-d2654657ce9e\") " pod="openshift-marketplace/redhat-marketplace-2vg5j" Nov 25 15:07:47 crc kubenswrapper[4806]: I1125 15:07:47.263480 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8pgt\" 
(UniqueName: \"kubernetes.io/projected/837f1056-8b30-4619-ba6c-d2654657ce9e-kube-api-access-t8pgt\") pod \"redhat-marketplace-2vg5j\" (UID: \"837f1056-8b30-4619-ba6c-d2654657ce9e\") " pod="openshift-marketplace/redhat-marketplace-2vg5j" Nov 25 15:07:47 crc kubenswrapper[4806]: I1125 15:07:47.264142 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/837f1056-8b30-4619-ba6c-d2654657ce9e-utilities\") pod \"redhat-marketplace-2vg5j\" (UID: \"837f1056-8b30-4619-ba6c-d2654657ce9e\") " pod="openshift-marketplace/redhat-marketplace-2vg5j" Nov 25 15:07:47 crc kubenswrapper[4806]: I1125 15:07:47.264230 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/837f1056-8b30-4619-ba6c-d2654657ce9e-catalog-content\") pod \"redhat-marketplace-2vg5j\" (UID: \"837f1056-8b30-4619-ba6c-d2654657ce9e\") " pod="openshift-marketplace/redhat-marketplace-2vg5j" Nov 25 15:07:47 crc kubenswrapper[4806]: I1125 15:07:47.291440 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8pgt\" (UniqueName: \"kubernetes.io/projected/837f1056-8b30-4619-ba6c-d2654657ce9e-kube-api-access-t8pgt\") pod \"redhat-marketplace-2vg5j\" (UID: \"837f1056-8b30-4619-ba6c-d2654657ce9e\") " pod="openshift-marketplace/redhat-marketplace-2vg5j" Nov 25 15:07:47 crc kubenswrapper[4806]: I1125 15:07:47.360487 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2vg5j" Nov 25 15:07:49 crc kubenswrapper[4806]: I1125 15:07:49.255232 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2vg5j"] Nov 25 15:07:49 crc kubenswrapper[4806]: W1125 15:07:49.271281 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod837f1056_8b30_4619_ba6c_d2654657ce9e.slice/crio-72e58783a1d78163e3433b4730457dd53d7e8524d77f6f55c578e428ac88066d WatchSource:0}: Error finding container 72e58783a1d78163e3433b4730457dd53d7e8524d77f6f55c578e428ac88066d: Status 404 returned error can't find the container with id 72e58783a1d78163e3433b4730457dd53d7e8524d77f6f55c578e428ac88066d Nov 25 15:07:49 crc kubenswrapper[4806]: I1125 15:07:49.794236 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-8b74fc76b-wflwn" event={"ID":"2942b82c-e706-4f3e-ad7d-cef384dbcfba","Type":"ContainerStarted","Data":"311ad084f7b5d7a6bfd04acf6dd2898909c74c934eb0d6f19274bf90bb1f798f"} Nov 25 15:07:49 crc kubenswrapper[4806]: I1125 15:07:49.795350 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-8b74fc76b-wflwn" Nov 25 15:07:49 crc kubenswrapper[4806]: I1125 15:07:49.797822 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-8b74fc76b-wflwn" Nov 25 15:07:49 crc kubenswrapper[4806]: I1125 15:07:49.797924 4806 generic.go:334] "Generic (PLEG): container finished" podID="837f1056-8b30-4619-ba6c-d2654657ce9e" containerID="fed22516cc8621ccae0edd10d69deb07bb69856c3b6a558bfa0f4262e2f28176" exitCode=0 Nov 25 15:07:49 crc kubenswrapper[4806]: I1125 15:07:49.797963 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2vg5j" 
event={"ID":"837f1056-8b30-4619-ba6c-d2654657ce9e","Type":"ContainerDied","Data":"fed22516cc8621ccae0edd10d69deb07bb69856c3b6a558bfa0f4262e2f28176"} Nov 25 15:07:49 crc kubenswrapper[4806]: I1125 15:07:49.797986 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2vg5j" event={"ID":"837f1056-8b30-4619-ba6c-d2654657ce9e","Type":"ContainerStarted","Data":"72e58783a1d78163e3433b4730457dd53d7e8524d77f6f55c578e428ac88066d"} Nov 25 15:07:49 crc kubenswrapper[4806]: I1125 15:07:49.833217 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-8b74fc76b-wflwn" podStartSLOduration=2.237590879 podStartE2EDuration="13.833186847s" podCreationTimestamp="2025-11-25 15:07:36 +0000 UTC" firstStartedPulling="2025-11-25 15:07:37.587379184 +0000 UTC m=+890.239521595" lastFinishedPulling="2025-11-25 15:07:49.182975152 +0000 UTC m=+901.835117563" observedRunningTime="2025-11-25 15:07:49.829973046 +0000 UTC m=+902.482115487" watchObservedRunningTime="2025-11-25 15:07:49.833186847 +0000 UTC m=+902.485329268" Nov 25 15:07:53 crc kubenswrapper[4806]: I1125 15:07:53.829080 4806 generic.go:334] "Generic (PLEG): container finished" podID="837f1056-8b30-4619-ba6c-d2654657ce9e" containerID="e1b09e2ea1b46b12bea6c03fb999300e5e3ffae81f3dde901e747a9f30ff86ac" exitCode=0 Nov 25 15:07:53 crc kubenswrapper[4806]: I1125 15:07:53.829229 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2vg5j" event={"ID":"837f1056-8b30-4619-ba6c-d2654657ce9e","Type":"ContainerDied","Data":"e1b09e2ea1b46b12bea6c03fb999300e5e3ffae81f3dde901e747a9f30ff86ac"} Nov 25 15:07:54 crc kubenswrapper[4806]: I1125 15:07:54.844705 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2vg5j" event={"ID":"837f1056-8b30-4619-ba6c-d2654657ce9e","Type":"ContainerStarted","Data":"6bae735aefe323e0e0f254e6f20b88360999bb1491c13348051c6bc6c2d35737"} Nov 25 15:07:57 crc kubenswrapper[4806]: I1125 15:07:57.360735 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2vg5j" Nov 25 15:07:57 crc kubenswrapper[4806]: I1125 15:07:57.361112 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2vg5j" Nov 25 15:07:57 crc kubenswrapper[4806]: I1125 15:07:57.409764 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2vg5j" Nov 25 15:07:57 crc kubenswrapper[4806]: I1125 15:07:57.430824 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2vg5j" podStartSLOduration=6.002558118 podStartE2EDuration="10.430796866s" podCreationTimestamp="2025-11-25 15:07:47 +0000 UTC" firstStartedPulling="2025-11-25 15:07:49.800928609 +0000 UTC m=+902.453071050" lastFinishedPulling="2025-11-25 15:07:54.229167387 +0000 UTC m=+906.881309798" observedRunningTime="2025-11-25 15:07:54.880532144 +0000 UTC m=+907.532674595" watchObservedRunningTime="2025-11-25 15:07:57.430796866 +0000 UTC m=+910.082939297" Nov 25 15:08:00 crc kubenswrapper[4806]: I1125 15:08:00.229072 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8ktgj"] Nov 25 15:08:00 crc kubenswrapper[4806]: I1125 15:08:00.231149 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8ktgj" Nov 25 15:08:00 crc kubenswrapper[4806]: I1125 15:08:00.246113 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8ktgj"] Nov 25 15:08:00 crc kubenswrapper[4806]: I1125 15:08:00.358064 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0faa6223-1c84-482f-a1e1-ed802ed73e1f-catalog-content\") pod \"community-operators-8ktgj\" (UID: \"0faa6223-1c84-482f-a1e1-ed802ed73e1f\") " pod="openshift-marketplace/community-operators-8ktgj" Nov 25 15:08:00 crc kubenswrapper[4806]: I1125 15:08:00.358515 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56zcc\" (UniqueName: \"kubernetes.io/projected/0faa6223-1c84-482f-a1e1-ed802ed73e1f-kube-api-access-56zcc\") pod \"community-operators-8ktgj\" (UID: \"0faa6223-1c84-482f-a1e1-ed802ed73e1f\") " pod="openshift-marketplace/community-operators-8ktgj" Nov 25 15:08:00 crc kubenswrapper[4806]: I1125 15:08:00.358538 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0faa6223-1c84-482f-a1e1-ed802ed73e1f-utilities\") pod \"community-operators-8ktgj\" (UID: \"0faa6223-1c84-482f-a1e1-ed802ed73e1f\") " pod="openshift-marketplace/community-operators-8ktgj" Nov 25 15:08:00 crc kubenswrapper[4806]: I1125 15:08:00.460330 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0faa6223-1c84-482f-a1e1-ed802ed73e1f-catalog-content\") pod \"community-operators-8ktgj\" (UID: \"0faa6223-1c84-482f-a1e1-ed802ed73e1f\") " pod="openshift-marketplace/community-operators-8ktgj" Nov 25 15:08:00 crc kubenswrapper[4806]: I1125 15:08:00.460431 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56zcc\" (UniqueName: \"kubernetes.io/projected/0faa6223-1c84-482f-a1e1-ed802ed73e1f-kube-api-access-56zcc\") pod \"community-operators-8ktgj\" (UID: \"0faa6223-1c84-482f-a1e1-ed802ed73e1f\") " pod="openshift-marketplace/community-operators-8ktgj" Nov 25 15:08:00 crc kubenswrapper[4806]: I1125 15:08:00.460465 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0faa6223-1c84-482f-a1e1-ed802ed73e1f-utilities\") pod \"community-operators-8ktgj\" (UID: \"0faa6223-1c84-482f-a1e1-ed802ed73e1f\") " pod="openshift-marketplace/community-operators-8ktgj" Nov 25 15:08:00 crc kubenswrapper[4806]: I1125 15:08:00.461148 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0faa6223-1c84-482f-a1e1-ed802ed73e1f-catalog-content\") pod \"community-operators-8ktgj\" (UID: \"0faa6223-1c84-482f-a1e1-ed802ed73e1f\") " pod="openshift-marketplace/community-operators-8ktgj" Nov 25 15:08:00 crc kubenswrapper[4806]: I1125 15:08:00.461222 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0faa6223-1c84-482f-a1e1-ed802ed73e1f-utilities\") pod \"community-operators-8ktgj\" (UID: \"0faa6223-1c84-482f-a1e1-ed802ed73e1f\") " pod="openshift-marketplace/community-operators-8ktgj" Nov 25 15:08:00 crc kubenswrapper[4806]: I1125 15:08:00.486003 4806 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-56zcc\" (UniqueName: \"kubernetes.io/projected/0faa6223-1c84-482f-a1e1-ed802ed73e1f-kube-api-access-56zcc\") pod \"community-operators-8ktgj\" (UID: \"0faa6223-1c84-482f-a1e1-ed802ed73e1f\") " pod="openshift-marketplace/community-operators-8ktgj" Nov 25 15:08:00 crc kubenswrapper[4806]: I1125 15:08:00.547704 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8ktgj" Nov 25 15:08:00 crc kubenswrapper[4806]: I1125 15:08:00.831068 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8ktgj"] Nov 25 15:08:00 crc kubenswrapper[4806]: I1125 15:08:00.887396 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8ktgj" event={"ID":"0faa6223-1c84-482f-a1e1-ed802ed73e1f","Type":"ContainerStarted","Data":"0ef04ca49aca5d66b19fe3bb3022616881e5717cae43aeaf069aee75bd50e33a"} Nov 25 15:08:01 crc kubenswrapper[4806]: I1125 15:08:01.897560 4806 generic.go:334] "Generic (PLEG): container finished" podID="0faa6223-1c84-482f-a1e1-ed802ed73e1f" containerID="3229a62918812322fc0cd9ddf002268de69cb7a348a44af8156b51e5cc9ed436" exitCode=0 Nov 25 15:08:01 crc kubenswrapper[4806]: I1125 15:08:01.897756 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8ktgj" event={"ID":"0faa6223-1c84-482f-a1e1-ed802ed73e1f","Type":"ContainerDied","Data":"3229a62918812322fc0cd9ddf002268de69cb7a348a44af8156b51e5cc9ed436"} Nov 25 15:08:02 crc kubenswrapper[4806]: I1125 15:08:02.908616 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8ktgj" event={"ID":"0faa6223-1c84-482f-a1e1-ed802ed73e1f","Type":"ContainerStarted","Data":"8f85d47676836ac9cef0accb6938f661771d4ede0e29b45aed5f33720e2ccec7"} Nov 25 15:08:03 crc kubenswrapper[4806]: I1125 15:08:03.919112 4806 generic.go:334] "Generic (PLEG): container finished" podID="0faa6223-1c84-482f-a1e1-ed802ed73e1f" containerID="8f85d47676836ac9cef0accb6938f661771d4ede0e29b45aed5f33720e2ccec7" exitCode=0 Nov 25 15:08:03 crc kubenswrapper[4806]: I1125 15:08:03.919237 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8ktgj" event={"ID":"0faa6223-1c84-482f-a1e1-ed802ed73e1f","Type":"ContainerDied","Data":"8f85d47676836ac9cef0accb6938f661771d4ede0e29b45aed5f33720e2ccec7"} Nov 25 15:08:04 crc kubenswrapper[4806]: I1125 15:08:04.931824 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8ktgj" event={"ID":"0faa6223-1c84-482f-a1e1-ed802ed73e1f","Type":"ContainerStarted","Data":"20f996bd907b0261d2bee07225d6e7bb4843e4cb5c32ba61d02b1ac68a7096eb"} Nov 25 15:08:04 crc kubenswrapper[4806]: I1125 15:08:04.961479 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8ktgj" podStartSLOduration=2.495006962 podStartE2EDuration="4.961457828s" podCreationTimestamp="2025-11-25 15:08:00 +0000 UTC" firstStartedPulling="2025-11-25 15:08:01.9009335 +0000 UTC m=+914.553075911" lastFinishedPulling="2025-11-25 15:08:04.367384366 +0000 UTC m=+917.019526777" observedRunningTime="2025-11-25 15:08:04.957445816 +0000 UTC m=+917.609588247" watchObservedRunningTime="2025-11-25 15:08:04.961457828 +0000 UTC m=+917.613600239" Nov 25 15:08:07 crc kubenswrapper[4806]: I1125 15:08:07.406286 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/redhat-marketplace-2vg5j" Nov 25 15:08:07 crc kubenswrapper[4806]: I1125 15:08:07.458368 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2vg5j"] Nov 25 15:08:07 crc kubenswrapper[4806]: I1125 15:08:07.950539 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2vg5j" podUID="837f1056-8b30-4619-ba6c-d2654657ce9e" containerName="registry-server" containerID="cri-o://6bae735aefe323e0e0f254e6f20b88360999bb1491c13348051c6bc6c2d35737" gracePeriod=2 Nov 25 15:08:08 crc kubenswrapper[4806]: I1125 15:08:08.815798 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2vg5j" Nov 25 15:08:08 crc kubenswrapper[4806]: I1125 15:08:08.906358 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8pgt\" (UniqueName: \"kubernetes.io/projected/837f1056-8b30-4619-ba6c-d2654657ce9e-kube-api-access-t8pgt\") pod \"837f1056-8b30-4619-ba6c-d2654657ce9e\" (UID: \"837f1056-8b30-4619-ba6c-d2654657ce9e\") " Nov 25 15:08:08 crc kubenswrapper[4806]: I1125 15:08:08.906506 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/837f1056-8b30-4619-ba6c-d2654657ce9e-catalog-content\") pod \"837f1056-8b30-4619-ba6c-d2654657ce9e\" (UID: \"837f1056-8b30-4619-ba6c-d2654657ce9e\") " Nov 25 15:08:08 crc kubenswrapper[4806]: I1125 15:08:08.906675 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/837f1056-8b30-4619-ba6c-d2654657ce9e-utilities\") pod \"837f1056-8b30-4619-ba6c-d2654657ce9e\" (UID: \"837f1056-8b30-4619-ba6c-d2654657ce9e\") " Nov 25 15:08:08 crc kubenswrapper[4806]: I1125 15:08:08.907877 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/837f1056-8b30-4619-ba6c-d2654657ce9e-utilities" (OuterVolumeSpecName: "utilities") pod "837f1056-8b30-4619-ba6c-d2654657ce9e" (UID: "837f1056-8b30-4619-ba6c-d2654657ce9e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:08:08 crc kubenswrapper[4806]: I1125 15:08:08.914153 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/837f1056-8b30-4619-ba6c-d2654657ce9e-kube-api-access-t8pgt" (OuterVolumeSpecName: "kube-api-access-t8pgt") pod "837f1056-8b30-4619-ba6c-d2654657ce9e" (UID: "837f1056-8b30-4619-ba6c-d2654657ce9e"). InnerVolumeSpecName "kube-api-access-t8pgt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:08:08 crc kubenswrapper[4806]: I1125 15:08:08.925904 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/837f1056-8b30-4619-ba6c-d2654657ce9e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "837f1056-8b30-4619-ba6c-d2654657ce9e" (UID: "837f1056-8b30-4619-ba6c-d2654657ce9e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:08:08 crc kubenswrapper[4806]: I1125 15:08:08.957973 4806 generic.go:334] "Generic (PLEG): container finished" podID="837f1056-8b30-4619-ba6c-d2654657ce9e" containerID="6bae735aefe323e0e0f254e6f20b88360999bb1491c13348051c6bc6c2d35737" exitCode=0 Nov 25 15:08:08 crc kubenswrapper[4806]: I1125 15:08:08.958048 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2vg5j" Nov 25 15:08:08 crc kubenswrapper[4806]: I1125 15:08:08.958033 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2vg5j" event={"ID":"837f1056-8b30-4619-ba6c-d2654657ce9e","Type":"ContainerDied","Data":"6bae735aefe323e0e0f254e6f20b88360999bb1491c13348051c6bc6c2d35737"} Nov 25 15:08:08 crc kubenswrapper[4806]: I1125 15:08:08.958731 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2vg5j" event={"ID":"837f1056-8b30-4619-ba6c-d2654657ce9e","Type":"ContainerDied","Data":"72e58783a1d78163e3433b4730457dd53d7e8524d77f6f55c578e428ac88066d"} Nov 25 15:08:08 crc kubenswrapper[4806]: I1125 15:08:08.958826 4806 scope.go:117] "RemoveContainer" containerID="6bae735aefe323e0e0f254e6f20b88360999bb1491c13348051c6bc6c2d35737" Nov 25 15:08:08 crc kubenswrapper[4806]: I1125 15:08:08.977703 4806 scope.go:117] "RemoveContainer" containerID="e1b09e2ea1b46b12bea6c03fb999300e5e3ffae81f3dde901e747a9f30ff86ac" Nov 25 15:08:08 crc kubenswrapper[4806]: I1125 15:08:08.991809 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2vg5j"] Nov 25 15:08:08 crc kubenswrapper[4806]: I1125 15:08:08.995767 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2vg5j"] Nov 25 15:08:09 crc kubenswrapper[4806]: I1125 15:08:09.008231 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/837f1056-8b30-4619-ba6c-d2654657ce9e-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 15:08:09 crc kubenswrapper[4806]: I1125 15:08:09.008286 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8pgt\" (UniqueName: \"kubernetes.io/projected/837f1056-8b30-4619-ba6c-d2654657ce9e-kube-api-access-t8pgt\") on node \"crc\" DevicePath \"\"" Nov 25 15:08:09 crc kubenswrapper[4806]: I1125 15:08:09.008301 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/837f1056-8b30-4619-ba6c-d2654657ce9e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 15:08:09 crc kubenswrapper[4806]: I1125 15:08:09.013013 4806 scope.go:117] "RemoveContainer" containerID="fed22516cc8621ccae0edd10d69deb07bb69856c3b6a558bfa0f4262e2f28176" Nov 25 15:08:09 crc kubenswrapper[4806]: I1125 15:08:09.028030 4806 scope.go:117] "RemoveContainer" containerID="6bae735aefe323e0e0f254e6f20b88360999bb1491c13348051c6bc6c2d35737" Nov 25 15:08:09 crc kubenswrapper[4806]: E1125 15:08:09.028545 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bae735aefe323e0e0f254e6f20b88360999bb1491c13348051c6bc6c2d35737\": container with ID starting with 6bae735aefe323e0e0f254e6f20b88360999bb1491c13348051c6bc6c2d35737 not found: ID does not exist" containerID="6bae735aefe323e0e0f254e6f20b88360999bb1491c13348051c6bc6c2d35737" Nov 25 15:08:09 crc kubenswrapper[4806]: I1125 15:08:09.028662 4806 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bae735aefe323e0e0f254e6f20b88360999bb1491c13348051c6bc6c2d35737"} err="failed to get container status \"6bae735aefe323e0e0f254e6f20b88360999bb1491c13348051c6bc6c2d35737\": rpc error: code = NotFound desc = could not find container \"6bae735aefe323e0e0f254e6f20b88360999bb1491c13348051c6bc6c2d35737\": container with ID starting with 6bae735aefe323e0e0f254e6f20b88360999bb1491c13348051c6bc6c2d35737 not found: ID does not exist" Nov 25 15:08:09 crc kubenswrapper[4806]: I1125 15:08:09.028690 4806 scope.go:117] "RemoveContainer" containerID="e1b09e2ea1b46b12bea6c03fb999300e5e3ffae81f3dde901e747a9f30ff86ac" Nov 25 15:08:09 crc kubenswrapper[4806]: E1125 15:08:09.028935 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1b09e2ea1b46b12bea6c03fb999300e5e3ffae81f3dde901e747a9f30ff86ac\": container with ID starting with e1b09e2ea1b46b12bea6c03fb999300e5e3ffae81f3dde901e747a9f30ff86ac not found: ID does not exist" containerID="e1b09e2ea1b46b12bea6c03fb999300e5e3ffae81f3dde901e747a9f30ff86ac" Nov 25 15:08:09 crc kubenswrapper[4806]: I1125 15:08:09.028962 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1b09e2ea1b46b12bea6c03fb999300e5e3ffae81f3dde901e747a9f30ff86ac"} err="failed to get container status \"e1b09e2ea1b46b12bea6c03fb999300e5e3ffae81f3dde901e747a9f30ff86ac\": rpc error: code = NotFound desc = could not find container \"e1b09e2ea1b46b12bea6c03fb999300e5e3ffae81f3dde901e747a9f30ff86ac\": container with ID starting with e1b09e2ea1b46b12bea6c03fb999300e5e3ffae81f3dde901e747a9f30ff86ac not found: ID does not exist" Nov 25 15:08:09 crc kubenswrapper[4806]: I1125 15:08:09.028977 4806 scope.go:117] "RemoveContainer" containerID="fed22516cc8621ccae0edd10d69deb07bb69856c3b6a558bfa0f4262e2f28176" Nov 25 15:08:09 crc kubenswrapper[4806]: E1125 15:08:09.030030 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fed22516cc8621ccae0edd10d69deb07bb69856c3b6a558bfa0f4262e2f28176\": container with ID starting with fed22516cc8621ccae0edd10d69deb07bb69856c3b6a558bfa0f4262e2f28176 not found: ID does not exist" containerID="fed22516cc8621ccae0edd10d69deb07bb69856c3b6a558bfa0f4262e2f28176" Nov 25 15:08:09 crc kubenswrapper[4806]: I1125 15:08:09.030120 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fed22516cc8621ccae0edd10d69deb07bb69856c3b6a558bfa0f4262e2f28176"} err="failed to get container status \"fed22516cc8621ccae0edd10d69deb07bb69856c3b6a558bfa0f4262e2f28176\": rpc error: code = NotFound desc = could not find container \"fed22516cc8621ccae0edd10d69deb07bb69856c3b6a558bfa0f4262e2f28176\": container with ID starting with fed22516cc8621ccae0edd10d69deb07bb69856c3b6a558bfa0f4262e2f28176 not found: ID does not exist" Nov 25 15:08:10 crc kubenswrapper[4806]: I1125 15:08:10.096169 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="837f1056-8b30-4619-ba6c-d2654657ce9e" path="/var/lib/kubelet/pods/837f1056-8b30-4619-ba6c-d2654657ce9e/volumes" Nov 25 15:08:10 crc kubenswrapper[4806]: I1125 15:08:10.548044 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8ktgj" Nov 25 15:08:10 crc kubenswrapper[4806]: I1125 15:08:10.548152 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-8ktgj" Nov 25 15:08:10 crc kubenswrapper[4806]: I1125 15:08:10.594003 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8ktgj" Nov 25 15:08:11 crc kubenswrapper[4806]: I1125 15:08:11.019755 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8ktgj" Nov 25 15:08:11 crc kubenswrapper[4806]: I1125 15:08:11.823286 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8ktgj"] Nov 25 15:08:12 crc kubenswrapper[4806]: I1125 15:08:12.682069 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edvwgb"] Nov 25 15:08:12 crc kubenswrapper[4806]: E1125 15:08:12.682918 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="837f1056-8b30-4619-ba6c-d2654657ce9e" containerName="extract-content" Nov 25 15:08:12 crc kubenswrapper[4806]: I1125 15:08:12.682935 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="837f1056-8b30-4619-ba6c-d2654657ce9e" containerName="extract-content" Nov 25 15:08:12 crc kubenswrapper[4806]: E1125 15:08:12.682956 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="837f1056-8b30-4619-ba6c-d2654657ce9e" containerName="extract-utilities" Nov 25 15:08:12 crc kubenswrapper[4806]: I1125 15:08:12.682963 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="837f1056-8b30-4619-ba6c-d2654657ce9e" containerName="extract-utilities" Nov 25 15:08:12 crc kubenswrapper[4806]: E1125 15:08:12.682979 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="837f1056-8b30-4619-ba6c-d2654657ce9e" containerName="registry-server" Nov 25 15:08:12 crc kubenswrapper[4806]: I1125 15:08:12.682986 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="837f1056-8b30-4619-ba6c-d2654657ce9e" containerName="registry-server" Nov 25 15:08:12 crc kubenswrapper[4806]: I1125 15:08:12.683147 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="837f1056-8b30-4619-ba6c-d2654657ce9e" containerName="registry-server" Nov 25 15:08:12 crc kubenswrapper[4806]: I1125 15:08:12.684366 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edvwgb" Nov 25 15:08:12 crc kubenswrapper[4806]: I1125 15:08:12.686639 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 25 15:08:12 crc kubenswrapper[4806]: I1125 15:08:12.691798 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edvwgb"] Nov 25 15:08:12 crc kubenswrapper[4806]: I1125 15:08:12.767519 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcfbs\" (UniqueName: \"kubernetes.io/projected/1085d309-de3f-424f-b793-c89655f9fb2d-kube-api-access-wcfbs\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edvwgb\" (UID: \"1085d309-de3f-424f-b793-c89655f9fb2d\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edvwgb" Nov 25 15:08:12 crc kubenswrapper[4806]: I1125 15:08:12.768127 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1085d309-de3f-424f-b793-c89655f9fb2d-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edvwgb\" (UID: \"1085d309-de3f-424f-b793-c89655f9fb2d\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edvwgb" Nov 25 15:08:12 crc kubenswrapper[4806]: I1125 15:08:12.768262 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1085d309-de3f-424f-b793-c89655f9fb2d-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edvwgb\" (UID: \"1085d309-de3f-424f-b793-c89655f9fb2d\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edvwgb" Nov 25 15:08:12 crc kubenswrapper[4806]: I1125 15:08:12.869150 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcfbs\" (UniqueName: \"kubernetes.io/projected/1085d309-de3f-424f-b793-c89655f9fb2d-kube-api-access-wcfbs\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edvwgb\" (UID: \"1085d309-de3f-424f-b793-c89655f9fb2d\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edvwgb" Nov 25 15:08:12 crc kubenswrapper[4806]: I1125 15:08:12.869215 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1085d309-de3f-424f-b793-c89655f9fb2d-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edvwgb\" (UID: \"1085d309-de3f-424f-b793-c89655f9fb2d\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edvwgb" Nov 25 15:08:12 crc kubenswrapper[4806]: I1125 15:08:12.869261 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1085d309-de3f-424f-b793-c89655f9fb2d-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edvwgb\" (UID: \"1085d309-de3f-424f-b793-c89655f9fb2d\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edvwgb" Nov 25 15:08:12 crc kubenswrapper[4806]: I1125 15:08:12.869924 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/1085d309-de3f-424f-b793-c89655f9fb2d-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edvwgb\" (UID: \"1085d309-de3f-424f-b793-c89655f9fb2d\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edvwgb" Nov 25 15:08:12 crc kubenswrapper[4806]: I1125 15:08:12.870047 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1085d309-de3f-424f-b793-c89655f9fb2d-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edvwgb\" (UID: \"1085d309-de3f-424f-b793-c89655f9fb2d\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edvwgb" Nov 25 15:08:12 crc kubenswrapper[4806]: I1125 15:08:12.903398 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcfbs\" (UniqueName: \"kubernetes.io/projected/1085d309-de3f-424f-b793-c89655f9fb2d-kube-api-access-wcfbs\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edvwgb\" (UID: \"1085d309-de3f-424f-b793-c89655f9fb2d\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edvwgb" Nov 25 15:08:12 crc kubenswrapper[4806]: I1125 15:08:12.986374 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8ktgj" podUID="0faa6223-1c84-482f-a1e1-ed802ed73e1f" containerName="registry-server" containerID="cri-o://20f996bd907b0261d2bee07225d6e7bb4843e4cb5c32ba61d02b1ac68a7096eb" gracePeriod=2 Nov 25 15:08:13 crc kubenswrapper[4806]: I1125 15:08:13.065074 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edvwgb" Nov 25 15:08:13 crc kubenswrapper[4806]: I1125 15:08:13.318098 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edvwgb"] Nov 25 15:08:13 crc kubenswrapper[4806]: I1125 15:08:13.814888 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8ktgj" Nov 25 15:08:13 crc kubenswrapper[4806]: I1125 15:08:13.884277 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56zcc\" (UniqueName: \"kubernetes.io/projected/0faa6223-1c84-482f-a1e1-ed802ed73e1f-kube-api-access-56zcc\") pod \"0faa6223-1c84-482f-a1e1-ed802ed73e1f\" (UID: \"0faa6223-1c84-482f-a1e1-ed802ed73e1f\") " Nov 25 15:08:13 crc kubenswrapper[4806]: I1125 15:08:13.884414 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0faa6223-1c84-482f-a1e1-ed802ed73e1f-catalog-content\") pod \"0faa6223-1c84-482f-a1e1-ed802ed73e1f\" (UID: \"0faa6223-1c84-482f-a1e1-ed802ed73e1f\") " Nov 25 15:08:13 crc kubenswrapper[4806]: I1125 15:08:13.884475 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0faa6223-1c84-482f-a1e1-ed802ed73e1f-utilities\") pod \"0faa6223-1c84-482f-a1e1-ed802ed73e1f\" (UID: \"0faa6223-1c84-482f-a1e1-ed802ed73e1f\") " Nov 25 15:08:13 crc kubenswrapper[4806]: I1125 15:08:13.886448 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0faa6223-1c84-482f-a1e1-ed802ed73e1f-utilities" (OuterVolumeSpecName: "utilities") pod "0faa6223-1c84-482f-a1e1-ed802ed73e1f" (UID: "0faa6223-1c84-482f-a1e1-ed802ed73e1f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:08:13 crc kubenswrapper[4806]: I1125 15:08:13.887143 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0faa6223-1c84-482f-a1e1-ed802ed73e1f-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 15:08:13 crc kubenswrapper[4806]: I1125 15:08:13.891641 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0faa6223-1c84-482f-a1e1-ed802ed73e1f-kube-api-access-56zcc" (OuterVolumeSpecName: "kube-api-access-56zcc") pod "0faa6223-1c84-482f-a1e1-ed802ed73e1f" (UID: "0faa6223-1c84-482f-a1e1-ed802ed73e1f"). InnerVolumeSpecName "kube-api-access-56zcc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:08:13 crc kubenswrapper[4806]: I1125 15:08:13.935155 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0faa6223-1c84-482f-a1e1-ed802ed73e1f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0faa6223-1c84-482f-a1e1-ed802ed73e1f" (UID: "0faa6223-1c84-482f-a1e1-ed802ed73e1f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:08:13 crc kubenswrapper[4806]: I1125 15:08:13.988579 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56zcc\" (UniqueName: \"kubernetes.io/projected/0faa6223-1c84-482f-a1e1-ed802ed73e1f-kube-api-access-56zcc\") on node \"crc\" DevicePath \"\"" Nov 25 15:08:13 crc kubenswrapper[4806]: I1125 15:08:13.988619 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0faa6223-1c84-482f-a1e1-ed802ed73e1f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 15:08:13 crc kubenswrapper[4806]: I1125 15:08:13.994080 4806 generic.go:334] "Generic (PLEG): container finished" podID="1085d309-de3f-424f-b793-c89655f9fb2d" containerID="61193c6d3863cea0986b0464fe7c721a0080b4d25e670f0569d1052b06b9701b" exitCode=0 Nov 25 15:08:13 crc kubenswrapper[4806]: I1125 15:08:13.994144 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edvwgb" event={"ID":"1085d309-de3f-424f-b793-c89655f9fb2d","Type":"ContainerDied","Data":"61193c6d3863cea0986b0464fe7c721a0080b4d25e670f0569d1052b06b9701b"} Nov 25 15:08:13 crc kubenswrapper[4806]: I1125 15:08:13.994259 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edvwgb" event={"ID":"1085d309-de3f-424f-b793-c89655f9fb2d","Type":"ContainerStarted","Data":"4ccef4c7a36ba5fdd19d41cfaa6150f5bdb338b4ae3ffa89977047fb75f71344"} Nov 25 15:08:13 crc kubenswrapper[4806]: I1125 15:08:13.996979 4806 generic.go:334] "Generic (PLEG): container finished" podID="0faa6223-1c84-482f-a1e1-ed802ed73e1f" containerID="20f996bd907b0261d2bee07225d6e7bb4843e4cb5c32ba61d02b1ac68a7096eb" exitCode=0 Nov 25 15:08:13 crc kubenswrapper[4806]: I1125 15:08:13.997022 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8ktgj" Nov 25 15:08:13 crc kubenswrapper[4806]: I1125 15:08:13.997021 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8ktgj" event={"ID":"0faa6223-1c84-482f-a1e1-ed802ed73e1f","Type":"ContainerDied","Data":"20f996bd907b0261d2bee07225d6e7bb4843e4cb5c32ba61d02b1ac68a7096eb"} Nov 25 15:08:13 crc kubenswrapper[4806]: I1125 15:08:13.997133 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8ktgj" event={"ID":"0faa6223-1c84-482f-a1e1-ed802ed73e1f","Type":"ContainerDied","Data":"0ef04ca49aca5d66b19fe3bb3022616881e5717cae43aeaf069aee75bd50e33a"} Nov 25 15:08:13 crc kubenswrapper[4806]: I1125 15:08:13.997164 4806 scope.go:117] "RemoveContainer" containerID="20f996bd907b0261d2bee07225d6e7bb4843e4cb5c32ba61d02b1ac68a7096eb" Nov 25 15:08:14 crc kubenswrapper[4806]: I1125 15:08:14.031920 4806 scope.go:117] "RemoveContainer" containerID="8f85d47676836ac9cef0accb6938f661771d4ede0e29b45aed5f33720e2ccec7" Nov 25 15:08:14 crc kubenswrapper[4806]: I1125 15:08:14.032677 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8ktgj"] Nov 25 15:08:14 crc kubenswrapper[4806]: I1125 15:08:14.037854 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8ktgj"] Nov 25 15:08:14 crc kubenswrapper[4806]: I1125 15:08:14.053585 4806 scope.go:117] "RemoveContainer" containerID="3229a62918812322fc0cd9ddf002268de69cb7a348a44af8156b51e5cc9ed436" Nov 25 15:08:14 crc kubenswrapper[4806]: I1125 15:08:14.071257 4806 scope.go:117] "RemoveContainer" containerID="20f996bd907b0261d2bee07225d6e7bb4843e4cb5c32ba61d02b1ac68a7096eb" Nov 25 15:08:14 crc kubenswrapper[4806]: E1125 15:08:14.073166 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20f996bd907b0261d2bee07225d6e7bb4843e4cb5c32ba61d02b1ac68a7096eb\": container with ID starting with 20f996bd907b0261d2bee07225d6e7bb4843e4cb5c32ba61d02b1ac68a7096eb not found: ID does not exist" containerID="20f996bd907b0261d2bee07225d6e7bb4843e4cb5c32ba61d02b1ac68a7096eb" Nov 25 15:08:14 crc kubenswrapper[4806]: I1125 15:08:14.073270 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20f996bd907b0261d2bee07225d6e7bb4843e4cb5c32ba61d02b1ac68a7096eb"} err="failed to get container status \"20f996bd907b0261d2bee07225d6e7bb4843e4cb5c32ba61d02b1ac68a7096eb\": rpc error: code = NotFound desc = could not find container \"20f996bd907b0261d2bee07225d6e7bb4843e4cb5c32ba61d02b1ac68a7096eb\": container with ID starting with 20f996bd907b0261d2bee07225d6e7bb4843e4cb5c32ba61d02b1ac68a7096eb not found: ID does not exist" Nov 25 15:08:14 crc kubenswrapper[4806]: I1125 15:08:14.073336 4806 scope.go:117] "RemoveContainer" containerID="8f85d47676836ac9cef0accb6938f661771d4ede0e29b45aed5f33720e2ccec7" Nov 25 15:08:14 crc kubenswrapper[4806]: E1125 15:08:14.073950 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f85d47676836ac9cef0accb6938f661771d4ede0e29b45aed5f33720e2ccec7\": container with ID starting with 8f85d47676836ac9cef0accb6938f661771d4ede0e29b45aed5f33720e2ccec7 not found: ID does not exist" containerID="8f85d47676836ac9cef0accb6938f661771d4ede0e29b45aed5f33720e2ccec7" Nov 25 15:08:14 crc kubenswrapper[4806]: I1125 15:08:14.073973 4806 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f85d47676836ac9cef0accb6938f661771d4ede0e29b45aed5f33720e2ccec7"} err="failed to get container status \"8f85d47676836ac9cef0accb6938f661771d4ede0e29b45aed5f33720e2ccec7\": rpc error: code = NotFound desc = could not find container \"8f85d47676836ac9cef0accb6938f661771d4ede0e29b45aed5f33720e2ccec7\": container with ID starting with 8f85d47676836ac9cef0accb6938f661771d4ede0e29b45aed5f33720e2ccec7 not found: ID does not exist" Nov 25 15:08:14 crc kubenswrapper[4806]: I1125 15:08:14.073989 4806 scope.go:117] "RemoveContainer" containerID="3229a62918812322fc0cd9ddf002268de69cb7a348a44af8156b51e5cc9ed436" Nov 25 15:08:14 crc kubenswrapper[4806]: E1125 15:08:14.074660 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3229a62918812322fc0cd9ddf002268de69cb7a348a44af8156b51e5cc9ed436\": container with ID starting with 3229a62918812322fc0cd9ddf002268de69cb7a348a44af8156b51e5cc9ed436 not found: ID does not exist" containerID="3229a62918812322fc0cd9ddf002268de69cb7a348a44af8156b51e5cc9ed436" Nov 25 15:08:14 crc kubenswrapper[4806]: I1125 15:08:14.074680 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3229a62918812322fc0cd9ddf002268de69cb7a348a44af8156b51e5cc9ed436"} err="failed to get container status \"3229a62918812322fc0cd9ddf002268de69cb7a348a44af8156b51e5cc9ed436\": rpc error: code = NotFound desc = could not find container \"3229a62918812322fc0cd9ddf002268de69cb7a348a44af8156b51e5cc9ed436\": container with ID starting with 3229a62918812322fc0cd9ddf002268de69cb7a348a44af8156b51e5cc9ed436 not found: ID does not exist" Nov 25 15:08:14 crc kubenswrapper[4806]: I1125 15:08:14.098543 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0faa6223-1c84-482f-a1e1-ed802ed73e1f" path="/var/lib/kubelet/pods/0faa6223-1c84-482f-a1e1-ed802ed73e1f/volumes" Nov 25 15:08:16 crc kubenswrapper[4806]: I1125 15:08:16.015675 4806 generic.go:334] "Generic (PLEG): container finished" podID="1085d309-de3f-424f-b793-c89655f9fb2d" containerID="3cdf3c1107618fe7e4a549874926820125db7790c20abf87d4488ef07371c7ba" exitCode=0 Nov 25 15:08:16 crc kubenswrapper[4806]: I1125 15:08:16.015775 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edvwgb" event={"ID":"1085d309-de3f-424f-b793-c89655f9fb2d","Type":"ContainerDied","Data":"3cdf3c1107618fe7e4a549874926820125db7790c20abf87d4488ef07371c7ba"} Nov 25 15:08:17 crc kubenswrapper[4806]: I1125 15:08:17.026490 4806 generic.go:334] "Generic (PLEG): container finished" podID="1085d309-de3f-424f-b793-c89655f9fb2d" containerID="c17357c82ef7b8b906c0ee950968984bb42d3f7434971bba83cd45615e5aae16" exitCode=0 Nov 25 15:08:17 crc kubenswrapper[4806]: I1125 15:08:17.026586 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edvwgb" event={"ID":"1085d309-de3f-424f-b793-c89655f9fb2d","Type":"ContainerDied","Data":"c17357c82ef7b8b906c0ee950968984bb42d3f7434971bba83cd45615e5aae16"} Nov 25 15:08:19 crc kubenswrapper[4806]: I1125 15:08:19.322291 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edvwgb" Nov 25 15:08:19 crc kubenswrapper[4806]: I1125 15:08:19.465022 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1085d309-de3f-424f-b793-c89655f9fb2d-util\") pod \"1085d309-de3f-424f-b793-c89655f9fb2d\" (UID: \"1085d309-de3f-424f-b793-c89655f9fb2d\") " Nov 25 15:08:19 crc kubenswrapper[4806]: I1125 15:08:19.465562 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1085d309-de3f-424f-b793-c89655f9fb2d-bundle\") pod \"1085d309-de3f-424f-b793-c89655f9fb2d\" (UID: \"1085d309-de3f-424f-b793-c89655f9fb2d\") " Nov 25 15:08:19 crc kubenswrapper[4806]: I1125 15:08:19.465682 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcfbs\" (UniqueName: \"kubernetes.io/projected/1085d309-de3f-424f-b793-c89655f9fb2d-kube-api-access-wcfbs\") pod \"1085d309-de3f-424f-b793-c89655f9fb2d\" (UID: \"1085d309-de3f-424f-b793-c89655f9fb2d\") " Nov 25 15:08:19 crc kubenswrapper[4806]: I1125 15:08:19.472248 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1085d309-de3f-424f-b793-c89655f9fb2d-bundle" (OuterVolumeSpecName: "bundle") pod "1085d309-de3f-424f-b793-c89655f9fb2d" (UID: "1085d309-de3f-424f-b793-c89655f9fb2d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:08:19 crc kubenswrapper[4806]: I1125 15:08:19.477363 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1085d309-de3f-424f-b793-c89655f9fb2d-kube-api-access-wcfbs" (OuterVolumeSpecName: "kube-api-access-wcfbs") pod "1085d309-de3f-424f-b793-c89655f9fb2d" (UID: "1085d309-de3f-424f-b793-c89655f9fb2d"). InnerVolumeSpecName "kube-api-access-wcfbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:08:19 crc kubenswrapper[4806]: I1125 15:08:19.484842 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1085d309-de3f-424f-b793-c89655f9fb2d-util" (OuterVolumeSpecName: "util") pod "1085d309-de3f-424f-b793-c89655f9fb2d" (UID: "1085d309-de3f-424f-b793-c89655f9fb2d"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:08:19 crc kubenswrapper[4806]: I1125 15:08:19.567186 4806 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1085d309-de3f-424f-b793-c89655f9fb2d-util\") on node \"crc\" DevicePath \"\"" Nov 25 15:08:19 crc kubenswrapper[4806]: I1125 15:08:19.567223 4806 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1085d309-de3f-424f-b793-c89655f9fb2d-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:08:19 crc kubenswrapper[4806]: I1125 15:08:19.567234 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcfbs\" (UniqueName: \"kubernetes.io/projected/1085d309-de3f-424f-b793-c89655f9fb2d-kube-api-access-wcfbs\") on node \"crc\" DevicePath \"\"" Nov 25 15:08:20 crc kubenswrapper[4806]: I1125 15:08:20.050535 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edvwgb" event={"ID":"1085d309-de3f-424f-b793-c89655f9fb2d","Type":"ContainerDied","Data":"4ccef4c7a36ba5fdd19d41cfaa6150f5bdb338b4ae3ffa89977047fb75f71344"} Nov 25 15:08:20 crc kubenswrapper[4806]: I1125 15:08:20.050588 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ccef4c7a36ba5fdd19d41cfaa6150f5bdb338b4ae3ffa89977047fb75f71344" Nov 25 15:08:20 crc kubenswrapper[4806]: I1125 15:08:20.050678 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edvwgb" Nov 25 15:08:22 crc kubenswrapper[4806]: I1125 15:08:22.259855 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-b2jcn"] Nov 25 15:08:22 crc kubenswrapper[4806]: E1125 15:08:22.260664 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1085d309-de3f-424f-b793-c89655f9fb2d" containerName="extract" Nov 25 15:08:22 crc kubenswrapper[4806]: I1125 15:08:22.260679 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="1085d309-de3f-424f-b793-c89655f9fb2d" containerName="extract" Nov 25 15:08:22 crc kubenswrapper[4806]: E1125 15:08:22.260692 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1085d309-de3f-424f-b793-c89655f9fb2d" containerName="util" Nov 25 15:08:22 crc kubenswrapper[4806]: I1125 15:08:22.260698 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="1085d309-de3f-424f-b793-c89655f9fb2d" containerName="util" Nov 25 15:08:22 crc kubenswrapper[4806]: E1125 15:08:22.260717 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0faa6223-1c84-482f-a1e1-ed802ed73e1f" containerName="registry-server" Nov 25 15:08:22 crc kubenswrapper[4806]: I1125 15:08:22.260724 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="0faa6223-1c84-482f-a1e1-ed802ed73e1f" containerName="registry-server" Nov 25 15:08:22 crc kubenswrapper[4806]: E1125 15:08:22.260737 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0faa6223-1c84-482f-a1e1-ed802ed73e1f" containerName="extract-utilities" Nov 25 15:08:22 crc kubenswrapper[4806]: I1125 15:08:22.260743 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="0faa6223-1c84-482f-a1e1-ed802ed73e1f" containerName="extract-utilities" Nov 25 15:08:22 crc kubenswrapper[4806]: E1125 15:08:22.260751 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1085d309-de3f-424f-b793-c89655f9fb2d" 
containerName="pull" Nov 25 15:08:22 crc kubenswrapper[4806]: I1125 15:08:22.260757 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="1085d309-de3f-424f-b793-c89655f9fb2d" containerName="pull" Nov 25 15:08:22 crc kubenswrapper[4806]: E1125 15:08:22.260769 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0faa6223-1c84-482f-a1e1-ed802ed73e1f" containerName="extract-content" Nov 25 15:08:22 crc kubenswrapper[4806]: I1125 15:08:22.260775 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="0faa6223-1c84-482f-a1e1-ed802ed73e1f" containerName="extract-content" Nov 25 15:08:22 crc kubenswrapper[4806]: I1125 15:08:22.260901 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="0faa6223-1c84-482f-a1e1-ed802ed73e1f" containerName="registry-server" Nov 25 15:08:22 crc kubenswrapper[4806]: I1125 15:08:22.260930 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="1085d309-de3f-424f-b793-c89655f9fb2d" containerName="extract" Nov 25 15:08:22 crc kubenswrapper[4806]: I1125 15:08:22.261611 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-b2jcn" Nov 25 15:08:22 crc kubenswrapper[4806]: I1125 15:08:22.264363 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Nov 25 15:08:22 crc kubenswrapper[4806]: I1125 15:08:22.264676 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-662zk" Nov 25 15:08:22 crc kubenswrapper[4806]: I1125 15:08:22.264693 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Nov 25 15:08:22 crc kubenswrapper[4806]: I1125 15:08:22.274426 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-b2jcn"] Nov 25 15:08:23 crc kubenswrapper[4806]: I1125 15:08:23.063230 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57grt\" (UniqueName: \"kubernetes.io/projected/63efa58c-1fdc-46b7-ba63-94effc1543d0-kube-api-access-57grt\") pod \"nmstate-operator-557fdffb88-b2jcn\" (UID: \"63efa58c-1fdc-46b7-ba63-94effc1543d0\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-b2jcn" Nov 25 15:08:23 crc kubenswrapper[4806]: I1125 15:08:23.165583 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57grt\" (UniqueName: \"kubernetes.io/projected/63efa58c-1fdc-46b7-ba63-94effc1543d0-kube-api-access-57grt\") pod \"nmstate-operator-557fdffb88-b2jcn\" (UID: \"63efa58c-1fdc-46b7-ba63-94effc1543d0\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-b2jcn" Nov 25 15:08:23 crc kubenswrapper[4806]: I1125 15:08:23.201940 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57grt\" (UniqueName: \"kubernetes.io/projected/63efa58c-1fdc-46b7-ba63-94effc1543d0-kube-api-access-57grt\") pod \"nmstate-operator-557fdffb88-b2jcn\" (UID: \"63efa58c-1fdc-46b7-ba63-94effc1543d0\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-b2jcn" Nov 25 15:08:23 crc kubenswrapper[4806]: I1125 15:08:23.482110 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-b2jcn" Nov 25 15:08:23 crc kubenswrapper[4806]: I1125 15:08:23.741906 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-b2jcn"] Nov 25 15:08:24 crc kubenswrapper[4806]: I1125 15:08:24.077796 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-b2jcn" event={"ID":"63efa58c-1fdc-46b7-ba63-94effc1543d0","Type":"ContainerStarted","Data":"010458ec4fb4b75d2ff0f1787fceb77c0b6e1e6cfe6e4cdaedc4555232846d96"} Nov 25 15:08:27 crc kubenswrapper[4806]: I1125 15:08:27.108133 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-b2jcn" event={"ID":"63efa58c-1fdc-46b7-ba63-94effc1543d0","Type":"ContainerStarted","Data":"90b8a5cf32c7cf2e140e2be17287b808dffa9fa865e9b808917b63220eb52e3e"} Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.188789 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-557fdffb88-b2jcn" podStartSLOduration=3.967289151 podStartE2EDuration="6.188760746s" podCreationTimestamp="2025-11-25 15:08:22 +0000 UTC" firstStartedPulling="2025-11-25 15:08:23.751756532 +0000 UTC m=+936.403898943" lastFinishedPulling="2025-11-25 15:08:25.973228127 +0000 UTC m=+938.625370538" observedRunningTime="2025-11-25 15:08:27.129168258 +0000 UTC m=+939.781310679" watchObservedRunningTime="2025-11-25 15:08:28.188760746 +0000 UTC m=+940.840903157" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.189726 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-b4tpl"] Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.190969 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-b4tpl" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.193287 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-q4nb7" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.202347 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-n8ld5"] Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.203578 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-n8ld5" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.208805 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-b4tpl"] Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.246651 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/831b49c5-f5fa-4186-8bd0-25b5a3e76a45-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-n8ld5\" (UID: \"831b49c5-f5fa-4186-8bd0-25b5a3e76a45\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-n8ld5" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.246730 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cddbv\" (UniqueName: \"kubernetes.io/projected/58a03ccb-63cd-45fe-bc04-71fcc12c3434-kube-api-access-cddbv\") pod \"nmstate-metrics-5dcf9c57c5-b4tpl\" (UID: \"58a03ccb-63cd-45fe-bc04-71fcc12c3434\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-b4tpl" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.246822 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzm8f\" (UniqueName: \"kubernetes.io/projected/831b49c5-f5fa-4186-8bd0-25b5a3e76a45-kube-api-access-kzm8f\") pod \"nmstate-webhook-6b89b748d8-n8ld5\" (UID: \"831b49c5-f5fa-4186-8bd0-25b5a3e76a45\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-n8ld5" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.249394 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-8n9rx"] Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.250485 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.253603 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-8n9rx" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.268887 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-n8ld5"] Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.348767 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/831b49c5-f5fa-4186-8bd0-25b5a3e76a45-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-n8ld5\" (UID: \"831b49c5-f5fa-4186-8bd0-25b5a3e76a45\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-n8ld5" Nov 25 15:08:28 crc kubenswrapper[4806]: E1125 15:08:28.348945 4806 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Nov 25 15:08:28 crc kubenswrapper[4806]: E1125 15:08:28.349154 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/831b49c5-f5fa-4186-8bd0-25b5a3e76a45-tls-key-pair podName:831b49c5-f5fa-4186-8bd0-25b5a3e76a45 nodeName:}" failed. No retries permitted until 2025-11-25 15:08:28.849098023 +0000 UTC m=+941.501240434 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/831b49c5-f5fa-4186-8bd0-25b5a3e76a45-tls-key-pair") pod "nmstate-webhook-6b89b748d8-n8ld5" (UID: "831b49c5-f5fa-4186-8bd0-25b5a3e76a45") : secret "openshift-nmstate-webhook" not found Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.349754 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cddbv\" (UniqueName: \"kubernetes.io/projected/58a03ccb-63cd-45fe-bc04-71fcc12c3434-kube-api-access-cddbv\") pod \"nmstate-metrics-5dcf9c57c5-b4tpl\" (UID: \"58a03ccb-63cd-45fe-bc04-71fcc12c3434\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-b4tpl" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.349825 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzm8f\" (UniqueName: \"kubernetes.io/projected/831b49c5-f5fa-4186-8bd0-25b5a3e76a45-kube-api-access-kzm8f\") pod \"nmstate-webhook-6b89b748d8-n8ld5\" (UID: \"831b49c5-f5fa-4186-8bd0-25b5a3e76a45\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-n8ld5" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.376394 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzm8f\" (UniqueName: \"kubernetes.io/projected/831b49c5-f5fa-4186-8bd0-25b5a3e76a45-kube-api-access-kzm8f\") pod \"nmstate-webhook-6b89b748d8-n8ld5\" (UID: \"831b49c5-f5fa-4186-8bd0-25b5a3e76a45\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-n8ld5" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.392533 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cddbv\" (UniqueName: \"kubernetes.io/projected/58a03ccb-63cd-45fe-bc04-71fcc12c3434-kube-api-access-cddbv\") pod \"nmstate-metrics-5dcf9c57c5-b4tpl\" (UID: \"58a03ccb-63cd-45fe-bc04-71fcc12c3434\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-b4tpl" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.395867 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-glshj"] Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.396928 4806 util.go:30] "No sandbox for pod can be found. 
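The secret.go / nestedpendingoperations.go pair above is the usual pod-before-secret race: the webhook pod is scheduled before its operator has published the openshift-nmstate-webhook Secret, so MountVolume.SetUp fails and is blocked from retrying until a deadline (durationBeforeRetry 500ms here); once the Secret exists, a later retry succeeds, as the 15:08:28.871262 entry further down shows. A minimal sketch of that retry shape follows; the 500ms start matches the log, while the doubling and the cap are assumptions, and the function names are invented.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

var errSecretNotFound = errors.New(`secret "openshift-nmstate-webhook" not found`)

// mountSecretVolume stands in for MountVolume.SetUp on a secret-backed volume:
// it can only succeed after the Secret object exists in the API.
func mountSecretVolume(secretExists func() bool) error {
	if !secretExists() {
		return errSecretNotFound
	}
	return nil
}

func main() {
	backoff := 500 * time.Millisecond
	// Pretend the controller publishes the Secret ~3s after the pod appears.
	published := time.Now().Add(3 * time.Second)
	for {
		err := mountSecretVolume(func() bool { return time.Now().After(published) })
		if err == nil {
			fmt.Println("MountVolume.SetUp succeeded")
			return
		}
		fmt.Printf("failed: %v; no retries permitted for %v\n", err, backoff)
		time.Sleep(backoff) // wait out durationBeforeRetry
		if backoff < 2*time.Minute {
			backoff *= 2 // assumed exponential growth with a cap
		}
	}
}
```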
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-glshj" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.400740 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-zkvbv" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.400949 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.401056 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.415837 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-glshj"] Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.452149 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d7da5810-18e1-4ece-a8d1-a3a7f9c710a4-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-glshj\" (UID: \"d7da5810-18e1-4ece-a8d1-a3a7f9c710a4\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-glshj" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.452797 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d7da5810-18e1-4ece-a8d1-a3a7f9c710a4-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-glshj\" (UID: \"d7da5810-18e1-4ece-a8d1-a3a7f9c710a4\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-glshj" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.452925 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ef57a24c-25d4-481a-8047-af60faef1f37-nmstate-lock\") pod \"nmstate-handler-8n9rx\" (UID: \"ef57a24c-25d4-481a-8047-af60faef1f37\") " pod="openshift-nmstate/nmstate-handler-8n9rx" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.453039 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q89ls\" (UniqueName: \"kubernetes.io/projected/ef57a24c-25d4-481a-8047-af60faef1f37-kube-api-access-q89ls\") pod \"nmstate-handler-8n9rx\" (UID: \"ef57a24c-25d4-481a-8047-af60faef1f37\") " pod="openshift-nmstate/nmstate-handler-8n9rx" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.453230 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ef57a24c-25d4-481a-8047-af60faef1f37-dbus-socket\") pod \"nmstate-handler-8n9rx\" (UID: \"ef57a24c-25d4-481a-8047-af60faef1f37\") " pod="openshift-nmstate/nmstate-handler-8n9rx" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.453449 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ef57a24c-25d4-481a-8047-af60faef1f37-ovs-socket\") pod \"nmstate-handler-8n9rx\" (UID: \"ef57a24c-25d4-481a-8047-af60faef1f37\") " pod="openshift-nmstate/nmstate-handler-8n9rx" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.453598 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6bv8\" (UniqueName: \"kubernetes.io/projected/d7da5810-18e1-4ece-a8d1-a3a7f9c710a4-kube-api-access-s6bv8\") 
pod \"nmstate-console-plugin-5874bd7bc5-glshj\" (UID: \"d7da5810-18e1-4ece-a8d1-a3a7f9c710a4\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-glshj" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.555679 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d7da5810-18e1-4ece-a8d1-a3a7f9c710a4-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-glshj\" (UID: \"d7da5810-18e1-4ece-a8d1-a3a7f9c710a4\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-glshj" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.555917 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d7da5810-18e1-4ece-a8d1-a3a7f9c710a4-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-glshj\" (UID: \"d7da5810-18e1-4ece-a8d1-a3a7f9c710a4\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-glshj" Nov 25 15:08:28 crc kubenswrapper[4806]: E1125 15:08:28.555846 4806 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Nov 25 15:08:28 crc kubenswrapper[4806]: E1125 15:08:28.556028 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d7da5810-18e1-4ece-a8d1-a3a7f9c710a4-plugin-serving-cert podName:d7da5810-18e1-4ece-a8d1-a3a7f9c710a4 nodeName:}" failed. No retries permitted until 2025-11-25 15:08:29.056001316 +0000 UTC m=+941.708143727 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/d7da5810-18e1-4ece-a8d1-a3a7f9c710a4-plugin-serving-cert") pod "nmstate-console-plugin-5874bd7bc5-glshj" (UID: "d7da5810-18e1-4ece-a8d1-a3a7f9c710a4") : secret "plugin-serving-cert" not found Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.556410 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ef57a24c-25d4-481a-8047-af60faef1f37-nmstate-lock\") pod \"nmstate-handler-8n9rx\" (UID: \"ef57a24c-25d4-481a-8047-af60faef1f37\") " pod="openshift-nmstate/nmstate-handler-8n9rx" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.556531 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q89ls\" (UniqueName: \"kubernetes.io/projected/ef57a24c-25d4-481a-8047-af60faef1f37-kube-api-access-q89ls\") pod \"nmstate-handler-8n9rx\" (UID: \"ef57a24c-25d4-481a-8047-af60faef1f37\") " pod="openshift-nmstate/nmstate-handler-8n9rx" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.556493 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ef57a24c-25d4-481a-8047-af60faef1f37-nmstate-lock\") pod \"nmstate-handler-8n9rx\" (UID: \"ef57a24c-25d4-481a-8047-af60faef1f37\") " pod="openshift-nmstate/nmstate-handler-8n9rx" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.556684 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ef57a24c-25d4-481a-8047-af60faef1f37-dbus-socket\") pod \"nmstate-handler-8n9rx\" (UID: \"ef57a24c-25d4-481a-8047-af60faef1f37\") " pod="openshift-nmstate/nmstate-handler-8n9rx" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.556908 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: 
\"kubernetes.io/host-path/ef57a24c-25d4-481a-8047-af60faef1f37-ovs-socket\") pod \"nmstate-handler-8n9rx\" (UID: \"ef57a24c-25d4-481a-8047-af60faef1f37\") " pod="openshift-nmstate/nmstate-handler-8n9rx" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.557130 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6bv8\" (UniqueName: \"kubernetes.io/projected/d7da5810-18e1-4ece-a8d1-a3a7f9c710a4-kube-api-access-s6bv8\") pod \"nmstate-console-plugin-5874bd7bc5-glshj\" (UID: \"d7da5810-18e1-4ece-a8d1-a3a7f9c710a4\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-glshj" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.557238 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d7da5810-18e1-4ece-a8d1-a3a7f9c710a4-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-glshj\" (UID: \"d7da5810-18e1-4ece-a8d1-a3a7f9c710a4\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-glshj" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.556962 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ef57a24c-25d4-481a-8047-af60faef1f37-ovs-socket\") pod \"nmstate-handler-8n9rx\" (UID: \"ef57a24c-25d4-481a-8047-af60faef1f37\") " pod="openshift-nmstate/nmstate-handler-8n9rx" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.557561 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ef57a24c-25d4-481a-8047-af60faef1f37-dbus-socket\") pod \"nmstate-handler-8n9rx\" (UID: \"ef57a24c-25d4-481a-8047-af60faef1f37\") " pod="openshift-nmstate/nmstate-handler-8n9rx" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.557795 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-b4tpl" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.585415 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q89ls\" (UniqueName: \"kubernetes.io/projected/ef57a24c-25d4-481a-8047-af60faef1f37-kube-api-access-q89ls\") pod \"nmstate-handler-8n9rx\" (UID: \"ef57a24c-25d4-481a-8047-af60faef1f37\") " pod="openshift-nmstate/nmstate-handler-8n9rx" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.589594 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6bv8\" (UniqueName: \"kubernetes.io/projected/d7da5810-18e1-4ece-a8d1-a3a7f9c710a4-kube-api-access-s6bv8\") pod \"nmstate-console-plugin-5874bd7bc5-glshj\" (UID: \"d7da5810-18e1-4ece-a8d1-a3a7f9c710a4\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-glshj" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.640535 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6f57cb87c5-vbhrz"] Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.644434 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6f57cb87c5-vbhrz" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.654788 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6f57cb87c5-vbhrz"] Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.659084 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfr6d\" (UniqueName: \"kubernetes.io/projected/7048f64f-a2fe-427a-bd17-b879c423ce62-kube-api-access-nfr6d\") pod \"console-6f57cb87c5-vbhrz\" (UID: \"7048f64f-a2fe-427a-bd17-b879c423ce62\") " pod="openshift-console/console-6f57cb87c5-vbhrz" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.659188 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7048f64f-a2fe-427a-bd17-b879c423ce62-trusted-ca-bundle\") pod \"console-6f57cb87c5-vbhrz\" (UID: \"7048f64f-a2fe-427a-bd17-b879c423ce62\") " pod="openshift-console/console-6f57cb87c5-vbhrz" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.659236 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7048f64f-a2fe-427a-bd17-b879c423ce62-console-serving-cert\") pod \"console-6f57cb87c5-vbhrz\" (UID: \"7048f64f-a2fe-427a-bd17-b879c423ce62\") " pod="openshift-console/console-6f57cb87c5-vbhrz" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.659266 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7048f64f-a2fe-427a-bd17-b879c423ce62-service-ca\") pod \"console-6f57cb87c5-vbhrz\" (UID: \"7048f64f-a2fe-427a-bd17-b879c423ce62\") " pod="openshift-console/console-6f57cb87c5-vbhrz" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.659339 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7048f64f-a2fe-427a-bd17-b879c423ce62-console-oauth-config\") pod \"console-6f57cb87c5-vbhrz\" (UID: \"7048f64f-a2fe-427a-bd17-b879c423ce62\") " pod="openshift-console/console-6f57cb87c5-vbhrz" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.659374 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7048f64f-a2fe-427a-bd17-b879c423ce62-console-config\") pod \"console-6f57cb87c5-vbhrz\" (UID: \"7048f64f-a2fe-427a-bd17-b879c423ce62\") " pod="openshift-console/console-6f57cb87c5-vbhrz" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.659475 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7048f64f-a2fe-427a-bd17-b879c423ce62-oauth-serving-cert\") pod \"console-6f57cb87c5-vbhrz\" (UID: \"7048f64f-a2fe-427a-bd17-b879c423ce62\") " pod="openshift-console/console-6f57cb87c5-vbhrz" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.764204 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7048f64f-a2fe-427a-bd17-b879c423ce62-console-serving-cert\") pod \"console-6f57cb87c5-vbhrz\" (UID: \"7048f64f-a2fe-427a-bd17-b879c423ce62\") " pod="openshift-console/console-6f57cb87c5-vbhrz" Nov 25 15:08:28 crc 
kubenswrapper[4806]: I1125 15:08:28.764259 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7048f64f-a2fe-427a-bd17-b879c423ce62-service-ca\") pod \"console-6f57cb87c5-vbhrz\" (UID: \"7048f64f-a2fe-427a-bd17-b879c423ce62\") " pod="openshift-console/console-6f57cb87c5-vbhrz" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.764304 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7048f64f-a2fe-427a-bd17-b879c423ce62-console-oauth-config\") pod \"console-6f57cb87c5-vbhrz\" (UID: \"7048f64f-a2fe-427a-bd17-b879c423ce62\") " pod="openshift-console/console-6f57cb87c5-vbhrz" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.764349 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7048f64f-a2fe-427a-bd17-b879c423ce62-console-config\") pod \"console-6f57cb87c5-vbhrz\" (UID: \"7048f64f-a2fe-427a-bd17-b879c423ce62\") " pod="openshift-console/console-6f57cb87c5-vbhrz" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.764433 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7048f64f-a2fe-427a-bd17-b879c423ce62-oauth-serving-cert\") pod \"console-6f57cb87c5-vbhrz\" (UID: \"7048f64f-a2fe-427a-bd17-b879c423ce62\") " pod="openshift-console/console-6f57cb87c5-vbhrz" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.764463 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfr6d\" (UniqueName: \"kubernetes.io/projected/7048f64f-a2fe-427a-bd17-b879c423ce62-kube-api-access-nfr6d\") pod \"console-6f57cb87c5-vbhrz\" (UID: \"7048f64f-a2fe-427a-bd17-b879c423ce62\") " pod="openshift-console/console-6f57cb87c5-vbhrz" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.764506 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7048f64f-a2fe-427a-bd17-b879c423ce62-trusted-ca-bundle\") pod \"console-6f57cb87c5-vbhrz\" (UID: \"7048f64f-a2fe-427a-bd17-b879c423ce62\") " pod="openshift-console/console-6f57cb87c5-vbhrz" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.766132 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7048f64f-a2fe-427a-bd17-b879c423ce62-service-ca\") pod \"console-6f57cb87c5-vbhrz\" (UID: \"7048f64f-a2fe-427a-bd17-b879c423ce62\") " pod="openshift-console/console-6f57cb87c5-vbhrz" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.766387 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7048f64f-a2fe-427a-bd17-b879c423ce62-trusted-ca-bundle\") pod \"console-6f57cb87c5-vbhrz\" (UID: \"7048f64f-a2fe-427a-bd17-b879c423ce62\") " pod="openshift-console/console-6f57cb87c5-vbhrz" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.766918 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7048f64f-a2fe-427a-bd17-b879c423ce62-oauth-serving-cert\") pod \"console-6f57cb87c5-vbhrz\" (UID: \"7048f64f-a2fe-427a-bd17-b879c423ce62\") " pod="openshift-console/console-6f57cb87c5-vbhrz" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.771628 
4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7048f64f-a2fe-427a-bd17-b879c423ce62-console-config\") pod \"console-6f57cb87c5-vbhrz\" (UID: \"7048f64f-a2fe-427a-bd17-b879c423ce62\") " pod="openshift-console/console-6f57cb87c5-vbhrz" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.776594 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7048f64f-a2fe-427a-bd17-b879c423ce62-console-serving-cert\") pod \"console-6f57cb87c5-vbhrz\" (UID: \"7048f64f-a2fe-427a-bd17-b879c423ce62\") " pod="openshift-console/console-6f57cb87c5-vbhrz" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.777444 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7048f64f-a2fe-427a-bd17-b879c423ce62-console-oauth-config\") pod \"console-6f57cb87c5-vbhrz\" (UID: \"7048f64f-a2fe-427a-bd17-b879c423ce62\") " pod="openshift-console/console-6f57cb87c5-vbhrz" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.788684 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfr6d\" (UniqueName: \"kubernetes.io/projected/7048f64f-a2fe-427a-bd17-b879c423ce62-kube-api-access-nfr6d\") pod \"console-6f57cb87c5-vbhrz\" (UID: \"7048f64f-a2fe-427a-bd17-b879c423ce62\") " pod="openshift-console/console-6f57cb87c5-vbhrz" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.866184 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/831b49c5-f5fa-4186-8bd0-25b5a3e76a45-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-n8ld5\" (UID: \"831b49c5-f5fa-4186-8bd0-25b5a3e76a45\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-n8ld5" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.871262 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/831b49c5-f5fa-4186-8bd0-25b5a3e76a45-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-n8ld5\" (UID: \"831b49c5-f5fa-4186-8bd0-25b5a3e76a45\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-n8ld5" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.874739 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-n8ld5" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.883514 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-8n9rx" Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.895759 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-b4tpl"] Nov 25 15:08:28 crc kubenswrapper[4806]: W1125 15:08:28.913901 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58a03ccb_63cd_45fe_bc04_71fcc12c3434.slice/crio-92e7c8094fda3c1c4214221ea2066b6418730d371ce0cee1381fa165a0e96a0b WatchSource:0}: Error finding container 92e7c8094fda3c1c4214221ea2066b6418730d371ce0cee1381fa165a0e96a0b: Status 404 returned error can't find the container with id 92e7c8094fda3c1c4214221ea2066b6418730d371ce0cee1381fa165a0e96a0b Nov 25 15:08:28 crc kubenswrapper[4806]: I1125 15:08:28.972726 4806 util.go:30] "No sandbox for pod can be found. 
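The W "Failed to process watch event ... Status 404" entry above is a benign startup race: the cgroup for a new crio-prefixed container becomes visible before the runtime can answer an inspect for it, so the event is dropped and the container is picked up on a later relist. A small sketch of tolerating that race, with an invented inspect function standing in for the runtime call:

```go
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("can't find the container with id")

// inspectContainer fails with a 404-style error until the runtime has
// registered the container; "known" models the runtime's view.
func inspectContainer(id string, known map[string]bool) error {
	if !known[id] {
		return fmt.Errorf("status 404: %w %s", errNotFound, id)
	}
	return nil
}

func main() {
	known := map[string]bool{} // runtime hasn't registered the container yet
	id := "92e7c8094fda3c1c4214221ea2066b6418730d371ce0cee1381fa165a0e96a0b"
	if err := inspectContainer(id, known); errors.Is(err, errNotFound) {
		// Log and drop the event; a later relist will observe the container.
		fmt.Printf("failed to process watch event: %v (ignored)\n", err)
	}
}
```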
Need to start a new one" pod="openshift-console/console-6f57cb87c5-vbhrz" Nov 25 15:08:29 crc kubenswrapper[4806]: I1125 15:08:29.073255 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d7da5810-18e1-4ece-a8d1-a3a7f9c710a4-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-glshj\" (UID: \"d7da5810-18e1-4ece-a8d1-a3a7f9c710a4\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-glshj" Nov 25 15:08:29 crc kubenswrapper[4806]: I1125 15:08:29.079265 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d7da5810-18e1-4ece-a8d1-a3a7f9c710a4-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-glshj\" (UID: \"d7da5810-18e1-4ece-a8d1-a3a7f9c710a4\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-glshj" Nov 25 15:08:29 crc kubenswrapper[4806]: I1125 15:08:29.132623 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-b4tpl" event={"ID":"58a03ccb-63cd-45fe-bc04-71fcc12c3434","Type":"ContainerStarted","Data":"92e7c8094fda3c1c4214221ea2066b6418730d371ce0cee1381fa165a0e96a0b"} Nov 25 15:08:29 crc kubenswrapper[4806]: I1125 15:08:29.137062 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-8n9rx" event={"ID":"ef57a24c-25d4-481a-8047-af60faef1f37","Type":"ContainerStarted","Data":"4e419e0ae8c0537b96d45b170c39c19fdf9da0fb833ae75379ed17c3a46acfc0"} Nov 25 15:08:29 crc kubenswrapper[4806]: I1125 15:08:29.153199 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-n8ld5"] Nov 25 15:08:29 crc kubenswrapper[4806]: I1125 15:08:29.209474 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6f57cb87c5-vbhrz"] Nov 25 15:08:29 crc kubenswrapper[4806]: W1125 15:08:29.212850 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7048f64f_a2fe_427a_bd17_b879c423ce62.slice/crio-4cb17eb4d2d0efdd2016b0bd99d27770efc95ccd242a4e0d11910ab7b155dd58 WatchSource:0}: Error finding container 4cb17eb4d2d0efdd2016b0bd99d27770efc95ccd242a4e0d11910ab7b155dd58: Status 404 returned error can't find the container with id 4cb17eb4d2d0efdd2016b0bd99d27770efc95ccd242a4e0d11910ab7b155dd58 Nov 25 15:08:29 crc kubenswrapper[4806]: I1125 15:08:29.328210 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-glshj" Nov 25 15:08:29 crc kubenswrapper[4806]: I1125 15:08:29.775356 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-glshj"] Nov 25 15:08:30 crc kubenswrapper[4806]: I1125 15:08:30.146895 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-glshj" event={"ID":"d7da5810-18e1-4ece-a8d1-a3a7f9c710a4","Type":"ContainerStarted","Data":"71d27aa7801144aae7846fb6e0bd3e48c6e6df97d64f735798deffa71e6d9408"} Nov 25 15:08:30 crc kubenswrapper[4806]: I1125 15:08:30.148880 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6f57cb87c5-vbhrz" event={"ID":"7048f64f-a2fe-427a-bd17-b879c423ce62","Type":"ContainerStarted","Data":"c399cfba4a4f0be818b00dd9478b81a4caf84a4fe33d04abd9b11e71e330dd41"} Nov 25 15:08:30 crc kubenswrapper[4806]: I1125 15:08:30.148916 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6f57cb87c5-vbhrz" event={"ID":"7048f64f-a2fe-427a-bd17-b879c423ce62","Type":"ContainerStarted","Data":"4cb17eb4d2d0efdd2016b0bd99d27770efc95ccd242a4e0d11910ab7b155dd58"} Nov 25 15:08:30 crc kubenswrapper[4806]: I1125 15:08:30.150506 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-n8ld5" event={"ID":"831b49c5-f5fa-4186-8bd0-25b5a3e76a45","Type":"ContainerStarted","Data":"1a08905b8ed3fbad156ca7999ca865787603ba91db355bcfd5aa4eeccc21b8e9"} Nov 25 15:08:30 crc kubenswrapper[4806]: I1125 15:08:30.187424 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6f57cb87c5-vbhrz" podStartSLOduration=2.187385291 podStartE2EDuration="2.187385291s" podCreationTimestamp="2025-11-25 15:08:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:08:30.177527845 +0000 UTC m=+942.829670256" watchObservedRunningTime="2025-11-25 15:08:30.187385291 +0000 UTC m=+942.839527702" Nov 25 15:08:32 crc kubenswrapper[4806]: I1125 15:08:32.168786 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-8n9rx" event={"ID":"ef57a24c-25d4-481a-8047-af60faef1f37","Type":"ContainerStarted","Data":"969fb73e4ce2735d5ffd003bad076eda0846f3b1fa673e6dcaeea15ecb855567"} Nov 25 15:08:32 crc kubenswrapper[4806]: I1125 15:08:32.169811 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-8n9rx" Nov 25 15:08:32 crc kubenswrapper[4806]: I1125 15:08:32.172816 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-n8ld5" event={"ID":"831b49c5-f5fa-4186-8bd0-25b5a3e76a45","Type":"ContainerStarted","Data":"781ac0e4ff9ee3932a64ec1220e8696ee897cb6631bdc37ce47cc02521f59040"} Nov 25 15:08:32 crc kubenswrapper[4806]: I1125 15:08:32.172951 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-n8ld5" Nov 25 15:08:32 crc kubenswrapper[4806]: I1125 15:08:32.175624 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-b4tpl" event={"ID":"58a03ccb-63cd-45fe-bc04-71fcc12c3434","Type":"ContainerStarted","Data":"8ac0a5e83bf3204d62911615921fd9ee963af7c1f115b98ad26d16ba131ec35c"} Nov 25 15:08:32 crc kubenswrapper[4806]: I1125 15:08:32.193433 4806 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-8n9rx" podStartSLOduration=1.536063594 podStartE2EDuration="4.193364593s" podCreationTimestamp="2025-11-25 15:08:28 +0000 UTC" firstStartedPulling="2025-11-25 15:08:28.923342669 +0000 UTC m=+941.575485080" lastFinishedPulling="2025-11-25 15:08:31.580643658 +0000 UTC m=+944.232786079" observedRunningTime="2025-11-25 15:08:32.188451815 +0000 UTC m=+944.840594256" watchObservedRunningTime="2025-11-25 15:08:32.193364593 +0000 UTC m=+944.845507014" Nov 25 15:08:32 crc kubenswrapper[4806]: I1125 15:08:32.225252 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-n8ld5" podStartSLOduration=1.819080192 podStartE2EDuration="4.225208336s" podCreationTimestamp="2025-11-25 15:08:28 +0000 UTC" firstStartedPulling="2025-11-25 15:08:29.175302186 +0000 UTC m=+941.827444597" lastFinishedPulling="2025-11-25 15:08:31.58143031 +0000 UTC m=+944.233572741" observedRunningTime="2025-11-25 15:08:32.212265523 +0000 UTC m=+944.864407934" watchObservedRunningTime="2025-11-25 15:08:32.225208336 +0000 UTC m=+944.877350777" Nov 25 15:08:34 crc kubenswrapper[4806]: I1125 15:08:34.203973 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-glshj" event={"ID":"d7da5810-18e1-4ece-a8d1-a3a7f9c710a4","Type":"ContainerStarted","Data":"1e6c89c00c0d88ba903374fda05c0f276f02d3a118b2a32201d7dca23a2764fc"} Nov 25 15:08:34 crc kubenswrapper[4806]: I1125 15:08:34.249497 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-glshj" podStartSLOduration=2.88828904 podStartE2EDuration="6.24946467s" podCreationTimestamp="2025-11-25 15:08:28 +0000 UTC" firstStartedPulling="2025-11-25 15:08:29.785948782 +0000 UTC m=+942.438091183" lastFinishedPulling="2025-11-25 15:08:33.147124402 +0000 UTC m=+945.799266813" observedRunningTime="2025-11-25 15:08:34.244531941 +0000 UTC m=+946.896674362" watchObservedRunningTime="2025-11-25 15:08:34.24946467 +0000 UTC m=+946.901607081" Nov 25 15:08:35 crc kubenswrapper[4806]: I1125 15:08:35.212997 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-b4tpl" event={"ID":"58a03ccb-63cd-45fe-bc04-71fcc12c3434","Type":"ContainerStarted","Data":"cbce20fe7b68e769d705df3cf5b7be5e79c9634a65163e252d1a505274ad470d"} Nov 25 15:08:38 crc kubenswrapper[4806]: I1125 15:08:38.918170 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-8n9rx" Nov 25 15:08:38 crc kubenswrapper[4806]: I1125 15:08:38.943979 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-b4tpl" podStartSLOduration=5.171434325 podStartE2EDuration="10.943953716s" podCreationTimestamp="2025-11-25 15:08:28 +0000 UTC" firstStartedPulling="2025-11-25 15:08:28.916917589 +0000 UTC m=+941.569060000" lastFinishedPulling="2025-11-25 15:08:34.68943698 +0000 UTC m=+947.341579391" observedRunningTime="2025-11-25 15:08:35.235110904 +0000 UTC m=+947.887253365" watchObservedRunningTime="2025-11-25 15:08:38.943953716 +0000 UTC m=+951.596096127" Nov 25 15:08:38 crc kubenswrapper[4806]: I1125 15:08:38.973299 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6f57cb87c5-vbhrz" Nov 25 15:08:38 crc kubenswrapper[4806]: I1125 15:08:38.973396 4806 
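The pod_startup_latency_tracker entries encode a simple relation: podStartSLOduration equals podStartE2EDuration minus the image-pull window (lastFinishedPulling minus firstStartedPulling), so the SLO metric excludes time spent pulling images. This checks out against the nmstate-handler-8n9rx numbers above when the monotonic m=+... readings are used; a small Go verification, with the constants copied from that log entry:

```go
package main

import "fmt"

func main() {
	// Monotonic clock readings (the m=+... values) from the handler-8n9rx entry.
	firstStartedPulling := 941.575485080
	lastFinishedPulling := 944.232786079
	podStartE2E := 4.193364593 // watchObservedRunningTime - podCreationTimestamp

	// SLO duration excludes the image-pull window.
	slo := podStartE2E - (lastFinishedPulling - firstStartedPulling)
	fmt.Printf("podStartSLOduration=%.9f\n", slo) // 1.536063594, matching the log
}
```

The same arithmetic reproduces the operator pod's 3.967289151 from its 6.188760746s E2E duration and its 2.221471595s pull window.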
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6f57cb87c5-vbhrz" Nov 25 15:08:38 crc kubenswrapper[4806]: I1125 15:08:38.979902 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6f57cb87c5-vbhrz" Nov 25 15:08:39 crc kubenswrapper[4806]: I1125 15:08:39.262378 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6f57cb87c5-vbhrz" Nov 25 15:08:39 crc kubenswrapper[4806]: I1125 15:08:39.320894 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-6j244"] Nov 25 15:08:48 crc kubenswrapper[4806]: I1125 15:08:48.883279 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-n8ld5" Nov 25 15:09:04 crc kubenswrapper[4806]: I1125 15:09:04.072709 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zn85z"] Nov 25 15:09:04 crc kubenswrapper[4806]: I1125 15:09:04.074951 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zn85z" Nov 25 15:09:04 crc kubenswrapper[4806]: I1125 15:09:04.077527 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 25 15:09:04 crc kubenswrapper[4806]: I1125 15:09:04.084365 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zn85z"] Nov 25 15:09:04 crc kubenswrapper[4806]: I1125 15:09:04.227897 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfzxx\" (UniqueName: \"kubernetes.io/projected/bac0466c-f1d6-4e60-999e-adbc6c533da8-kube-api-access-kfzxx\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zn85z\" (UID: \"bac0466c-f1d6-4e60-999e-adbc6c533da8\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zn85z" Nov 25 15:09:04 crc kubenswrapper[4806]: I1125 15:09:04.227988 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bac0466c-f1d6-4e60-999e-adbc6c533da8-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zn85z\" (UID: \"bac0466c-f1d6-4e60-999e-adbc6c533da8\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zn85z" Nov 25 15:09:04 crc kubenswrapper[4806]: I1125 15:09:04.228022 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bac0466c-f1d6-4e60-999e-adbc6c533da8-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zn85z\" (UID: \"bac0466c-f1d6-4e60-999e-adbc6c533da8\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zn85z" Nov 25 15:09:04 crc kubenswrapper[4806]: I1125 15:09:04.329396 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bac0466c-f1d6-4e60-999e-adbc6c533da8-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zn85z\" (UID: \"bac0466c-f1d6-4e60-999e-adbc6c533da8\") " 
pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zn85z" Nov 25 15:09:04 crc kubenswrapper[4806]: I1125 15:09:04.329479 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bac0466c-f1d6-4e60-999e-adbc6c533da8-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zn85z\" (UID: \"bac0466c-f1d6-4e60-999e-adbc6c533da8\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zn85z" Nov 25 15:09:04 crc kubenswrapper[4806]: I1125 15:09:04.329616 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfzxx\" (UniqueName: \"kubernetes.io/projected/bac0466c-f1d6-4e60-999e-adbc6c533da8-kube-api-access-kfzxx\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zn85z\" (UID: \"bac0466c-f1d6-4e60-999e-adbc6c533da8\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zn85z" Nov 25 15:09:04 crc kubenswrapper[4806]: I1125 15:09:04.330231 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bac0466c-f1d6-4e60-999e-adbc6c533da8-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zn85z\" (UID: \"bac0466c-f1d6-4e60-999e-adbc6c533da8\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zn85z" Nov 25 15:09:04 crc kubenswrapper[4806]: I1125 15:09:04.330277 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bac0466c-f1d6-4e60-999e-adbc6c533da8-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zn85z\" (UID: \"bac0466c-f1d6-4e60-999e-adbc6c533da8\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zn85z" Nov 25 15:09:04 crc kubenswrapper[4806]: I1125 15:09:04.352747 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfzxx\" (UniqueName: \"kubernetes.io/projected/bac0466c-f1d6-4e60-999e-adbc6c533da8-kube-api-access-kfzxx\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zn85z\" (UID: \"bac0466c-f1d6-4e60-999e-adbc6c533da8\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zn85z" Nov 25 15:09:04 crc kubenswrapper[4806]: I1125 15:09:04.364301 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-6j244" podUID="b8400987-b2f7-44fe-b1b3-8689c2465cd3" containerName="console" containerID="cri-o://6935c418c4e925e08ba3ae221b529a56c5e0c24d3e122dff7dceedb3b8f8876f" gracePeriod=15 Nov 25 15:09:04 crc kubenswrapper[4806]: I1125 15:09:04.401380 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zn85z" Nov 25 15:09:04 crc kubenswrapper[4806]: I1125 15:09:04.766042 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-6j244_b8400987-b2f7-44fe-b1b3-8689c2465cd3/console/0.log" Nov 25 15:09:04 crc kubenswrapper[4806]: I1125 15:09:04.766704 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-6j244" Nov 25 15:09:04 crc kubenswrapper[4806]: I1125 15:09:04.831461 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zn85z"] Nov 25 15:09:04 crc kubenswrapper[4806]: I1125 15:09:04.837126 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b8400987-b2f7-44fe-b1b3-8689c2465cd3-oauth-serving-cert\") pod \"b8400987-b2f7-44fe-b1b3-8689c2465cd3\" (UID: \"b8400987-b2f7-44fe-b1b3-8689c2465cd3\") " Nov 25 15:09:04 crc kubenswrapper[4806]: I1125 15:09:04.837187 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8400987-b2f7-44fe-b1b3-8689c2465cd3-console-serving-cert\") pod \"b8400987-b2f7-44fe-b1b3-8689c2465cd3\" (UID: \"b8400987-b2f7-44fe-b1b3-8689c2465cd3\") " Nov 25 15:09:04 crc kubenswrapper[4806]: I1125 15:09:04.837245 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kczsc\" (UniqueName: \"kubernetes.io/projected/b8400987-b2f7-44fe-b1b3-8689c2465cd3-kube-api-access-kczsc\") pod \"b8400987-b2f7-44fe-b1b3-8689c2465cd3\" (UID: \"b8400987-b2f7-44fe-b1b3-8689c2465cd3\") " Nov 25 15:09:04 crc kubenswrapper[4806]: I1125 15:09:04.837373 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b8400987-b2f7-44fe-b1b3-8689c2465cd3-service-ca\") pod \"b8400987-b2f7-44fe-b1b3-8689c2465cd3\" (UID: \"b8400987-b2f7-44fe-b1b3-8689c2465cd3\") " Nov 25 15:09:04 crc kubenswrapper[4806]: I1125 15:09:04.837431 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b8400987-b2f7-44fe-b1b3-8689c2465cd3-console-config\") pod \"b8400987-b2f7-44fe-b1b3-8689c2465cd3\" (UID: \"b8400987-b2f7-44fe-b1b3-8689c2465cd3\") " Nov 25 15:09:04 crc kubenswrapper[4806]: I1125 15:09:04.837464 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b8400987-b2f7-44fe-b1b3-8689c2465cd3-console-oauth-config\") pod \"b8400987-b2f7-44fe-b1b3-8689c2465cd3\" (UID: \"b8400987-b2f7-44fe-b1b3-8689c2465cd3\") " Nov 25 15:09:04 crc kubenswrapper[4806]: I1125 15:09:04.837504 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8400987-b2f7-44fe-b1b3-8689c2465cd3-trusted-ca-bundle\") pod \"b8400987-b2f7-44fe-b1b3-8689c2465cd3\" (UID: \"b8400987-b2f7-44fe-b1b3-8689c2465cd3\") " Nov 25 15:09:04 crc kubenswrapper[4806]: I1125 15:09:04.839699 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8400987-b2f7-44fe-b1b3-8689c2465cd3-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "b8400987-b2f7-44fe-b1b3-8689c2465cd3" (UID: "b8400987-b2f7-44fe-b1b3-8689c2465cd3"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:09:04 crc kubenswrapper[4806]: I1125 15:09:04.840087 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8400987-b2f7-44fe-b1b3-8689c2465cd3-service-ca" (OuterVolumeSpecName: "service-ca") pod "b8400987-b2f7-44fe-b1b3-8689c2465cd3" (UID: "b8400987-b2f7-44fe-b1b3-8689c2465cd3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:09:04 crc kubenswrapper[4806]: I1125 15:09:04.840427 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8400987-b2f7-44fe-b1b3-8689c2465cd3-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "b8400987-b2f7-44fe-b1b3-8689c2465cd3" (UID: "b8400987-b2f7-44fe-b1b3-8689c2465cd3"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:09:04 crc kubenswrapper[4806]: I1125 15:09:04.840427 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8400987-b2f7-44fe-b1b3-8689c2465cd3-console-config" (OuterVolumeSpecName: "console-config") pod "b8400987-b2f7-44fe-b1b3-8689c2465cd3" (UID: "b8400987-b2f7-44fe-b1b3-8689c2465cd3"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:09:04 crc kubenswrapper[4806]: I1125 15:09:04.844750 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8400987-b2f7-44fe-b1b3-8689c2465cd3-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "b8400987-b2f7-44fe-b1b3-8689c2465cd3" (UID: "b8400987-b2f7-44fe-b1b3-8689c2465cd3"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:09:04 crc kubenswrapper[4806]: I1125 15:09:04.844836 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8400987-b2f7-44fe-b1b3-8689c2465cd3-kube-api-access-kczsc" (OuterVolumeSpecName: "kube-api-access-kczsc") pod "b8400987-b2f7-44fe-b1b3-8689c2465cd3" (UID: "b8400987-b2f7-44fe-b1b3-8689c2465cd3"). InnerVolumeSpecName "kube-api-access-kczsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:09:04 crc kubenswrapper[4806]: I1125 15:09:04.845139 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8400987-b2f7-44fe-b1b3-8689c2465cd3-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "b8400987-b2f7-44fe-b1b3-8689c2465cd3" (UID: "b8400987-b2f7-44fe-b1b3-8689c2465cd3"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:09:04 crc kubenswrapper[4806]: I1125 15:09:04.939271 4806 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b8400987-b2f7-44fe-b1b3-8689c2465cd3-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 15:09:04 crc kubenswrapper[4806]: I1125 15:09:04.939308 4806 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8400987-b2f7-44fe-b1b3-8689c2465cd3-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 15:09:04 crc kubenswrapper[4806]: I1125 15:09:04.939354 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kczsc\" (UniqueName: \"kubernetes.io/projected/b8400987-b2f7-44fe-b1b3-8689c2465cd3-kube-api-access-kczsc\") on node \"crc\" DevicePath \"\"" Nov 25 15:09:04 crc kubenswrapper[4806]: I1125 15:09:04.939367 4806 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b8400987-b2f7-44fe-b1b3-8689c2465cd3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 15:09:04 crc kubenswrapper[4806]: I1125 15:09:04.939378 4806 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b8400987-b2f7-44fe-b1b3-8689c2465cd3-console-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:09:04 crc kubenswrapper[4806]: I1125 15:09:04.939390 4806 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b8400987-b2f7-44fe-b1b3-8689c2465cd3-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:09:04 crc kubenswrapper[4806]: I1125 15:09:04.939400 4806 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8400987-b2f7-44fe-b1b3-8689c2465cd3-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:09:05 crc kubenswrapper[4806]: I1125 15:09:05.477563 4806 generic.go:334] "Generic (PLEG): container finished" podID="bac0466c-f1d6-4e60-999e-adbc6c533da8" containerID="ddd8fc81c3a709e8bd20e1cd1d04089138a81975d10917e1f8bf9312e2d725ae" exitCode=0 Nov 25 15:09:05 crc kubenswrapper[4806]: I1125 15:09:05.477646 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zn85z" event={"ID":"bac0466c-f1d6-4e60-999e-adbc6c533da8","Type":"ContainerDied","Data":"ddd8fc81c3a709e8bd20e1cd1d04089138a81975d10917e1f8bf9312e2d725ae"} Nov 25 15:09:05 crc kubenswrapper[4806]: I1125 15:09:05.478122 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zn85z" event={"ID":"bac0466c-f1d6-4e60-999e-adbc6c533da8","Type":"ContainerStarted","Data":"5f8327ae0fa86f86de00fe21a878c7815d9fdd39c690f3b6cf7efac4b3679693"} Nov 25 15:09:05 crc kubenswrapper[4806]: I1125 15:09:05.484638 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-6j244_b8400987-b2f7-44fe-b1b3-8689c2465cd3/console/0.log" Nov 25 15:09:05 crc kubenswrapper[4806]: I1125 15:09:05.484763 4806 generic.go:334] "Generic (PLEG): container finished" podID="b8400987-b2f7-44fe-b1b3-8689c2465cd3" containerID="6935c418c4e925e08ba3ae221b529a56c5e0c24d3e122dff7dceedb3b8f8876f" exitCode=2 Nov 25 15:09:05 crc kubenswrapper[4806]: I1125 15:09:05.484828 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-6j244" Nov 25 15:09:05 crc kubenswrapper[4806]: I1125 15:09:05.484833 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6j244" event={"ID":"b8400987-b2f7-44fe-b1b3-8689c2465cd3","Type":"ContainerDied","Data":"6935c418c4e925e08ba3ae221b529a56c5e0c24d3e122dff7dceedb3b8f8876f"} Nov 25 15:09:05 crc kubenswrapper[4806]: I1125 15:09:05.485000 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-6j244" event={"ID":"b8400987-b2f7-44fe-b1b3-8689c2465cd3","Type":"ContainerDied","Data":"8f7f775dac024ec071ac39fbaac38bb03ffb868677b32fe3aaa6ba31e01f8405"} Nov 25 15:09:05 crc kubenswrapper[4806]: I1125 15:09:05.485035 4806 scope.go:117] "RemoveContainer" containerID="6935c418c4e925e08ba3ae221b529a56c5e0c24d3e122dff7dceedb3b8f8876f" Nov 25 15:09:05 crc kubenswrapper[4806]: I1125 15:09:05.516756 4806 scope.go:117] "RemoveContainer" containerID="6935c418c4e925e08ba3ae221b529a56c5e0c24d3e122dff7dceedb3b8f8876f" Nov 25 15:09:05 crc kubenswrapper[4806]: E1125 15:09:05.517262 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6935c418c4e925e08ba3ae221b529a56c5e0c24d3e122dff7dceedb3b8f8876f\": container with ID starting with 6935c418c4e925e08ba3ae221b529a56c5e0c24d3e122dff7dceedb3b8f8876f not found: ID does not exist" containerID="6935c418c4e925e08ba3ae221b529a56c5e0c24d3e122dff7dceedb3b8f8876f" Nov 25 15:09:05 crc kubenswrapper[4806]: I1125 15:09:05.517350 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6935c418c4e925e08ba3ae221b529a56c5e0c24d3e122dff7dceedb3b8f8876f"} err="failed to get container status \"6935c418c4e925e08ba3ae221b529a56c5e0c24d3e122dff7dceedb3b8f8876f\": rpc error: code = NotFound desc = could not find container \"6935c418c4e925e08ba3ae221b529a56c5e0c24d3e122dff7dceedb3b8f8876f\": container with ID starting with 6935c418c4e925e08ba3ae221b529a56c5e0c24d3e122dff7dceedb3b8f8876f not found: ID does not exist" Nov 25 15:09:05 crc kubenswrapper[4806]: I1125 15:09:05.530772 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-6j244"] Nov 25 15:09:05 crc kubenswrapper[4806]: I1125 15:09:05.534208 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-6j244"] Nov 25 15:09:06 crc kubenswrapper[4806]: I1125 15:09:06.103099 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8400987-b2f7-44fe-b1b3-8689c2465cd3" path="/var/lib/kubelet/pods/b8400987-b2f7-44fe-b1b3-8689c2465cd3/volumes" Nov 25 15:09:07 crc kubenswrapper[4806]: I1125 15:09:07.534479 4806 generic.go:334] "Generic (PLEG): container finished" podID="bac0466c-f1d6-4e60-999e-adbc6c533da8" containerID="2b508f61f9986f4dd7d429d67d69512dd14316661c43462ce36334603586eec3" exitCode=0 Nov 25 15:09:07 crc kubenswrapper[4806]: I1125 15:09:07.534578 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zn85z" event={"ID":"bac0466c-f1d6-4e60-999e-adbc6c533da8","Type":"ContainerDied","Data":"2b508f61f9986f4dd7d429d67d69512dd14316661c43462ce36334603586eec3"} Nov 25 15:09:08 crc kubenswrapper[4806]: I1125 15:09:08.550655 4806 generic.go:334] "Generic (PLEG): container finished" podID="bac0466c-f1d6-4e60-999e-adbc6c533da8" 
containerID="f910144c30c1f6359a104b965b5950e01ab49c644a0d9af9018cce1a6e4b2083" exitCode=0 Nov 25 15:09:08 crc kubenswrapper[4806]: I1125 15:09:08.550737 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zn85z" event={"ID":"bac0466c-f1d6-4e60-999e-adbc6c533da8","Type":"ContainerDied","Data":"f910144c30c1f6359a104b965b5950e01ab49c644a0d9af9018cce1a6e4b2083"} Nov 25 15:09:09 crc kubenswrapper[4806]: I1125 15:09:09.817937 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zn85z" Nov 25 15:09:09 crc kubenswrapper[4806]: I1125 15:09:09.916090 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bac0466c-f1d6-4e60-999e-adbc6c533da8-util\") pod \"bac0466c-f1d6-4e60-999e-adbc6c533da8\" (UID: \"bac0466c-f1d6-4e60-999e-adbc6c533da8\") " Nov 25 15:09:09 crc kubenswrapper[4806]: I1125 15:09:09.916225 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfzxx\" (UniqueName: \"kubernetes.io/projected/bac0466c-f1d6-4e60-999e-adbc6c533da8-kube-api-access-kfzxx\") pod \"bac0466c-f1d6-4e60-999e-adbc6c533da8\" (UID: \"bac0466c-f1d6-4e60-999e-adbc6c533da8\") " Nov 25 15:09:09 crc kubenswrapper[4806]: I1125 15:09:09.916302 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bac0466c-f1d6-4e60-999e-adbc6c533da8-bundle\") pod \"bac0466c-f1d6-4e60-999e-adbc6c533da8\" (UID: \"bac0466c-f1d6-4e60-999e-adbc6c533da8\") " Nov 25 15:09:09 crc kubenswrapper[4806]: I1125 15:09:09.917524 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bac0466c-f1d6-4e60-999e-adbc6c533da8-bundle" (OuterVolumeSpecName: "bundle") pod "bac0466c-f1d6-4e60-999e-adbc6c533da8" (UID: "bac0466c-f1d6-4e60-999e-adbc6c533da8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:09:09 crc kubenswrapper[4806]: I1125 15:09:09.924990 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bac0466c-f1d6-4e60-999e-adbc6c533da8-kube-api-access-kfzxx" (OuterVolumeSpecName: "kube-api-access-kfzxx") pod "bac0466c-f1d6-4e60-999e-adbc6c533da8" (UID: "bac0466c-f1d6-4e60-999e-adbc6c533da8"). InnerVolumeSpecName "kube-api-access-kfzxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:09:09 crc kubenswrapper[4806]: I1125 15:09:09.938861 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bac0466c-f1d6-4e60-999e-adbc6c533da8-util" (OuterVolumeSpecName: "util") pod "bac0466c-f1d6-4e60-999e-adbc6c533da8" (UID: "bac0466c-f1d6-4e60-999e-adbc6c533da8"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:09:10 crc kubenswrapper[4806]: I1125 15:09:10.018569 4806 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bac0466c-f1d6-4e60-999e-adbc6c533da8-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:09:10 crc kubenswrapper[4806]: I1125 15:09:10.018632 4806 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bac0466c-f1d6-4e60-999e-adbc6c533da8-util\") on node \"crc\" DevicePath \"\"" Nov 25 15:09:10 crc kubenswrapper[4806]: I1125 15:09:10.018647 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfzxx\" (UniqueName: \"kubernetes.io/projected/bac0466c-f1d6-4e60-999e-adbc6c533da8-kube-api-access-kfzxx\") on node \"crc\" DevicePath \"\"" Nov 25 15:09:10 crc kubenswrapper[4806]: I1125 15:09:10.566047 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zn85z" event={"ID":"bac0466c-f1d6-4e60-999e-adbc6c533da8","Type":"ContainerDied","Data":"5f8327ae0fa86f86de00fe21a878c7815d9fdd39c690f3b6cf7efac4b3679693"} Nov 25 15:09:10 crc kubenswrapper[4806]: I1125 15:09:10.566518 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f8327ae0fa86f86de00fe21a878c7815d9fdd39c690f3b6cf7efac4b3679693" Nov 25 15:09:10 crc kubenswrapper[4806]: I1125 15:09:10.566107 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zn85z" Nov 25 15:09:22 crc kubenswrapper[4806]: I1125 15:09:22.013037 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-769f4c6fc-r7k57"] Nov 25 15:09:22 crc kubenswrapper[4806]: E1125 15:09:22.014193 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bac0466c-f1d6-4e60-999e-adbc6c533da8" containerName="extract" Nov 25 15:09:22 crc kubenswrapper[4806]: I1125 15:09:22.014213 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="bac0466c-f1d6-4e60-999e-adbc6c533da8" containerName="extract" Nov 25 15:09:22 crc kubenswrapper[4806]: E1125 15:09:22.014225 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bac0466c-f1d6-4e60-999e-adbc6c533da8" containerName="util" Nov 25 15:09:22 crc kubenswrapper[4806]: I1125 15:09:22.014231 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="bac0466c-f1d6-4e60-999e-adbc6c533da8" containerName="util" Nov 25 15:09:22 crc kubenswrapper[4806]: E1125 15:09:22.014248 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8400987-b2f7-44fe-b1b3-8689c2465cd3" containerName="console" Nov 25 15:09:22 crc kubenswrapper[4806]: I1125 15:09:22.014255 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8400987-b2f7-44fe-b1b3-8689c2465cd3" containerName="console" Nov 25 15:09:22 crc kubenswrapper[4806]: E1125 15:09:22.014273 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bac0466c-f1d6-4e60-999e-adbc6c533da8" containerName="pull" Nov 25 15:09:22 crc kubenswrapper[4806]: I1125 15:09:22.014278 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="bac0466c-f1d6-4e60-999e-adbc6c533da8" containerName="pull" Nov 25 15:09:22 crc kubenswrapper[4806]: I1125 15:09:22.014396 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="bac0466c-f1d6-4e60-999e-adbc6c533da8" containerName="extract" Nov 25 
15:09:22 crc kubenswrapper[4806]: I1125 15:09:22.014412 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8400987-b2f7-44fe-b1b3-8689c2465cd3" containerName="console" Nov 25 15:09:22 crc kubenswrapper[4806]: I1125 15:09:22.014999 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-769f4c6fc-r7k57" Nov 25 15:09:22 crc kubenswrapper[4806]: I1125 15:09:22.017624 4806 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Nov 25 15:09:22 crc kubenswrapper[4806]: I1125 15:09:22.022993 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 25 15:09:22 crc kubenswrapper[4806]: I1125 15:09:22.023049 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 25 15:09:22 crc kubenswrapper[4806]: I1125 15:09:22.023123 4806 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 25 15:09:22 crc kubenswrapper[4806]: I1125 15:09:22.030473 4806 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-2rdn6" Nov 25 15:09:22 crc kubenswrapper[4806]: I1125 15:09:22.047683 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-769f4c6fc-r7k57"] Nov 25 15:09:22 crc kubenswrapper[4806]: I1125 15:09:22.151627 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gkj2\" (UniqueName: \"kubernetes.io/projected/55283d70-ea30-4f51-8583-6d1adc92cfcb-kube-api-access-7gkj2\") pod \"metallb-operator-controller-manager-769f4c6fc-r7k57\" (UID: \"55283d70-ea30-4f51-8583-6d1adc92cfcb\") " pod="metallb-system/metallb-operator-controller-manager-769f4c6fc-r7k57" Nov 25 15:09:22 crc kubenswrapper[4806]: I1125 15:09:22.151761 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/55283d70-ea30-4f51-8583-6d1adc92cfcb-webhook-cert\") pod \"metallb-operator-controller-manager-769f4c6fc-r7k57\" (UID: \"55283d70-ea30-4f51-8583-6d1adc92cfcb\") " pod="metallb-system/metallb-operator-controller-manager-769f4c6fc-r7k57" Nov 25 15:09:22 crc kubenswrapper[4806]: I1125 15:09:22.151838 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/55283d70-ea30-4f51-8583-6d1adc92cfcb-apiservice-cert\") pod \"metallb-operator-controller-manager-769f4c6fc-r7k57\" (UID: \"55283d70-ea30-4f51-8583-6d1adc92cfcb\") " pod="metallb-system/metallb-operator-controller-manager-769f4c6fc-r7k57" Nov 25 15:09:22 crc kubenswrapper[4806]: I1125 15:09:22.253648 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/55283d70-ea30-4f51-8583-6d1adc92cfcb-webhook-cert\") pod \"metallb-operator-controller-manager-769f4c6fc-r7k57\" (UID: \"55283d70-ea30-4f51-8583-6d1adc92cfcb\") " pod="metallb-system/metallb-operator-controller-manager-769f4c6fc-r7k57" Nov 25 15:09:22 crc kubenswrapper[4806]: I1125 15:09:22.253734 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/55283d70-ea30-4f51-8583-6d1adc92cfcb-apiservice-cert\") pod \"metallb-operator-controller-manager-769f4c6fc-r7k57\" (UID: \"55283d70-ea30-4f51-8583-6d1adc92cfcb\") " pod="metallb-system/metallb-operator-controller-manager-769f4c6fc-r7k57" Nov 25 15:09:22 crc kubenswrapper[4806]: I1125 15:09:22.253806 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gkj2\" (UniqueName: \"kubernetes.io/projected/55283d70-ea30-4f51-8583-6d1adc92cfcb-kube-api-access-7gkj2\") pod \"metallb-operator-controller-manager-769f4c6fc-r7k57\" (UID: \"55283d70-ea30-4f51-8583-6d1adc92cfcb\") " pod="metallb-system/metallb-operator-controller-manager-769f4c6fc-r7k57" Nov 25 15:09:22 crc kubenswrapper[4806]: I1125 15:09:22.261903 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/55283d70-ea30-4f51-8583-6d1adc92cfcb-webhook-cert\") pod \"metallb-operator-controller-manager-769f4c6fc-r7k57\" (UID: \"55283d70-ea30-4f51-8583-6d1adc92cfcb\") " pod="metallb-system/metallb-operator-controller-manager-769f4c6fc-r7k57" Nov 25 15:09:22 crc kubenswrapper[4806]: I1125 15:09:22.262170 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/55283d70-ea30-4f51-8583-6d1adc92cfcb-apiservice-cert\") pod \"metallb-operator-controller-manager-769f4c6fc-r7k57\" (UID: \"55283d70-ea30-4f51-8583-6d1adc92cfcb\") " pod="metallb-system/metallb-operator-controller-manager-769f4c6fc-r7k57" Nov 25 15:09:22 crc kubenswrapper[4806]: I1125 15:09:22.273640 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gkj2\" (UniqueName: \"kubernetes.io/projected/55283d70-ea30-4f51-8583-6d1adc92cfcb-kube-api-access-7gkj2\") pod \"metallb-operator-controller-manager-769f4c6fc-r7k57\" (UID: \"55283d70-ea30-4f51-8583-6d1adc92cfcb\") " pod="metallb-system/metallb-operator-controller-manager-769f4c6fc-r7k57" Nov 25 15:09:22 crc kubenswrapper[4806]: I1125 15:09:22.338684 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-769f4c6fc-r7k57" Nov 25 15:09:22 crc kubenswrapper[4806]: I1125 15:09:22.438501 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-58d556674f-758vc"] Nov 25 15:09:22 crc kubenswrapper[4806]: I1125 15:09:22.439641 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-58d556674f-758vc" Nov 25 15:09:22 crc kubenswrapper[4806]: I1125 15:09:22.441742 4806 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 25 15:09:22 crc kubenswrapper[4806]: I1125 15:09:22.441994 4806 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-t7ffj" Nov 25 15:09:22 crc kubenswrapper[4806]: I1125 15:09:22.442303 4806 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 25 15:09:22 crc kubenswrapper[4806]: I1125 15:09:22.460555 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-58d556674f-758vc"] Nov 25 15:09:22 crc kubenswrapper[4806]: I1125 15:09:22.560901 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25hpb\" (UniqueName: \"kubernetes.io/projected/a3fdc89c-e782-48b8-bfaa-f3bd81956672-kube-api-access-25hpb\") pod \"metallb-operator-webhook-server-58d556674f-758vc\" (UID: \"a3fdc89c-e782-48b8-bfaa-f3bd81956672\") " pod="metallb-system/metallb-operator-webhook-server-58d556674f-758vc" Nov 25 15:09:22 crc kubenswrapper[4806]: I1125 15:09:22.561017 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a3fdc89c-e782-48b8-bfaa-f3bd81956672-apiservice-cert\") pod \"metallb-operator-webhook-server-58d556674f-758vc\" (UID: \"a3fdc89c-e782-48b8-bfaa-f3bd81956672\") " pod="metallb-system/metallb-operator-webhook-server-58d556674f-758vc" Nov 25 15:09:22 crc kubenswrapper[4806]: I1125 15:09:22.561079 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a3fdc89c-e782-48b8-bfaa-f3bd81956672-webhook-cert\") pod \"metallb-operator-webhook-server-58d556674f-758vc\" (UID: \"a3fdc89c-e782-48b8-bfaa-f3bd81956672\") " pod="metallb-system/metallb-operator-webhook-server-58d556674f-758vc" Nov 25 15:09:22 crc kubenswrapper[4806]: I1125 15:09:22.664521 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25hpb\" (UniqueName: \"kubernetes.io/projected/a3fdc89c-e782-48b8-bfaa-f3bd81956672-kube-api-access-25hpb\") pod \"metallb-operator-webhook-server-58d556674f-758vc\" (UID: \"a3fdc89c-e782-48b8-bfaa-f3bd81956672\") " pod="metallb-system/metallb-operator-webhook-server-58d556674f-758vc" Nov 25 15:09:22 crc kubenswrapper[4806]: I1125 15:09:22.665092 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a3fdc89c-e782-48b8-bfaa-f3bd81956672-apiservice-cert\") pod \"metallb-operator-webhook-server-58d556674f-758vc\" (UID: \"a3fdc89c-e782-48b8-bfaa-f3bd81956672\") " pod="metallb-system/metallb-operator-webhook-server-58d556674f-758vc" Nov 25 15:09:22 crc kubenswrapper[4806]: I1125 15:09:22.665161 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a3fdc89c-e782-48b8-bfaa-f3bd81956672-webhook-cert\") pod \"metallb-operator-webhook-server-58d556674f-758vc\" (UID: \"a3fdc89c-e782-48b8-bfaa-f3bd81956672\") " pod="metallb-system/metallb-operator-webhook-server-58d556674f-758vc" Nov 25 15:09:22 crc kubenswrapper[4806]: I1125 
15:09:22.673703 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a3fdc89c-e782-48b8-bfaa-f3bd81956672-webhook-cert\") pod \"metallb-operator-webhook-server-58d556674f-758vc\" (UID: \"a3fdc89c-e782-48b8-bfaa-f3bd81956672\") " pod="metallb-system/metallb-operator-webhook-server-58d556674f-758vc" Nov 25 15:09:22 crc kubenswrapper[4806]: I1125 15:09:22.678740 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a3fdc89c-e782-48b8-bfaa-f3bd81956672-apiservice-cert\") pod \"metallb-operator-webhook-server-58d556674f-758vc\" (UID: \"a3fdc89c-e782-48b8-bfaa-f3bd81956672\") " pod="metallb-system/metallb-operator-webhook-server-58d556674f-758vc" Nov 25 15:09:22 crc kubenswrapper[4806]: I1125 15:09:22.694169 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25hpb\" (UniqueName: \"kubernetes.io/projected/a3fdc89c-e782-48b8-bfaa-f3bd81956672-kube-api-access-25hpb\") pod \"metallb-operator-webhook-server-58d556674f-758vc\" (UID: \"a3fdc89c-e782-48b8-bfaa-f3bd81956672\") " pod="metallb-system/metallb-operator-webhook-server-58d556674f-758vc" Nov 25 15:09:22 crc kubenswrapper[4806]: I1125 15:09:22.720729 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-769f4c6fc-r7k57"] Nov 25 15:09:22 crc kubenswrapper[4806]: I1125 15:09:22.761225 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-58d556674f-758vc" Nov 25 15:09:23 crc kubenswrapper[4806]: I1125 15:09:23.069599 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-58d556674f-758vc"] Nov 25 15:09:23 crc kubenswrapper[4806]: W1125 15:09:23.083619 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3fdc89c_e782_48b8_bfaa_f3bd81956672.slice/crio-c1d6bb82d25cf6f55978cfd8b1345cdd8137452c8548e19c1d404a8effa8aa4a WatchSource:0}: Error finding container c1d6bb82d25cf6f55978cfd8b1345cdd8137452c8548e19c1d404a8effa8aa4a: Status 404 returned error can't find the container with id c1d6bb82d25cf6f55978cfd8b1345cdd8137452c8548e19c1d404a8effa8aa4a Nov 25 15:09:23 crc kubenswrapper[4806]: I1125 15:09:23.688943 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-58d556674f-758vc" event={"ID":"a3fdc89c-e782-48b8-bfaa-f3bd81956672","Type":"ContainerStarted","Data":"c1d6bb82d25cf6f55978cfd8b1345cdd8137452c8548e19c1d404a8effa8aa4a"} Nov 25 15:09:23 crc kubenswrapper[4806]: I1125 15:09:23.690341 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-769f4c6fc-r7k57" event={"ID":"55283d70-ea30-4f51-8583-6d1adc92cfcb","Type":"ContainerStarted","Data":"841d68562aac2844427b10056c47064b490ff627210a24c21625c995e3d64e73"} Nov 25 15:09:28 crc kubenswrapper[4806]: I1125 15:09:28.783964 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-58d556674f-758vc" event={"ID":"a3fdc89c-e782-48b8-bfaa-f3bd81956672","Type":"ContainerStarted","Data":"2d38cb7e6bc6d136099fc7dea9b1f437414c371ae8c73e4b4b27c305ce4766bf"} Nov 25 15:09:28 crc kubenswrapper[4806]: I1125 15:09:28.786639 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-webhook-server-58d556674f-758vc" Nov 25 15:09:28 crc kubenswrapper[4806]: I1125 15:09:28.792083 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-769f4c6fc-r7k57" event={"ID":"55283d70-ea30-4f51-8583-6d1adc92cfcb","Type":"ContainerStarted","Data":"a52aaad66e565ea72628b7272378fe64e2521d50f3339a29c2bd6a5cd0460ffe"} Nov 25 15:09:28 crc kubenswrapper[4806]: I1125 15:09:28.792234 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-769f4c6fc-r7k57" Nov 25 15:09:28 crc kubenswrapper[4806]: I1125 15:09:28.813433 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-58d556674f-758vc" podStartSLOduration=1.353490511 podStartE2EDuration="6.813390745s" podCreationTimestamp="2025-11-25 15:09:22 +0000 UTC" firstStartedPulling="2025-11-25 15:09:23.088810888 +0000 UTC m=+995.740953299" lastFinishedPulling="2025-11-25 15:09:28.548711122 +0000 UTC m=+1001.200853533" observedRunningTime="2025-11-25 15:09:28.812167891 +0000 UTC m=+1001.464310312" watchObservedRunningTime="2025-11-25 15:09:28.813390745 +0000 UTC m=+1001.465533166" Nov 25 15:09:28 crc kubenswrapper[4806]: I1125 15:09:28.850938 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-769f4c6fc-r7k57" podStartSLOduration=2.380725796 podStartE2EDuration="7.850910787s" podCreationTimestamp="2025-11-25 15:09:21 +0000 UTC" firstStartedPulling="2025-11-25 15:09:22.774603316 +0000 UTC m=+995.426745727" lastFinishedPulling="2025-11-25 15:09:28.244788297 +0000 UTC m=+1000.896930718" observedRunningTime="2025-11-25 15:09:28.849895409 +0000 UTC m=+1001.502037820" watchObservedRunningTime="2025-11-25 15:09:28.850910787 +0000 UTC m=+1001.503053198" Nov 25 15:09:42 crc kubenswrapper[4806]: I1125 15:09:42.766946 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-58d556674f-758vc" Nov 25 15:09:48 crc kubenswrapper[4806]: I1125 15:09:48.934758 4806 patch_prober.go:28] interesting pod/machine-config-daemon-kclf8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:09:48 crc kubenswrapper[4806]: I1125 15:09:48.935680 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:10:02 crc kubenswrapper[4806]: I1125 15:10:02.343002 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-769f4c6fc-r7k57" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.085755 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-9tr2r"] Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.090478 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-9tr2r" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.097298 4806 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.097714 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.098164 4806 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-k6wwk" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.099417 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-j9plw"] Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.100994 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-j9plw" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.107698 4806 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.113540 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-j9plw"] Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.142212 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/eb6c6179-82f5-4796-a12a-4806c8df1edd-metrics\") pod \"frr-k8s-9tr2r\" (UID: \"eb6c6179-82f5-4796-a12a-4806c8df1edd\") " pod="metallb-system/frr-k8s-9tr2r" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.142325 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/eb6c6179-82f5-4796-a12a-4806c8df1edd-frr-conf\") pod \"frr-k8s-9tr2r\" (UID: \"eb6c6179-82f5-4796-a12a-4806c8df1edd\") " pod="metallb-system/frr-k8s-9tr2r" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.142362 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/eb6c6179-82f5-4796-a12a-4806c8df1edd-frr-sockets\") pod \"frr-k8s-9tr2r\" (UID: \"eb6c6179-82f5-4796-a12a-4806c8df1edd\") " pod="metallb-system/frr-k8s-9tr2r" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.142484 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb6c6179-82f5-4796-a12a-4806c8df1edd-metrics-certs\") pod \"frr-k8s-9tr2r\" (UID: \"eb6c6179-82f5-4796-a12a-4806c8df1edd\") " pod="metallb-system/frr-k8s-9tr2r" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.142516 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9zss\" (UniqueName: \"kubernetes.io/projected/eb6c6179-82f5-4796-a12a-4806c8df1edd-kube-api-access-f9zss\") pod \"frr-k8s-9tr2r\" (UID: \"eb6c6179-82f5-4796-a12a-4806c8df1edd\") " pod="metallb-system/frr-k8s-9tr2r" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.142546 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/eb6c6179-82f5-4796-a12a-4806c8df1edd-reloader\") pod \"frr-k8s-9tr2r\" (UID: \"eb6c6179-82f5-4796-a12a-4806c8df1edd\") " 
pod="metallb-system/frr-k8s-9tr2r" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.142572 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/eb6c6179-82f5-4796-a12a-4806c8df1edd-frr-startup\") pod \"frr-k8s-9tr2r\" (UID: \"eb6c6179-82f5-4796-a12a-4806c8df1edd\") " pod="metallb-system/frr-k8s-9tr2r" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.235453 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-2pzk8"] Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.236564 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-2pzk8" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.241300 4806 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.241403 4806 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-ztcj7" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.244077 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/eb6c6179-82f5-4796-a12a-4806c8df1edd-frr-conf\") pod \"frr-k8s-9tr2r\" (UID: \"eb6c6179-82f5-4796-a12a-4806c8df1edd\") " pod="metallb-system/frr-k8s-9tr2r" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.244140 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64x9m\" (UniqueName: \"kubernetes.io/projected/ccb2a08d-4f13-4e28-a6e2-1af712c00eaf-kube-api-access-64x9m\") pod \"frr-k8s-webhook-server-6998585d5-j9plw\" (UID: \"ccb2a08d-4f13-4e28-a6e2-1af712c00eaf\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-j9plw" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.244196 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/eb6c6179-82f5-4796-a12a-4806c8df1edd-frr-sockets\") pod \"frr-k8s-9tr2r\" (UID: \"eb6c6179-82f5-4796-a12a-4806c8df1edd\") " pod="metallb-system/frr-k8s-9tr2r" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.244240 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb6c6179-82f5-4796-a12a-4806c8df1edd-metrics-certs\") pod \"frr-k8s-9tr2r\" (UID: \"eb6c6179-82f5-4796-a12a-4806c8df1edd\") " pod="metallb-system/frr-k8s-9tr2r" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.244272 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9zss\" (UniqueName: \"kubernetes.io/projected/eb6c6179-82f5-4796-a12a-4806c8df1edd-kube-api-access-f9zss\") pod \"frr-k8s-9tr2r\" (UID: \"eb6c6179-82f5-4796-a12a-4806c8df1edd\") " pod="metallb-system/frr-k8s-9tr2r" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.244300 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/eb6c6179-82f5-4796-a12a-4806c8df1edd-reloader\") pod \"frr-k8s-9tr2r\" (UID: \"eb6c6179-82f5-4796-a12a-4806c8df1edd\") " pod="metallb-system/frr-k8s-9tr2r" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.244348 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: 
\"kubernetes.io/configmap/eb6c6179-82f5-4796-a12a-4806c8df1edd-frr-startup\") pod \"frr-k8s-9tr2r\" (UID: \"eb6c6179-82f5-4796-a12a-4806c8df1edd\") " pod="metallb-system/frr-k8s-9tr2r" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.244403 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ccb2a08d-4f13-4e28-a6e2-1af712c00eaf-cert\") pod \"frr-k8s-webhook-server-6998585d5-j9plw\" (UID: \"ccb2a08d-4f13-4e28-a6e2-1af712c00eaf\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-j9plw" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.244456 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/eb6c6179-82f5-4796-a12a-4806c8df1edd-metrics\") pod \"frr-k8s-9tr2r\" (UID: \"eb6c6179-82f5-4796-a12a-4806c8df1edd\") " pod="metallb-system/frr-k8s-9tr2r" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.244599 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/eb6c6179-82f5-4796-a12a-4806c8df1edd-frr-conf\") pod \"frr-k8s-9tr2r\" (UID: \"eb6c6179-82f5-4796-a12a-4806c8df1edd\") " pod="metallb-system/frr-k8s-9tr2r" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.244933 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/eb6c6179-82f5-4796-a12a-4806c8df1edd-metrics\") pod \"frr-k8s-9tr2r\" (UID: \"eb6c6179-82f5-4796-a12a-4806c8df1edd\") " pod="metallb-system/frr-k8s-9tr2r" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.244973 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/eb6c6179-82f5-4796-a12a-4806c8df1edd-frr-sockets\") pod \"frr-k8s-9tr2r\" (UID: \"eb6c6179-82f5-4796-a12a-4806c8df1edd\") " pod="metallb-system/frr-k8s-9tr2r" Nov 25 15:10:03 crc kubenswrapper[4806]: E1125 15:10:03.245113 4806 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Nov 25 15:10:03 crc kubenswrapper[4806]: E1125 15:10:03.245183 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb6c6179-82f5-4796-a12a-4806c8df1edd-metrics-certs podName:eb6c6179-82f5-4796-a12a-4806c8df1edd nodeName:}" failed. No retries permitted until 2025-11-25 15:10:03.745164613 +0000 UTC m=+1036.397307024 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/eb6c6179-82f5-4796-a12a-4806c8df1edd-metrics-certs") pod "frr-k8s-9tr2r" (UID: "eb6c6179-82f5-4796-a12a-4806c8df1edd") : secret "frr-k8s-certs-secret" not found Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.245212 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/eb6c6179-82f5-4796-a12a-4806c8df1edd-reloader\") pod \"frr-k8s-9tr2r\" (UID: \"eb6c6179-82f5-4796-a12a-4806c8df1edd\") " pod="metallb-system/frr-k8s-9tr2r" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.246067 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/eb6c6179-82f5-4796-a12a-4806c8df1edd-frr-startup\") pod \"frr-k8s-9tr2r\" (UID: \"eb6c6179-82f5-4796-a12a-4806c8df1edd\") " pod="metallb-system/frr-k8s-9tr2r" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.246365 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.251689 4806 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.263690 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6c7b4b5f48-fv59r"] Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.264864 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-fv59r" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.268308 4806 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.284405 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9zss\" (UniqueName: \"kubernetes.io/projected/eb6c6179-82f5-4796-a12a-4806c8df1edd-kube-api-access-f9zss\") pod \"frr-k8s-9tr2r\" (UID: \"eb6c6179-82f5-4796-a12a-4806c8df1edd\") " pod="metallb-system/frr-k8s-9tr2r" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.294415 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-fv59r"] Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.345766 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ccb2a08d-4f13-4e28-a6e2-1af712c00eaf-cert\") pod \"frr-k8s-webhook-server-6998585d5-j9plw\" (UID: \"ccb2a08d-4f13-4e28-a6e2-1af712c00eaf\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-j9plw" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.345864 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf52d\" (UniqueName: \"kubernetes.io/projected/809591af-3272-4a5d-bd90-d6cba5c6e3a0-kube-api-access-zf52d\") pod \"speaker-2pzk8\" (UID: \"809591af-3272-4a5d-bd90-d6cba5c6e3a0\") " pod="metallb-system/speaker-2pzk8" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.345932 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64x9m\" (UniqueName: \"kubernetes.io/projected/ccb2a08d-4f13-4e28-a6e2-1af712c00eaf-kube-api-access-64x9m\") pod \"frr-k8s-webhook-server-6998585d5-j9plw\" (UID: \"ccb2a08d-4f13-4e28-a6e2-1af712c00eaf\") " 
pod="metallb-system/frr-k8s-webhook-server-6998585d5-j9plw" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.345995 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/809591af-3272-4a5d-bd90-d6cba5c6e3a0-metrics-certs\") pod \"speaker-2pzk8\" (UID: \"809591af-3272-4a5d-bd90-d6cba5c6e3a0\") " pod="metallb-system/speaker-2pzk8" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.346035 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/809591af-3272-4a5d-bd90-d6cba5c6e3a0-memberlist\") pod \"speaker-2pzk8\" (UID: \"809591af-3272-4a5d-bd90-d6cba5c6e3a0\") " pod="metallb-system/speaker-2pzk8" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.346073 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/809591af-3272-4a5d-bd90-d6cba5c6e3a0-metallb-excludel2\") pod \"speaker-2pzk8\" (UID: \"809591af-3272-4a5d-bd90-d6cba5c6e3a0\") " pod="metallb-system/speaker-2pzk8" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.355818 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ccb2a08d-4f13-4e28-a6e2-1af712c00eaf-cert\") pod \"frr-k8s-webhook-server-6998585d5-j9plw\" (UID: \"ccb2a08d-4f13-4e28-a6e2-1af712c00eaf\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-j9plw" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.375988 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64x9m\" (UniqueName: \"kubernetes.io/projected/ccb2a08d-4f13-4e28-a6e2-1af712c00eaf-kube-api-access-64x9m\") pod \"frr-k8s-webhook-server-6998585d5-j9plw\" (UID: \"ccb2a08d-4f13-4e28-a6e2-1af712c00eaf\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-j9plw" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.432264 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-j9plw" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.447735 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/809591af-3272-4a5d-bd90-d6cba5c6e3a0-metrics-certs\") pod \"speaker-2pzk8\" (UID: \"809591af-3272-4a5d-bd90-d6cba5c6e3a0\") " pod="metallb-system/speaker-2pzk8" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.447816 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frggf\" (UniqueName: \"kubernetes.io/projected/66652e87-4308-4216-880d-bfba98261288-kube-api-access-frggf\") pod \"controller-6c7b4b5f48-fv59r\" (UID: \"66652e87-4308-4216-880d-bfba98261288\") " pod="metallb-system/controller-6c7b4b5f48-fv59r" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.447841 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/809591af-3272-4a5d-bd90-d6cba5c6e3a0-memberlist\") pod \"speaker-2pzk8\" (UID: \"809591af-3272-4a5d-bd90-d6cba5c6e3a0\") " pod="metallb-system/speaker-2pzk8" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.447872 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66652e87-4308-4216-880d-bfba98261288-cert\") pod \"controller-6c7b4b5f48-fv59r\" (UID: \"66652e87-4308-4216-880d-bfba98261288\") " pod="metallb-system/controller-6c7b4b5f48-fv59r" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.447897 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/66652e87-4308-4216-880d-bfba98261288-metrics-certs\") pod \"controller-6c7b4b5f48-fv59r\" (UID: \"66652e87-4308-4216-880d-bfba98261288\") " pod="metallb-system/controller-6c7b4b5f48-fv59r" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.447917 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/809591af-3272-4a5d-bd90-d6cba5c6e3a0-metallb-excludel2\") pod \"speaker-2pzk8\" (UID: \"809591af-3272-4a5d-bd90-d6cba5c6e3a0\") " pod="metallb-system/speaker-2pzk8" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.447969 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf52d\" (UniqueName: \"kubernetes.io/projected/809591af-3272-4a5d-bd90-d6cba5c6e3a0-kube-api-access-zf52d\") pod \"speaker-2pzk8\" (UID: \"809591af-3272-4a5d-bd90-d6cba5c6e3a0\") " pod="metallb-system/speaker-2pzk8" Nov 25 15:10:03 crc kubenswrapper[4806]: E1125 15:10:03.449113 4806 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 25 15:10:03 crc kubenswrapper[4806]: E1125 15:10:03.449171 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/809591af-3272-4a5d-bd90-d6cba5c6e3a0-memberlist podName:809591af-3272-4a5d-bd90-d6cba5c6e3a0 nodeName:}" failed. No retries permitted until 2025-11-25 15:10:03.949151414 +0000 UTC m=+1036.601293815 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/809591af-3272-4a5d-bd90-d6cba5c6e3a0-memberlist") pod "speaker-2pzk8" (UID: "809591af-3272-4a5d-bd90-d6cba5c6e3a0") : secret "metallb-memberlist" not found Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.450014 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/809591af-3272-4a5d-bd90-d6cba5c6e3a0-metallb-excludel2\") pod \"speaker-2pzk8\" (UID: \"809591af-3272-4a5d-bd90-d6cba5c6e3a0\") " pod="metallb-system/speaker-2pzk8" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.453913 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/809591af-3272-4a5d-bd90-d6cba5c6e3a0-metrics-certs\") pod \"speaker-2pzk8\" (UID: \"809591af-3272-4a5d-bd90-d6cba5c6e3a0\") " pod="metallb-system/speaker-2pzk8" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.483338 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf52d\" (UniqueName: \"kubernetes.io/projected/809591af-3272-4a5d-bd90-d6cba5c6e3a0-kube-api-access-zf52d\") pod \"speaker-2pzk8\" (UID: \"809591af-3272-4a5d-bd90-d6cba5c6e3a0\") " pod="metallb-system/speaker-2pzk8" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.549302 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frggf\" (UniqueName: \"kubernetes.io/projected/66652e87-4308-4216-880d-bfba98261288-kube-api-access-frggf\") pod \"controller-6c7b4b5f48-fv59r\" (UID: \"66652e87-4308-4216-880d-bfba98261288\") " pod="metallb-system/controller-6c7b4b5f48-fv59r" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.549419 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66652e87-4308-4216-880d-bfba98261288-cert\") pod \"controller-6c7b4b5f48-fv59r\" (UID: \"66652e87-4308-4216-880d-bfba98261288\") " pod="metallb-system/controller-6c7b4b5f48-fv59r" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.549465 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/66652e87-4308-4216-880d-bfba98261288-metrics-certs\") pod \"controller-6c7b4b5f48-fv59r\" (UID: \"66652e87-4308-4216-880d-bfba98261288\") " pod="metallb-system/controller-6c7b4b5f48-fv59r" Nov 25 15:10:03 crc kubenswrapper[4806]: E1125 15:10:03.549632 4806 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Nov 25 15:10:03 crc kubenswrapper[4806]: E1125 15:10:03.549709 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/66652e87-4308-4216-880d-bfba98261288-metrics-certs podName:66652e87-4308-4216-880d-bfba98261288 nodeName:}" failed. No retries permitted until 2025-11-25 15:10:04.049689014 +0000 UTC m=+1036.701831425 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/66652e87-4308-4216-880d-bfba98261288-metrics-certs") pod "controller-6c7b4b5f48-fv59r" (UID: "66652e87-4308-4216-880d-bfba98261288") : secret "controller-certs-secret" not found Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.552111 4806 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.567239 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/66652e87-4308-4216-880d-bfba98261288-cert\") pod \"controller-6c7b4b5f48-fv59r\" (UID: \"66652e87-4308-4216-880d-bfba98261288\") " pod="metallb-system/controller-6c7b4b5f48-fv59r" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.596094 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frggf\" (UniqueName: \"kubernetes.io/projected/66652e87-4308-4216-880d-bfba98261288-kube-api-access-frggf\") pod \"controller-6c7b4b5f48-fv59r\" (UID: \"66652e87-4308-4216-880d-bfba98261288\") " pod="metallb-system/controller-6c7b4b5f48-fv59r" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.757429 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb6c6179-82f5-4796-a12a-4806c8df1edd-metrics-certs\") pod \"frr-k8s-9tr2r\" (UID: \"eb6c6179-82f5-4796-a12a-4806c8df1edd\") " pod="metallb-system/frr-k8s-9tr2r" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.760844 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/eb6c6179-82f5-4796-a12a-4806c8df1edd-metrics-certs\") pod \"frr-k8s-9tr2r\" (UID: \"eb6c6179-82f5-4796-a12a-4806c8df1edd\") " pod="metallb-system/frr-k8s-9tr2r" Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.918777 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-j9plw"] Nov 25 15:10:03 crc kubenswrapper[4806]: I1125 15:10:03.960153 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/809591af-3272-4a5d-bd90-d6cba5c6e3a0-memberlist\") pod \"speaker-2pzk8\" (UID: \"809591af-3272-4a5d-bd90-d6cba5c6e3a0\") " pod="metallb-system/speaker-2pzk8" Nov 25 15:10:03 crc kubenswrapper[4806]: E1125 15:10:03.960432 4806 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 25 15:10:03 crc kubenswrapper[4806]: E1125 15:10:03.960576 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/809591af-3272-4a5d-bd90-d6cba5c6e3a0-memberlist podName:809591af-3272-4a5d-bd90-d6cba5c6e3a0 nodeName:}" failed. No retries permitted until 2025-11-25 15:10:04.960547517 +0000 UTC m=+1037.612689928 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/809591af-3272-4a5d-bd90-d6cba5c6e3a0-memberlist") pod "speaker-2pzk8" (UID: "809591af-3272-4a5d-bd90-d6cba5c6e3a0") : secret "metallb-memberlist" not found Nov 25 15:10:04 crc kubenswrapper[4806]: I1125 15:10:04.020640 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-9tr2r" Nov 25 15:10:04 crc kubenswrapper[4806]: I1125 15:10:04.062235 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/66652e87-4308-4216-880d-bfba98261288-metrics-certs\") pod \"controller-6c7b4b5f48-fv59r\" (UID: \"66652e87-4308-4216-880d-bfba98261288\") " pod="metallb-system/controller-6c7b4b5f48-fv59r" Nov 25 15:10:04 crc kubenswrapper[4806]: I1125 15:10:04.066965 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/66652e87-4308-4216-880d-bfba98261288-metrics-certs\") pod \"controller-6c7b4b5f48-fv59r\" (UID: \"66652e87-4308-4216-880d-bfba98261288\") " pod="metallb-system/controller-6c7b4b5f48-fv59r" Nov 25 15:10:04 crc kubenswrapper[4806]: I1125 15:10:04.080008 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-j9plw" event={"ID":"ccb2a08d-4f13-4e28-a6e2-1af712c00eaf","Type":"ContainerStarted","Data":"61b988fa583f291850119350e45e1b549d90bdd805fc7159f5b029b2f831ab84"} Nov 25 15:10:04 crc kubenswrapper[4806]: I1125 15:10:04.234803 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-fv59r" Nov 25 15:10:04 crc kubenswrapper[4806]: I1125 15:10:04.642334 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-fv59r"] Nov 25 15:10:04 crc kubenswrapper[4806]: W1125 15:10:04.658580 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66652e87_4308_4216_880d_bfba98261288.slice/crio-a2af5fed6e1c7a0057b4b2828a7108ae399bd1a0e0eb1eaff863b3f5cc8cc3a6 WatchSource:0}: Error finding container a2af5fed6e1c7a0057b4b2828a7108ae399bd1a0e0eb1eaff863b3f5cc8cc3a6: Status 404 returned error can't find the container with id a2af5fed6e1c7a0057b4b2828a7108ae399bd1a0e0eb1eaff863b3f5cc8cc3a6 Nov 25 15:10:04 crc kubenswrapper[4806]: I1125 15:10:04.978872 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/809591af-3272-4a5d-bd90-d6cba5c6e3a0-memberlist\") pod \"speaker-2pzk8\" (UID: \"809591af-3272-4a5d-bd90-d6cba5c6e3a0\") " pod="metallb-system/speaker-2pzk8" Nov 25 15:10:04 crc kubenswrapper[4806]: I1125 15:10:04.987577 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/809591af-3272-4a5d-bd90-d6cba5c6e3a0-memberlist\") pod \"speaker-2pzk8\" (UID: \"809591af-3272-4a5d-bd90-d6cba5c6e3a0\") " pod="metallb-system/speaker-2pzk8" Nov 25 15:10:05 crc kubenswrapper[4806]: I1125 15:10:05.056417 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-2pzk8"
Nov 25 15:10:05 crc kubenswrapper[4806]: I1125 15:10:05.090197 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9tr2r" event={"ID":"eb6c6179-82f5-4796-a12a-4806c8df1edd","Type":"ContainerStarted","Data":"f1f321dbb0db5b8fdccbf2594e0a687205e788b07dc3a9ece31bce32cb749800"}
Nov 25 15:10:05 crc kubenswrapper[4806]: I1125 15:10:05.093435 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-fv59r" event={"ID":"66652e87-4308-4216-880d-bfba98261288","Type":"ContainerStarted","Data":"97984abf78c5b33e9bf876208bd135d3650786749b484bda5ed17fd41ba29cf4"}
Nov 25 15:10:05 crc kubenswrapper[4806]: I1125 15:10:05.093502 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-fv59r" event={"ID":"66652e87-4308-4216-880d-bfba98261288","Type":"ContainerStarted","Data":"44b32a65fe6ba68b4e88a92a6d27aa4cc9b11588fd8e2112d0728eeeb00ec4ff"}
Nov 25 15:10:05 crc kubenswrapper[4806]: I1125 15:10:05.093513 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-fv59r" event={"ID":"66652e87-4308-4216-880d-bfba98261288","Type":"ContainerStarted","Data":"a2af5fed6e1c7a0057b4b2828a7108ae399bd1a0e0eb1eaff863b3f5cc8cc3a6"}
Nov 25 15:10:05 crc kubenswrapper[4806]: I1125 15:10:05.093692 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6c7b4b5f48-fv59r"
Nov 25 15:10:06 crc kubenswrapper[4806]: I1125 15:10:06.106735 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2pzk8" event={"ID":"809591af-3272-4a5d-bd90-d6cba5c6e3a0","Type":"ContainerStarted","Data":"26d9a81b4baa0fb321b387a8e5b62c7db6584178650c3004f7dfaab5b070e000"}
Nov 25 15:10:06 crc kubenswrapper[4806]: I1125 15:10:06.107286 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2pzk8" event={"ID":"809591af-3272-4a5d-bd90-d6cba5c6e3a0","Type":"ContainerStarted","Data":"7cfd8cbe667c92b37a60a421124e721f94e80229bd243230089f23c274e934bb"}
Nov 25 15:10:06 crc kubenswrapper[4806]: I1125 15:10:06.107307 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2pzk8" event={"ID":"809591af-3272-4a5d-bd90-d6cba5c6e3a0","Type":"ContainerStarted","Data":"f09fc3e08041b8357be0a49f96adb6f35dd3589074fc57882072f937f0e49f91"}
Nov 25 15:10:06 crc kubenswrapper[4806]: I1125 15:10:06.107619 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-2pzk8"
Nov 25 15:10:06 crc kubenswrapper[4806]: I1125 15:10:06.138926 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-2pzk8" podStartSLOduration=3.138901125 podStartE2EDuration="3.138901125s" podCreationTimestamp="2025-11-25 15:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:10:06.136454396 +0000 UTC m=+1038.788596807" watchObservedRunningTime="2025-11-25 15:10:06.138901125 +0000 UTC m=+1038.791043536"
Nov 25 15:10:06 crc kubenswrapper[4806]: I1125 15:10:06.143064 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6c7b4b5f48-fv59r" podStartSLOduration=3.143047711 podStartE2EDuration="3.143047711s" podCreationTimestamp="2025-11-25 15:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:10:05.15683827 +0000 UTC m=+1037.808980691" watchObservedRunningTime="2025-11-25 15:10:06.143047711 +0000 UTC m=+1038.795190122"
Nov 25 15:10:12 crc kubenswrapper[4806]: I1125 15:10:12.157346 4806 generic.go:334] "Generic (PLEG): container finished" podID="eb6c6179-82f5-4796-a12a-4806c8df1edd" containerID="a8f4bb1a37de378ba5efb0cde9811a6062f75ab61c8e81b420c19de4cfa8914c" exitCode=0
Nov 25 15:10:12 crc kubenswrapper[4806]: I1125 15:10:12.157489 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9tr2r" event={"ID":"eb6c6179-82f5-4796-a12a-4806c8df1edd","Type":"ContainerDied","Data":"a8f4bb1a37de378ba5efb0cde9811a6062f75ab61c8e81b420c19de4cfa8914c"}
Nov 25 15:10:12 crc kubenswrapper[4806]: I1125 15:10:12.161140 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-j9plw" event={"ID":"ccb2a08d-4f13-4e28-a6e2-1af712c00eaf","Type":"ContainerStarted","Data":"c0d87bad9841808e1e271e50fa80c37bff59c5e82b023bdaa435d23ca581ca9e"}
Nov 25 15:10:12 crc kubenswrapper[4806]: I1125 15:10:12.161769 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-6998585d5-j9plw"
Nov 25 15:10:12 crc kubenswrapper[4806]: I1125 15:10:12.212795 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-6998585d5-j9plw" podStartSLOduration=1.694741767 podStartE2EDuration="9.212758158s" podCreationTimestamp="2025-11-25 15:10:03 +0000 UTC" firstStartedPulling="2025-11-25 15:10:03.930751331 +0000 UTC m=+1036.582893742" lastFinishedPulling="2025-11-25 15:10:11.448767722 +0000 UTC m=+1044.100910133" observedRunningTime="2025-11-25 15:10:12.205050921 +0000 UTC m=+1044.857193342" watchObservedRunningTime="2025-11-25 15:10:12.212758158 +0000 UTC m=+1044.864900569"
Nov 25 15:10:13 crc kubenswrapper[4806]: I1125 15:10:13.173748 4806 generic.go:334] "Generic (PLEG): container finished" podID="eb6c6179-82f5-4796-a12a-4806c8df1edd" containerID="e88fa8d2f1d00974ed65a7fe1ad98adc013f5773a570d3cbcdf007b5a6a99231" exitCode=0
Nov 25 15:10:13 crc kubenswrapper[4806]: I1125 15:10:13.173857 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9tr2r" event={"ID":"eb6c6179-82f5-4796-a12a-4806c8df1edd","Type":"ContainerDied","Data":"e88fa8d2f1d00974ed65a7fe1ad98adc013f5773a570d3cbcdf007b5a6a99231"}
Nov 25 15:10:14 crc kubenswrapper[4806]: I1125 15:10:14.186947 4806 generic.go:334] "Generic (PLEG): container finished" podID="eb6c6179-82f5-4796-a12a-4806c8df1edd" containerID="484642267aa8a15166c5132d0f74436647e1bd5bfde880c519b659d811d68b96" exitCode=0
Nov 25 15:10:14 crc kubenswrapper[4806]: I1125 15:10:14.187001 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9tr2r" event={"ID":"eb6c6179-82f5-4796-a12a-4806c8df1edd","Type":"ContainerDied","Data":"484642267aa8a15166c5132d0f74436647e1bd5bfde880c519b659d811d68b96"}
Nov 25 15:10:14 crc kubenswrapper[4806]: I1125 15:10:14.241298 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6c7b4b5f48-fv59r"
Nov 25 15:10:15 crc kubenswrapper[4806]: I1125 15:10:15.061712 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-2pzk8"
Nov 25 15:10:15 crc kubenswrapper[4806]: I1125 15:10:15.201258 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9tr2r" event={"ID":"eb6c6179-82f5-4796-a12a-4806c8df1edd","Type":"ContainerStarted","Data":"e9128e6c2b94581f993cdd270bb0f7bf7128d815723f023d51cdcf498b001952"}
Nov 25 15:10:15 crc kubenswrapper[4806]: I1125 15:10:15.201335 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9tr2r" event={"ID":"eb6c6179-82f5-4796-a12a-4806c8df1edd","Type":"ContainerStarted","Data":"01c75e06d6d8b22c7914afdea78a08a5e75171cdfc38fcc4ed76ca64d0115f66"}
Nov 25 15:10:15 crc kubenswrapper[4806]: I1125 15:10:15.201346 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9tr2r" event={"ID":"eb6c6179-82f5-4796-a12a-4806c8df1edd","Type":"ContainerStarted","Data":"397800471d1a105d839228b34a69196a99feac49004fc25e721520530b549ab8"}
Nov 25 15:10:15 crc kubenswrapper[4806]: I1125 15:10:15.201356 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9tr2r" event={"ID":"eb6c6179-82f5-4796-a12a-4806c8df1edd","Type":"ContainerStarted","Data":"c84a6d7e44f0c1ca3a02c8a2584865ddd11e3c735ade54e16cc1effdd95fd2df"}
Nov 25 15:10:16 crc kubenswrapper[4806]: I1125 15:10:16.220504 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9tr2r" event={"ID":"eb6c6179-82f5-4796-a12a-4806c8df1edd","Type":"ContainerStarted","Data":"56409273428d8fa42e30d08c2cafa379dfd244fa3bd6456be10d6fadd19b62c7"}
Nov 25 15:10:16 crc kubenswrapper[4806]: I1125 15:10:16.220572 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9tr2r" event={"ID":"eb6c6179-82f5-4796-a12a-4806c8df1edd","Type":"ContainerStarted","Data":"06979a5d78d58b58933368ab1f89edcc70f25428755c4887c09ca5f8e6fe9f61"}
Nov 25 15:10:16 crc kubenswrapper[4806]: I1125 15:10:16.220773 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-9tr2r"
Nov 25 15:10:16 crc kubenswrapper[4806]: I1125 15:10:16.250064 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-9tr2r" podStartSLOduration=5.93321772 podStartE2EDuration="13.250032569s" podCreationTimestamp="2025-11-25 15:10:03 +0000 UTC" firstStartedPulling="2025-11-25 15:10:04.142718576 +0000 UTC m=+1036.794860987" lastFinishedPulling="2025-11-25 15:10:11.459533425 +0000 UTC m=+1044.111675836" observedRunningTime="2025-11-25 15:10:16.247055355 +0000 UTC m=+1048.899197766" watchObservedRunningTime="2025-11-25 15:10:16.250032569 +0000 UTC m=+1048.902174980"
Nov 25 15:10:18 crc kubenswrapper[4806]: I1125 15:10:18.185568 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-cvhks"]
Nov 25 15:10:18 crc kubenswrapper[4806]: I1125 15:10:18.186921 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-cvhks"
Nov 25 15:10:18 crc kubenswrapper[4806]: I1125 15:10:18.189397 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-8vvgg"
Nov 25 15:10:18 crc kubenswrapper[4806]: I1125 15:10:18.190820 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Nov 25 15:10:18 crc kubenswrapper[4806]: I1125 15:10:18.191072 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Nov 25 15:10:18 crc kubenswrapper[4806]: I1125 15:10:18.223426 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2vd7\" (UniqueName: \"kubernetes.io/projected/088dff44-39aa-496a-a641-232bd6891ebf-kube-api-access-z2vd7\") pod \"openstack-operator-index-cvhks\" (UID: \"088dff44-39aa-496a-a641-232bd6891ebf\") " pod="openstack-operators/openstack-operator-index-cvhks"
Nov 25 15:10:18 crc kubenswrapper[4806]: I1125 15:10:18.239115 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cvhks"]
Nov 25 15:10:18 crc kubenswrapper[4806]: I1125 15:10:18.326140 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2vd7\" (UniqueName: \"kubernetes.io/projected/088dff44-39aa-496a-a641-232bd6891ebf-kube-api-access-z2vd7\") pod \"openstack-operator-index-cvhks\" (UID: \"088dff44-39aa-496a-a641-232bd6891ebf\") " pod="openstack-operators/openstack-operator-index-cvhks"
Nov 25 15:10:18 crc kubenswrapper[4806]: I1125 15:10:18.349416 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2vd7\" (UniqueName: \"kubernetes.io/projected/088dff44-39aa-496a-a641-232bd6891ebf-kube-api-access-z2vd7\") pod \"openstack-operator-index-cvhks\" (UID: \"088dff44-39aa-496a-a641-232bd6891ebf\") " pod="openstack-operators/openstack-operator-index-cvhks"
Nov 25 15:10:18 crc kubenswrapper[4806]: I1125 15:10:18.552995 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-cvhks"
Nov 25 15:10:18 crc kubenswrapper[4806]: I1125 15:10:18.788102 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cvhks"]
Nov 25 15:10:18 crc kubenswrapper[4806]: W1125 15:10:18.799598 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod088dff44_39aa_496a_a641_232bd6891ebf.slice/crio-f1ab929e889a78cafdecd50e3c60fd2e0932712674d1f223a70dd71d12d72c46 WatchSource:0}: Error finding container f1ab929e889a78cafdecd50e3c60fd2e0932712674d1f223a70dd71d12d72c46: Status 404 returned error can't find the container with id f1ab929e889a78cafdecd50e3c60fd2e0932712674d1f223a70dd71d12d72c46
Nov 25 15:10:18 crc kubenswrapper[4806]: I1125 15:10:18.934766 4806 patch_prober.go:28] interesting pod/machine-config-daemon-kclf8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 15:10:18 crc kubenswrapper[4806]: I1125 15:10:18.934848 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 15:10:19 crc kubenswrapper[4806]: I1125 15:10:19.021836 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-9tr2r"
Nov 25 15:10:19 crc kubenswrapper[4806]: I1125 15:10:19.071108 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-9tr2r"
Nov 25 15:10:19 crc kubenswrapper[4806]: I1125 15:10:19.247308 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cvhks" event={"ID":"088dff44-39aa-496a-a641-232bd6891ebf","Type":"ContainerStarted","Data":"f1ab929e889a78cafdecd50e3c60fd2e0932712674d1f223a70dd71d12d72c46"}
Nov 25 15:10:21 crc kubenswrapper[4806]: I1125 15:10:21.557365 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-cvhks"]
Nov 25 15:10:22 crc kubenswrapper[4806]: I1125 15:10:22.303466 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-csjwd"]
Nov 25 15:10:22 crc kubenswrapper[4806]: I1125 15:10:22.305055 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-csjwd"
Nov 25 15:10:22 crc kubenswrapper[4806]: I1125 15:10:22.361385 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-csjwd"]
Nov 25 15:10:22 crc kubenswrapper[4806]: I1125 15:10:22.487930 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwts6\" (UniqueName: \"kubernetes.io/projected/54ffd9a7-4d3c-4e19-855a-8f54e7d9d513-kube-api-access-cwts6\") pod \"openstack-operator-index-csjwd\" (UID: \"54ffd9a7-4d3c-4e19-855a-8f54e7d9d513\") " pod="openstack-operators/openstack-operator-index-csjwd"
Nov 25 15:10:22 crc kubenswrapper[4806]: I1125 15:10:22.590093 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwts6\" (UniqueName: \"kubernetes.io/projected/54ffd9a7-4d3c-4e19-855a-8f54e7d9d513-kube-api-access-cwts6\") pod \"openstack-operator-index-csjwd\" (UID: \"54ffd9a7-4d3c-4e19-855a-8f54e7d9d513\") " pod="openstack-operators/openstack-operator-index-csjwd"
Nov 25 15:10:22 crc kubenswrapper[4806]: I1125 15:10:22.616652 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwts6\" (UniqueName: \"kubernetes.io/projected/54ffd9a7-4d3c-4e19-855a-8f54e7d9d513-kube-api-access-cwts6\") pod \"openstack-operator-index-csjwd\" (UID: \"54ffd9a7-4d3c-4e19-855a-8f54e7d9d513\") " pod="openstack-operators/openstack-operator-index-csjwd"
Nov 25 15:10:22 crc kubenswrapper[4806]: I1125 15:10:22.638388 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-csjwd"
Nov 25 15:10:23 crc kubenswrapper[4806]: I1125 15:10:23.444656 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-6998585d5-j9plw"
Nov 25 15:10:24 crc kubenswrapper[4806]: I1125 15:10:24.024034 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-9tr2r"
Nov 25 15:10:25 crc kubenswrapper[4806]: I1125 15:10:25.642811 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-csjwd"]
Nov 25 15:10:26 crc kubenswrapper[4806]: I1125 15:10:26.310465 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-csjwd" event={"ID":"54ffd9a7-4d3c-4e19-855a-8f54e7d9d513","Type":"ContainerStarted","Data":"2b217227668a4b856c397cd0e64f74e0abbfd8820e59cc489c1cefb939b1b0e4"}
Nov 25 15:10:26 crc kubenswrapper[4806]: I1125 15:10:26.310539 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-csjwd" event={"ID":"54ffd9a7-4d3c-4e19-855a-8f54e7d9d513","Type":"ContainerStarted","Data":"f4f2f739716c7c3fc96a5bff0903575d92ad98dc4b318d32d3ab58af426fb00d"}
Nov 25 15:10:26 crc kubenswrapper[4806]: I1125 15:10:26.326782 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cvhks" event={"ID":"088dff44-39aa-496a-a641-232bd6891ebf","Type":"ContainerStarted","Data":"ac46c1fde2d94a3659252a3ca7cead7a172ec4cb1d3e15c92a7fb3d2bd4b5a0e"}
Nov 25 15:10:26 crc kubenswrapper[4806]: I1125 15:10:26.327057 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-cvhks" podUID="088dff44-39aa-496a-a641-232bd6891ebf" containerName="registry-server" containerID="cri-o://ac46c1fde2d94a3659252a3ca7cead7a172ec4cb1d3e15c92a7fb3d2bd4b5a0e" gracePeriod=2
Nov 25 15:10:26 crc kubenswrapper[4806]: I1125 15:10:26.349153 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-csjwd" podStartSLOduration=4.293910891 podStartE2EDuration="4.349126204s" podCreationTimestamp="2025-11-25 15:10:22 +0000 UTC" firstStartedPulling="2025-11-25 15:10:25.652780342 +0000 UTC m=+1058.304922753" lastFinishedPulling="2025-11-25 15:10:25.707995655 +0000 UTC m=+1058.360138066" observedRunningTime="2025-11-25 15:10:26.341897741 +0000 UTC m=+1058.994040172" watchObservedRunningTime="2025-11-25 15:10:26.349126204 +0000 UTC m=+1059.001268615"
Nov 25 15:10:26 crc kubenswrapper[4806]: I1125 15:10:26.368109 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-cvhks" podStartSLOduration=1.953273892 podStartE2EDuration="8.368086297s" podCreationTimestamp="2025-11-25 15:10:18 +0000 UTC" firstStartedPulling="2025-11-25 15:10:18.802802453 +0000 UTC m=+1051.454944864" lastFinishedPulling="2025-11-25 15:10:25.217614868 +0000 UTC m=+1057.869757269" observedRunningTime="2025-11-25 15:10:26.364347472 +0000 UTC m=+1059.016489903" watchObservedRunningTime="2025-11-25 15:10:26.368086297 +0000 UTC m=+1059.020228708"
Nov 25 15:10:26 crc kubenswrapper[4806]: I1125 15:10:26.777903 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-cvhks"
Nov 25 15:10:26 crc kubenswrapper[4806]: I1125 15:10:26.966884 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2vd7\" (UniqueName: \"kubernetes.io/projected/088dff44-39aa-496a-a641-232bd6891ebf-kube-api-access-z2vd7\") pod \"088dff44-39aa-496a-a641-232bd6891ebf\" (UID: \"088dff44-39aa-496a-a641-232bd6891ebf\") "
Nov 25 15:10:26 crc kubenswrapper[4806]: I1125 15:10:26.976666 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/088dff44-39aa-496a-a641-232bd6891ebf-kube-api-access-z2vd7" (OuterVolumeSpecName: "kube-api-access-z2vd7") pod "088dff44-39aa-496a-a641-232bd6891ebf" (UID: "088dff44-39aa-496a-a641-232bd6891ebf"). InnerVolumeSpecName "kube-api-access-z2vd7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:10:27 crc kubenswrapper[4806]: I1125 15:10:27.068918 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2vd7\" (UniqueName: \"kubernetes.io/projected/088dff44-39aa-496a-a641-232bd6891ebf-kube-api-access-z2vd7\") on node \"crc\" DevicePath \"\""
Nov 25 15:10:27 crc kubenswrapper[4806]: I1125 15:10:27.337404 4806 generic.go:334] "Generic (PLEG): container finished" podID="088dff44-39aa-496a-a641-232bd6891ebf" containerID="ac46c1fde2d94a3659252a3ca7cead7a172ec4cb1d3e15c92a7fb3d2bd4b5a0e" exitCode=0
Nov 25 15:10:27 crc kubenswrapper[4806]: I1125 15:10:27.337473 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cvhks" event={"ID":"088dff44-39aa-496a-a641-232bd6891ebf","Type":"ContainerDied","Data":"ac46c1fde2d94a3659252a3ca7cead7a172ec4cb1d3e15c92a7fb3d2bd4b5a0e"}
Nov 25 15:10:27 crc kubenswrapper[4806]: I1125 15:10:27.337546 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cvhks" event={"ID":"088dff44-39aa-496a-a641-232bd6891ebf","Type":"ContainerDied","Data":"f1ab929e889a78cafdecd50e3c60fd2e0932712674d1f223a70dd71d12d72c46"}
Nov 25 15:10:27 crc kubenswrapper[4806]: I1125 15:10:27.337569 4806 scope.go:117] "RemoveContainer" containerID="ac46c1fde2d94a3659252a3ca7cead7a172ec4cb1d3e15c92a7fb3d2bd4b5a0e"
Nov 25 15:10:27 crc kubenswrapper[4806]: I1125 15:10:27.338093 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-cvhks"
Nov 25 15:10:27 crc kubenswrapper[4806]: I1125 15:10:27.359680 4806 scope.go:117] "RemoveContainer" containerID="ac46c1fde2d94a3659252a3ca7cead7a172ec4cb1d3e15c92a7fb3d2bd4b5a0e"
Nov 25 15:10:27 crc kubenswrapper[4806]: E1125 15:10:27.360509 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac46c1fde2d94a3659252a3ca7cead7a172ec4cb1d3e15c92a7fb3d2bd4b5a0e\": container with ID starting with ac46c1fde2d94a3659252a3ca7cead7a172ec4cb1d3e15c92a7fb3d2bd4b5a0e not found: ID does not exist" containerID="ac46c1fde2d94a3659252a3ca7cead7a172ec4cb1d3e15c92a7fb3d2bd4b5a0e"
Nov 25 15:10:27 crc kubenswrapper[4806]: I1125 15:10:27.360598 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac46c1fde2d94a3659252a3ca7cead7a172ec4cb1d3e15c92a7fb3d2bd4b5a0e"} err="failed to get container status \"ac46c1fde2d94a3659252a3ca7cead7a172ec4cb1d3e15c92a7fb3d2bd4b5a0e\": rpc error: code = NotFound desc = could not find container \"ac46c1fde2d94a3659252a3ca7cead7a172ec4cb1d3e15c92a7fb3d2bd4b5a0e\": container with ID starting with ac46c1fde2d94a3659252a3ca7cead7a172ec4cb1d3e15c92a7fb3d2bd4b5a0e not found: ID does not exist"
Nov 25 15:10:27 crc kubenswrapper[4806]: I1125 15:10:27.372238 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-cvhks"]
Nov 25 15:10:27 crc kubenswrapper[4806]: I1125 15:10:27.378567 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-cvhks"]
Nov 25 15:10:28 crc kubenswrapper[4806]: I1125 15:10:28.098401 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="088dff44-39aa-496a-a641-232bd6891ebf" path="/var/lib/kubelet/pods/088dff44-39aa-496a-a641-232bd6891ebf/volumes"
Nov 25 15:10:32 crc kubenswrapper[4806]: I1125 15:10:32.638610 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-csjwd"
Nov 25 15:10:32 crc kubenswrapper[4806]: I1125 15:10:32.639614 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-csjwd"
Nov 25 15:10:32 crc kubenswrapper[4806]: I1125 15:10:32.675690 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-csjwd"
Nov 25 15:10:33 crc kubenswrapper[4806]: I1125 15:10:33.425979 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-csjwd"
Nov 25 15:10:41 crc kubenswrapper[4806]: I1125 15:10:41.406814 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/d1c59f685edf6520a02ef8c247b7f80eade9502e4f411b8324f7c9afb04dwsg"]
Nov 25 15:10:41 crc kubenswrapper[4806]: E1125 15:10:41.407728 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="088dff44-39aa-496a-a641-232bd6891ebf" containerName="registry-server"
Nov 25 15:10:41 crc kubenswrapper[4806]: I1125 15:10:41.407743 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="088dff44-39aa-496a-a641-232bd6891ebf" containerName="registry-server"
Nov 25 15:10:41 crc kubenswrapper[4806]: I1125 15:10:41.407861 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="088dff44-39aa-496a-a641-232bd6891ebf" containerName="registry-server"
Nov 25 15:10:41 crc kubenswrapper[4806]: I1125 15:10:41.408773 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d1c59f685edf6520a02ef8c247b7f80eade9502e4f411b8324f7c9afb04dwsg"
Nov 25 15:10:41 crc kubenswrapper[4806]: I1125 15:10:41.413124 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-7q6kx"
Nov 25 15:10:41 crc kubenswrapper[4806]: I1125 15:10:41.424173 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d1c59f685edf6520a02ef8c247b7f80eade9502e4f411b8324f7c9afb04dwsg"]
Nov 25 15:10:41 crc kubenswrapper[4806]: I1125 15:10:41.501755 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/916f8aac-10d3-4065-89bc-1d935732c91e-util\") pod \"d1c59f685edf6520a02ef8c247b7f80eade9502e4f411b8324f7c9afb04dwsg\" (UID: \"916f8aac-10d3-4065-89bc-1d935732c91e\") " pod="openstack-operators/d1c59f685edf6520a02ef8c247b7f80eade9502e4f411b8324f7c9afb04dwsg"
Nov 25 15:10:41 crc kubenswrapper[4806]: I1125 15:10:41.501912 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/916f8aac-10d3-4065-89bc-1d935732c91e-bundle\") pod \"d1c59f685edf6520a02ef8c247b7f80eade9502e4f411b8324f7c9afb04dwsg\" (UID: \"916f8aac-10d3-4065-89bc-1d935732c91e\") " pod="openstack-operators/d1c59f685edf6520a02ef8c247b7f80eade9502e4f411b8324f7c9afb04dwsg"
Nov 25 15:10:41 crc kubenswrapper[4806]: I1125 15:10:41.502001 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjphl\" (UniqueName: \"kubernetes.io/projected/916f8aac-10d3-4065-89bc-1d935732c91e-kube-api-access-bjphl\") pod \"d1c59f685edf6520a02ef8c247b7f80eade9502e4f411b8324f7c9afb04dwsg\" (UID: \"916f8aac-10d3-4065-89bc-1d935732c91e\") " pod="openstack-operators/d1c59f685edf6520a02ef8c247b7f80eade9502e4f411b8324f7c9afb04dwsg"
Nov 25 15:10:41 crc kubenswrapper[4806]: I1125 15:10:41.603305 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjphl\" (UniqueName: \"kubernetes.io/projected/916f8aac-10d3-4065-89bc-1d935732c91e-kube-api-access-bjphl\") pod \"d1c59f685edf6520a02ef8c247b7f80eade9502e4f411b8324f7c9afb04dwsg\" (UID: \"916f8aac-10d3-4065-89bc-1d935732c91e\") " pod="openstack-operators/d1c59f685edf6520a02ef8c247b7f80eade9502e4f411b8324f7c9afb04dwsg"
Nov 25 15:10:41 crc kubenswrapper[4806]: I1125 15:10:41.603419 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/916f8aac-10d3-4065-89bc-1d935732c91e-util\") pod \"d1c59f685edf6520a02ef8c247b7f80eade9502e4f411b8324f7c9afb04dwsg\" (UID: \"916f8aac-10d3-4065-89bc-1d935732c91e\") " pod="openstack-operators/d1c59f685edf6520a02ef8c247b7f80eade9502e4f411b8324f7c9afb04dwsg"
Nov 25 15:10:41 crc kubenswrapper[4806]: I1125 15:10:41.603449 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/916f8aac-10d3-4065-89bc-1d935732c91e-bundle\") pod \"d1c59f685edf6520a02ef8c247b7f80eade9502e4f411b8324f7c9afb04dwsg\" (UID: \"916f8aac-10d3-4065-89bc-1d935732c91e\") " pod="openstack-operators/d1c59f685edf6520a02ef8c247b7f80eade9502e4f411b8324f7c9afb04dwsg"
Nov 25 15:10:41 crc kubenswrapper[4806]: I1125 15:10:41.604057 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/916f8aac-10d3-4065-89bc-1d935732c91e-bundle\") pod \"d1c59f685edf6520a02ef8c247b7f80eade9502e4f411b8324f7c9afb04dwsg\" (UID: \"916f8aac-10d3-4065-89bc-1d935732c91e\") " pod="openstack-operators/d1c59f685edf6520a02ef8c247b7f80eade9502e4f411b8324f7c9afb04dwsg"
Nov 25 15:10:41 crc kubenswrapper[4806]: I1125 15:10:41.604206 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/916f8aac-10d3-4065-89bc-1d935732c91e-util\") pod \"d1c59f685edf6520a02ef8c247b7f80eade9502e4f411b8324f7c9afb04dwsg\" (UID: \"916f8aac-10d3-4065-89bc-1d935732c91e\") " pod="openstack-operators/d1c59f685edf6520a02ef8c247b7f80eade9502e4f411b8324f7c9afb04dwsg"
Nov 25 15:10:41 crc kubenswrapper[4806]: I1125 15:10:41.628500 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjphl\" (UniqueName: \"kubernetes.io/projected/916f8aac-10d3-4065-89bc-1d935732c91e-kube-api-access-bjphl\") pod \"d1c59f685edf6520a02ef8c247b7f80eade9502e4f411b8324f7c9afb04dwsg\" (UID: \"916f8aac-10d3-4065-89bc-1d935732c91e\") " pod="openstack-operators/d1c59f685edf6520a02ef8c247b7f80eade9502e4f411b8324f7c9afb04dwsg"
Nov 25 15:10:41 crc kubenswrapper[4806]: I1125 15:10:41.734198 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d1c59f685edf6520a02ef8c247b7f80eade9502e4f411b8324f7c9afb04dwsg"
Nov 25 15:10:42 crc kubenswrapper[4806]: I1125 15:10:42.178569 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d1c59f685edf6520a02ef8c247b7f80eade9502e4f411b8324f7c9afb04dwsg"]
Nov 25 15:10:42 crc kubenswrapper[4806]: W1125 15:10:42.185168 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod916f8aac_10d3_4065_89bc_1d935732c91e.slice/crio-0cf754857aa3b50940a510a468ca3d007fe38099d2b6dd5995e64e76f2d317f6 WatchSource:0}: Error finding container 0cf754857aa3b50940a510a468ca3d007fe38099d2b6dd5995e64e76f2d317f6: Status 404 returned error can't find the container with id 0cf754857aa3b50940a510a468ca3d007fe38099d2b6dd5995e64e76f2d317f6
Nov 25 15:10:42 crc kubenswrapper[4806]: I1125 15:10:42.479477 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d1c59f685edf6520a02ef8c247b7f80eade9502e4f411b8324f7c9afb04dwsg" event={"ID":"916f8aac-10d3-4065-89bc-1d935732c91e","Type":"ContainerStarted","Data":"0cf754857aa3b50940a510a468ca3d007fe38099d2b6dd5995e64e76f2d317f6"}
Nov 25 15:10:43 crc kubenswrapper[4806]: I1125 15:10:43.489158 4806 generic.go:334] "Generic (PLEG): container finished" podID="916f8aac-10d3-4065-89bc-1d935732c91e" containerID="b2a487b808802b7c1ce8cd7b6ed0260052d6094b99d0d641894cce6a68e0f9a7" exitCode=0
Nov 25 15:10:43 crc kubenswrapper[4806]: I1125 15:10:43.489406 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d1c59f685edf6520a02ef8c247b7f80eade9502e4f411b8324f7c9afb04dwsg" event={"ID":"916f8aac-10d3-4065-89bc-1d935732c91e","Type":"ContainerDied","Data":"b2a487b808802b7c1ce8cd7b6ed0260052d6094b99d0d641894cce6a68e0f9a7"}
Nov 25 15:10:43 crc kubenswrapper[4806]: I1125 15:10:43.491714 4806 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 25 15:10:47 crc kubenswrapper[4806]: I1125 15:10:47.528712 4806 generic.go:334] "Generic (PLEG): container finished" podID="916f8aac-10d3-4065-89bc-1d935732c91e" containerID="0f697626492d2cd1a648e31f9c3e8b9c2d5bb12d253873c34a4a2ef4011f0e6c" exitCode=0
Nov 25 15:10:47 crc kubenswrapper[4806]: I1125 15:10:47.528828 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d1c59f685edf6520a02ef8c247b7f80eade9502e4f411b8324f7c9afb04dwsg" event={"ID":"916f8aac-10d3-4065-89bc-1d935732c91e","Type":"ContainerDied","Data":"0f697626492d2cd1a648e31f9c3e8b9c2d5bb12d253873c34a4a2ef4011f0e6c"}
Nov 25 15:10:48 crc kubenswrapper[4806]: I1125 15:10:48.541206 4806 generic.go:334] "Generic (PLEG): container finished" podID="916f8aac-10d3-4065-89bc-1d935732c91e" containerID="9ad601be963cfd766edaa118a2c970e487fc95c6bbe6982813a1f36d8aef7ab5" exitCode=0
Nov 25 15:10:48 crc kubenswrapper[4806]: I1125 15:10:48.541268 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d1c59f685edf6520a02ef8c247b7f80eade9502e4f411b8324f7c9afb04dwsg" event={"ID":"916f8aac-10d3-4065-89bc-1d935732c91e","Type":"ContainerDied","Data":"9ad601be963cfd766edaa118a2c970e487fc95c6bbe6982813a1f36d8aef7ab5"}
Nov 25 15:10:48 crc kubenswrapper[4806]: I1125 15:10:48.934772 4806 patch_prober.go:28] interesting pod/machine-config-daemon-kclf8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 15:10:48 crc kubenswrapper[4806]: I1125 15:10:48.934842 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 15:10:48 crc kubenswrapper[4806]: I1125 15:10:48.934921 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kclf8"
Nov 25 15:10:48 crc kubenswrapper[4806]: I1125 15:10:48.935569 4806 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"83d1d99b89679065a33ab9c018ccbf4f6cc67e15cf7be7b0e62af90abdf246e5"} pod="openshift-machine-config-operator/machine-config-daemon-kclf8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 25 15:10:48 crc kubenswrapper[4806]: I1125 15:10:48.935632 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" containerID="cri-o://83d1d99b89679065a33ab9c018ccbf4f6cc67e15cf7be7b0e62af90abdf246e5" gracePeriod=600
Nov 25 15:10:49 crc kubenswrapper[4806]: I1125 15:10:49.555992 4806 generic.go:334] "Generic (PLEG): container finished" podID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerID="83d1d99b89679065a33ab9c018ccbf4f6cc67e15cf7be7b0e62af90abdf246e5" exitCode=0
Nov 25 15:10:49 crc kubenswrapper[4806]: I1125 15:10:49.556097 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" event={"ID":"39baff20-1e9a-48b1-8872-155c5ad5931d","Type":"ContainerDied","Data":"83d1d99b89679065a33ab9c018ccbf4f6cc67e15cf7be7b0e62af90abdf246e5"}
Nov 25 15:10:49 crc kubenswrapper[4806]: I1125 15:10:49.557192 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" event={"ID":"39baff20-1e9a-48b1-8872-155c5ad5931d","Type":"ContainerStarted","Data":"75eea6826a6ffacea752085907b10e49f430f92ba1940f02d0b4f30e4a305fc4"}
Nov 25 15:10:49 crc kubenswrapper[4806]: I1125 15:10:49.557242 4806 scope.go:117] "RemoveContainer" containerID="86d8b6d9b2cb5c32be187803dad37de53c56e8b8e0993ab0429e9374ef8c5d27"
Nov 25 15:10:49 crc kubenswrapper[4806]: I1125 15:10:49.891071 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d1c59f685edf6520a02ef8c247b7f80eade9502e4f411b8324f7c9afb04dwsg"
Nov 25 15:10:50 crc kubenswrapper[4806]: I1125 15:10:50.048890 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjphl\" (UniqueName: \"kubernetes.io/projected/916f8aac-10d3-4065-89bc-1d935732c91e-kube-api-access-bjphl\") pod \"916f8aac-10d3-4065-89bc-1d935732c91e\" (UID: \"916f8aac-10d3-4065-89bc-1d935732c91e\") "
Nov 25 15:10:50 crc kubenswrapper[4806]: I1125 15:10:50.049021 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/916f8aac-10d3-4065-89bc-1d935732c91e-bundle\") pod \"916f8aac-10d3-4065-89bc-1d935732c91e\" (UID: \"916f8aac-10d3-4065-89bc-1d935732c91e\") "
Nov 25 15:10:50 crc kubenswrapper[4806]: I1125 15:10:50.049201 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/916f8aac-10d3-4065-89bc-1d935732c91e-util\") pod \"916f8aac-10d3-4065-89bc-1d935732c91e\" (UID: \"916f8aac-10d3-4065-89bc-1d935732c91e\") "
Nov 25 15:10:50 crc kubenswrapper[4806]: I1125 15:10:50.050214 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/916f8aac-10d3-4065-89bc-1d935732c91e-bundle" (OuterVolumeSpecName: "bundle") pod "916f8aac-10d3-4065-89bc-1d935732c91e" (UID: "916f8aac-10d3-4065-89bc-1d935732c91e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 15:10:50 crc kubenswrapper[4806]: I1125 15:10:50.057998 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/916f8aac-10d3-4065-89bc-1d935732c91e-kube-api-access-bjphl" (OuterVolumeSpecName: "kube-api-access-bjphl") pod "916f8aac-10d3-4065-89bc-1d935732c91e" (UID: "916f8aac-10d3-4065-89bc-1d935732c91e"). InnerVolumeSpecName "kube-api-access-bjphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:10:50 crc kubenswrapper[4806]: I1125 15:10:50.062642 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/916f8aac-10d3-4065-89bc-1d935732c91e-util" (OuterVolumeSpecName: "util") pod "916f8aac-10d3-4065-89bc-1d935732c91e" (UID: "916f8aac-10d3-4065-89bc-1d935732c91e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 15:10:50 crc kubenswrapper[4806]: I1125 15:10:50.151383 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjphl\" (UniqueName: \"kubernetes.io/projected/916f8aac-10d3-4065-89bc-1d935732c91e-kube-api-access-bjphl\") on node \"crc\" DevicePath \"\""
Nov 25 15:10:50 crc kubenswrapper[4806]: I1125 15:10:50.151520 4806 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/916f8aac-10d3-4065-89bc-1d935732c91e-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 15:10:50 crc kubenswrapper[4806]: I1125 15:10:50.151540 4806 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/916f8aac-10d3-4065-89bc-1d935732c91e-util\") on node \"crc\" DevicePath \"\""
Nov 25 15:10:50 crc kubenswrapper[4806]: I1125 15:10:50.568617 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d1c59f685edf6520a02ef8c247b7f80eade9502e4f411b8324f7c9afb04dwsg" event={"ID":"916f8aac-10d3-4065-89bc-1d935732c91e","Type":"ContainerDied","Data":"0cf754857aa3b50940a510a468ca3d007fe38099d2b6dd5995e64e76f2d317f6"}
Nov 25 15:10:50 crc kubenswrapper[4806]: I1125 15:10:50.568704 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cf754857aa3b50940a510a468ca3d007fe38099d2b6dd5995e64e76f2d317f6"
Nov 25 15:10:50 crc kubenswrapper[4806]: I1125 15:10:50.568660 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d1c59f685edf6520a02ef8c247b7f80eade9502e4f411b8324f7c9afb04dwsg"
Nov 25 15:10:54 crc kubenswrapper[4806]: I1125 15:10:54.054832 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-779bfcf6cb-zxvzf"]
Nov 25 15:10:54 crc kubenswrapper[4806]: E1125 15:10:54.056191 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="916f8aac-10d3-4065-89bc-1d935732c91e" containerName="pull"
Nov 25 15:10:54 crc kubenswrapper[4806]: I1125 15:10:54.056218 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="916f8aac-10d3-4065-89bc-1d935732c91e" containerName="pull"
Nov 25 15:10:54 crc kubenswrapper[4806]: E1125 15:10:54.056230 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="916f8aac-10d3-4065-89bc-1d935732c91e" containerName="extract"
Nov 25 15:10:54 crc kubenswrapper[4806]: I1125 15:10:54.056237 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="916f8aac-10d3-4065-89bc-1d935732c91e" containerName="extract"
Nov 25 15:10:54 crc kubenswrapper[4806]: E1125 15:10:54.056253 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="916f8aac-10d3-4065-89bc-1d935732c91e" containerName="util"
Nov 25 15:10:54 crc kubenswrapper[4806]: I1125 15:10:54.056260 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="916f8aac-10d3-4065-89bc-1d935732c91e" containerName="util"
Nov 25 15:10:54 crc kubenswrapper[4806]: I1125 15:10:54.056441 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="916f8aac-10d3-4065-89bc-1d935732c91e" containerName="extract"
Nov 25 15:10:54 crc kubenswrapper[4806]: I1125 15:10:54.057093 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-779bfcf6cb-zxvzf"
Nov 25 15:10:54 crc kubenswrapper[4806]: I1125 15:10:54.062019 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-hhdgn"
Nov 25 15:10:54 crc kubenswrapper[4806]: I1125 15:10:54.085127 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-779bfcf6cb-zxvzf"]
Nov 25 15:10:54 crc kubenswrapper[4806]: I1125 15:10:54.217163 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99mg7\" (UniqueName: \"kubernetes.io/projected/8fe87500-5164-48de-a495-f6d74b05b7f9-kube-api-access-99mg7\") pod \"openstack-operator-controller-operator-779bfcf6cb-zxvzf\" (UID: \"8fe87500-5164-48de-a495-f6d74b05b7f9\") " pod="openstack-operators/openstack-operator-controller-operator-779bfcf6cb-zxvzf"
Nov 25 15:10:54 crc kubenswrapper[4806]: I1125 15:10:54.319197 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99mg7\" (UniqueName: \"kubernetes.io/projected/8fe87500-5164-48de-a495-f6d74b05b7f9-kube-api-access-99mg7\") pod \"openstack-operator-controller-operator-779bfcf6cb-zxvzf\" (UID: \"8fe87500-5164-48de-a495-f6d74b05b7f9\") " pod="openstack-operators/openstack-operator-controller-operator-779bfcf6cb-zxvzf"
Nov 25 15:10:54 crc kubenswrapper[4806]: I1125 15:10:54.342338 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99mg7\" (UniqueName: \"kubernetes.io/projected/8fe87500-5164-48de-a495-f6d74b05b7f9-kube-api-access-99mg7\") pod \"openstack-operator-controller-operator-779bfcf6cb-zxvzf\" (UID: \"8fe87500-5164-48de-a495-f6d74b05b7f9\") " pod="openstack-operators/openstack-operator-controller-operator-779bfcf6cb-zxvzf"
Nov 25 15:10:54 crc kubenswrapper[4806]: I1125 15:10:54.423233 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-779bfcf6cb-zxvzf"
Nov 25 15:10:54 crc kubenswrapper[4806]: I1125 15:10:54.737998 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-779bfcf6cb-zxvzf"]
Nov 25 15:10:55 crc kubenswrapper[4806]: I1125 15:10:55.626725 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-779bfcf6cb-zxvzf" event={"ID":"8fe87500-5164-48de-a495-f6d74b05b7f9","Type":"ContainerStarted","Data":"f289c011209f20b12825ae123bd515d946f6196bb8fab54f7391971afc53f69e"}
Nov 25 15:11:00 crc kubenswrapper[4806]: I1125 15:11:00.672690 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-779bfcf6cb-zxvzf" event={"ID":"8fe87500-5164-48de-a495-f6d74b05b7f9","Type":"ContainerStarted","Data":"ada592e4c2506aff56f8a5b7ebaf6e416e1835db21b9704c36fc651546129603"}
Nov 25 15:11:00 crc kubenswrapper[4806]: I1125 15:11:00.673614 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-779bfcf6cb-zxvzf"
Nov 25 15:11:00 crc kubenswrapper[4806]: I1125 15:11:00.707633 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-779bfcf6cb-zxvzf" podStartSLOduration=1.860488597 podStartE2EDuration="6.707608794s" podCreationTimestamp="2025-11-25 15:10:54 +0000 UTC" firstStartedPulling="2025-11-25 15:10:54.749602122 +0000 UTC m=+1087.401744533" lastFinishedPulling="2025-11-25 15:10:59.596722319 +0000 UTC m=+1092.248864730" observedRunningTime="2025-11-25 15:11:00.705287329 +0000 UTC m=+1093.357429740" watchObservedRunningTime="2025-11-25 15:11:00.707608794 +0000 UTC m=+1093.359751205"
Nov 25 15:11:04 crc kubenswrapper[4806]: I1125 15:11:04.427810 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-779bfcf6cb-zxvzf"
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.586154 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-w6686"]
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.588710 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-w6686"
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.590621 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-qk9m2"]
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.591955 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-qk9m2"
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.597819 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-74svh"
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.597904 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-44twg"
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.611796 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-wfsxk"]
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.613534 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-wfsxk"
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.617884 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-w6686"]
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.618374 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-rzc8k"
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.645173 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-qk9m2"]
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.706587 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-wfsxk"]
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.713642 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-r8dnj"]
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.715446 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-r8dnj"
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.715894 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbvkv\" (UniqueName: \"kubernetes.io/projected/537dc134-0732-4dfc-b0be-9c16d3d191be-kube-api-access-dbvkv\") pod \"barbican-operator-controller-manager-86dc4d89c8-qk9m2\" (UID: \"537dc134-0732-4dfc-b0be-9c16d3d191be\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-qk9m2"
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.715961 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwbmh\" (UniqueName: \"kubernetes.io/projected/de253966-f7ff-485f-8108-b8ee0fd795bf-kube-api-access-vwbmh\") pod \"designate-operator-controller-manager-7d695c9b56-wfsxk\" (UID: \"de253966-f7ff-485f-8108-b8ee0fd795bf\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-wfsxk"
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.715994 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtdwb\" (UniqueName: \"kubernetes.io/projected/40a580de-1093-4adc-a98c-e18202bee9e3-kube-api-access-dtdwb\") pod \"cinder-operator-controller-manager-79856dc55c-w6686\" (UID: \"40a580de-1093-4adc-a98c-e18202bee9e3\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-w6686"
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.722456 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-qlfgw"
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.726597 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-jcrbm"]
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.728371 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-774b86978c-jcrbm"
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.737155 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-4tj5m"
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.746717 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-r8dnj"]
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.750017 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-jcrbm"]
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.757849 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-h9qg8"]
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.759666 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-h9qg8"
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.767882 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-vf5g4"
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.771715 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-h9qg8"]
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.780405 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-xlzgr"]
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.782472 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-xlzgr"
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.787961 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.788736 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-5p6jp"
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.810813 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-xlzgr"]
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.817904 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd72d\" (UniqueName: \"kubernetes.io/projected/fbf78fa8-8b88-454e-a7dc-0e75f463bc45-kube-api-access-jd72d\") pod \"glance-operator-controller-manager-68b95954c9-r8dnj\" (UID: \"fbf78fa8-8b88-454e-a7dc-0e75f463bc45\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-r8dnj"
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.817992 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbvkv\" (UniqueName: \"kubernetes.io/projected/537dc134-0732-4dfc-b0be-9c16d3d191be-kube-api-access-dbvkv\") pod \"barbican-operator-controller-manager-86dc4d89c8-qk9m2\" (UID: \"537dc134-0732-4dfc-b0be-9c16d3d191be\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-qk9m2"
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.818027 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwbmh\" (UniqueName: \"kubernetes.io/projected/de253966-f7ff-485f-8108-b8ee0fd795bf-kube-api-access-vwbmh\") pod \"designate-operator-controller-manager-7d695c9b56-wfsxk\" (UID: \"de253966-f7ff-485f-8108-b8ee0fd795bf\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-wfsxk"
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.818058 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtdwb\" (UniqueName: \"kubernetes.io/projected/40a580de-1093-4adc-a98c-e18202bee9e3-kube-api-access-dtdwb\") pod \"cinder-operator-controller-manager-79856dc55c-w6686\" (UID: \"40a580de-1093-4adc-a98c-e18202bee9e3\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-w6686"
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.839732 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-q6z52"]
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.841425 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-q6z52"
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.847215 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-ppbgp"
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.853634 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbvkv\" (UniqueName: \"kubernetes.io/projected/537dc134-0732-4dfc-b0be-9c16d3d191be-kube-api-access-dbvkv\") pod \"barbican-operator-controller-manager-86dc4d89c8-qk9m2\" (UID: \"537dc134-0732-4dfc-b0be-9c16d3d191be\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-qk9m2"
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.861899 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-w5r5m"]
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.862786 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtdwb\" (UniqueName: \"kubernetes.io/projected/40a580de-1093-4adc-a98c-e18202bee9e3-kube-api-access-dtdwb\") pod \"cinder-operator-controller-manager-79856dc55c-w6686\" (UID: \"40a580de-1093-4adc-a98c-e18202bee9e3\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-w6686"
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.866030 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-q6z52"]
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.866435 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-w5r5m"
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.893781 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwbmh\" (UniqueName: \"kubernetes.io/projected/de253966-f7ff-485f-8108-b8ee0fd795bf-kube-api-access-vwbmh\") pod \"designate-operator-controller-manager-7d695c9b56-wfsxk\" (UID: \"de253966-f7ff-485f-8108-b8ee0fd795bf\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-wfsxk"
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.922887 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-dcx9r"
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.932198 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-xlzgr\" (UID: \"e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-xlzgr"
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.932292 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjjwc\" (UniqueName: \"kubernetes.io/projected/461ceb26-b86c-4bb8-9550-131351dfa3e5-kube-api-access-sjjwc\") pod \"horizon-operator-controller-manager-68c9694994-h9qg8\" (UID: \"461ceb26-b86c-4bb8-9550-131351dfa3e5\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-h9qg8"
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.932350 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72p9c\" (UniqueName: \"kubernetes.io/projected/e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329-kube-api-access-72p9c\") pod \"infra-operator-controller-manager-d5cc86f4b-xlzgr\" (UID: \"e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-xlzgr"
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.932396 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jd72d\" (UniqueName: \"kubernetes.io/projected/fbf78fa8-8b88-454e-a7dc-0e75f463bc45-kube-api-access-jd72d\") pod \"glance-operator-controller-manager-68b95954c9-r8dnj\" (UID: \"fbf78fa8-8b88-454e-a7dc-0e75f463bc45\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-r8dnj"
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.932443 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49rcg\" (UniqueName: \"kubernetes.io/projected/8294cfe0-6c14-49bc-bd5b-d614a68893ce-kube-api-access-49rcg\") pod \"heat-operator-controller-manager-774b86978c-jcrbm\" (UID: \"8294cfe0-6c14-49bc-bd5b-d614a68893ce\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-jcrbm"
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.932498 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktdq9\" (UniqueName: \"kubernetes.io/projected/61457634-dc4d-4ad9-9bdc-c95aae5df022-kube-api-access-ktdq9\") pod \"keystone-operator-controller-manager-748dc6576f-w5r5m\" (UID: \"61457634-dc4d-4ad9-9bdc-c95aae5df022\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-w5r5m"
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.936872 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-w6686"
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.944424 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-qk9m2"
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.962394 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-wfsxk"
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.974887 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-w5r5m"]
Nov 25 15:11:37 crc kubenswrapper[4806]: I1125 15:11:37.993452 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jd72d\" (UniqueName: \"kubernetes.io/projected/fbf78fa8-8b88-454e-a7dc-0e75f463bc45-kube-api-access-jd72d\") pod \"glance-operator-controller-manager-68b95954c9-r8dnj\" (UID: \"fbf78fa8-8b88-454e-a7dc-0e75f463bc45\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-r8dnj"
Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.060620 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjjwc\" (UniqueName: \"kubernetes.io/projected/461ceb26-b86c-4bb8-9550-131351dfa3e5-kube-api-access-sjjwc\") pod \"horizon-operator-controller-manager-68c9694994-h9qg8\" (UID: \"461ceb26-b86c-4bb8-9550-131351dfa3e5\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-h9qg8"
Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.073584 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72p9c\" (UniqueName: \"kubernetes.io/projected/e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329-kube-api-access-72p9c\") pod \"infra-operator-controller-manager-d5cc86f4b-xlzgr\" (UID: \"e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-xlzgr"
Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.073858 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49rcg\" (UniqueName: \"kubernetes.io/projected/8294cfe0-6c14-49bc-bd5b-d614a68893ce-kube-api-access-49rcg\") pod \"heat-operator-controller-manager-774b86978c-jcrbm\" (UID: \"8294cfe0-6c14-49bc-bd5b-d614a68893ce\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-jcrbm"
Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.074044 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktdq9\" (UniqueName: \"kubernetes.io/projected/61457634-dc4d-4ad9-9bdc-c95aae5df022-kube-api-access-ktdq9\") pod \"keystone-operator-controller-manager-748dc6576f-w5r5m\" (UID: \"61457634-dc4d-4ad9-9bdc-c95aae5df022\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-w5r5m"
Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.074346 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtqf7\" (UniqueName: \"kubernetes.io/projected/ec8a3bcc-2127-44bc-8f89-db3ece24a9b9-kube-api-access-dtqf7\") pod \"ironic-operator-controller-manager-5bfcdc958c-q6z52\" (UID: \"ec8a3bcc-2127-44bc-8f89-db3ece24a9b9\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-q6z52"
Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.074457 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-xlzgr\" (UID: \"e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-xlzgr"
Nov 25 15:11:38 crc kubenswrapper[4806]: E1125 15:11:38.074729 4806 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Nov 25 15:11:38 crc kubenswrapper[4806]: E1125 15:11:38.074896 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329-cert podName:e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329 nodeName:}" failed. No retries permitted until 2025-11-25 15:11:38.574865069 +0000 UTC m=+1131.227007480 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329-cert") pod "infra-operator-controller-manager-d5cc86f4b-xlzgr" (UID: "e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329") : secret "infra-operator-webhook-server-cert" not found
Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.079279 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-r8dnj"
Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.113937 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktdq9\" (UniqueName: \"kubernetes.io/projected/61457634-dc4d-4ad9-9bdc-c95aae5df022-kube-api-access-ktdq9\") pod \"keystone-operator-controller-manager-748dc6576f-w5r5m\" (UID: \"61457634-dc4d-4ad9-9bdc-c95aae5df022\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-w5r5m"
Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.117344 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjjwc\" (UniqueName: \"kubernetes.io/projected/461ceb26-b86c-4bb8-9550-131351dfa3e5-kube-api-access-sjjwc\") pod \"horizon-operator-controller-manager-68c9694994-h9qg8\" (UID: \"461ceb26-b86c-4bb8-9550-131351dfa3e5\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-h9qg8"
Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.122548 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72p9c\" (UniqueName: \"kubernetes.io/projected/e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329-kube-api-access-72p9c\") pod \"infra-operator-controller-manager-d5cc86f4b-xlzgr\" (UID: \"e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-xlzgr"
Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.151233 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49rcg\" (UniqueName: \"kubernetes.io/projected/8294cfe0-6c14-49bc-bd5b-d614a68893ce-kube-api-access-49rcg\") pod \"heat-operator-controller-manager-774b86978c-jcrbm\" (UID: \"8294cfe0-6c14-49bc-bd5b-d614a68893ce\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-jcrbm"
Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.178412
4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtqf7\" (UniqueName: \"kubernetes.io/projected/ec8a3bcc-2127-44bc-8f89-db3ece24a9b9-kube-api-access-dtqf7\") pod \"ironic-operator-controller-manager-5bfcdc958c-q6z52\" (UID: \"ec8a3bcc-2127-44bc-8f89-db3ece24a9b9\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-q6z52" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.214130 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtqf7\" (UniqueName: \"kubernetes.io/projected/ec8a3bcc-2127-44bc-8f89-db3ece24a9b9-kube-api-access-dtqf7\") pod \"ironic-operator-controller-manager-5bfcdc958c-q6z52\" (UID: \"ec8a3bcc-2127-44bc-8f89-db3ece24a9b9\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-q6z52" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.216536 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-bwwh4"] Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.227885 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-9thxp"] Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.228405 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-bwwh4" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.230343 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-bwwh4"] Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.230377 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c5xhr"] Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.231614 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-9thxp" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.232186 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-wfhhn"] Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.232269 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-q54pm" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.232286 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c5xhr" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.233432 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-9thxp"] Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.233495 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-wfhhn" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.235530 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-797w2" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.235682 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-g2qnn" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.235767 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-th9t9" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.240768 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-wfhhn"] Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.246759 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-cqwgq"] Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.247230 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-q6z52" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.248466 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-cqwgq" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.250384 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-t9sgb" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.258790 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c5xhr"] Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.263498 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-w5r5m" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.272391 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-cqwgq"] Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.277875 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-d7m7g"] Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.279299 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-d7m7g" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.280009 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4vwj\" (UniqueName: \"kubernetes.io/projected/63efe3dc-03df-4494-9661-9a23a89c0974-kube-api-access-x4vwj\") pod \"nova-operator-controller-manager-79556f57fc-wfhhn\" (UID: \"63efe3dc-03df-4494-9661-9a23a89c0974\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-wfhhn" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.280056 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f8d4\" (UniqueName: \"kubernetes.io/projected/c1159ae9-b734-4012-b746-35d037ee4817-kube-api-access-6f8d4\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-9thxp\" (UID: \"c1159ae9-b734-4012-b746-35d037ee4817\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-9thxp" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.280079 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hfng\" (UniqueName: \"kubernetes.io/projected/2a080dd6-0904-4756-8b02-39d10465fea2-kube-api-access-6hfng\") pod \"octavia-operator-controller-manager-fd75fd47d-cqwgq\" (UID: \"2a080dd6-0904-4756-8b02-39d10465fea2\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-cqwgq" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.280146 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m686m\" (UniqueName: \"kubernetes.io/projected/9cc0ebc5-e3d4-4bae-8b33-032d950705ff-kube-api-access-m686m\") pod \"manila-operator-controller-manager-58bb8d67cc-bwwh4\" (UID: \"9cc0ebc5-e3d4-4bae-8b33-032d950705ff\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-bwwh4" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.280172 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw6zc\" (UniqueName: \"kubernetes.io/projected/d2f4f05a-5ae5-4f49-87f2-a1e642ee0ac7-kube-api-access-vw6zc\") pod \"neutron-operator-controller-manager-7c57c8bbc4-c5xhr\" (UID: \"d2f4f05a-5ae5-4f49-87f2-a1e642ee0ac7\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c5xhr" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.284054 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.284428 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-qx59x" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.288069 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-d7m7g"] Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.294953 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tzsbk"] Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.296643 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tzsbk" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.301721 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pxx5w"] Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.303659 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pxx5w" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.308919 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-fxzwv"] Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.310781 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-fxzwv" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.319425 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-687f46fc78-xdmx6"] Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.321835 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-687f46fc78-xdmx6" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.333930 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-4t4gc" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.334080 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-hwn8l" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.334267 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-h78l8" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.335662 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-hflzm" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.343654 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-wnx44"] Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.346052 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cb74df96-wnx44" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.354979 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-95vcl" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.367847 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tzsbk"] Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.376333 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-774b86978c-jcrbm" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.376919 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pxx5w"] Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.388345 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v9k5\" (UniqueName: \"kubernetes.io/projected/b3220f94-14c9-4820-9d1b-6b4bb1b635fd-kube-api-access-2v9k5\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-d7m7g\" (UID: \"b3220f94-14c9-4820-9d1b-6b4bb1b635fd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-d7m7g" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.388767 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m686m\" (UniqueName: \"kubernetes.io/projected/9cc0ebc5-e3d4-4bae-8b33-032d950705ff-kube-api-access-m686m\") pod \"manila-operator-controller-manager-58bb8d67cc-bwwh4\" (UID: \"9cc0ebc5-e3d4-4bae-8b33-032d950705ff\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-bwwh4" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.389393 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vw6zc\" (UniqueName: \"kubernetes.io/projected/d2f4f05a-5ae5-4f49-87f2-a1e642ee0ac7-kube-api-access-vw6zc\") pod \"neutron-operator-controller-manager-7c57c8bbc4-c5xhr\" (UID: \"d2f4f05a-5ae5-4f49-87f2-a1e642ee0ac7\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c5xhr" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.417465 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7k8x\" (UniqueName: \"kubernetes.io/projected/4877ab9d-8cd3-4270-915f-c73167e93b49-kube-api-access-z7k8x\") pod \"test-operator-controller-manager-5cb74df96-wnx44\" (UID: \"4877ab9d-8cd3-4270-915f-c73167e93b49\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-wnx44" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.417542 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2x76\" (UniqueName: \"kubernetes.io/projected/24cfe3fd-9b1a-4b9a-9b99-1b089fa2124b-kube-api-access-s2x76\") pod \"placement-operator-controller-manager-5db546f9d9-fxzwv\" (UID: \"24cfe3fd-9b1a-4b9a-9b99-1b089fa2124b\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-fxzwv" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.417670 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhrjm\" (UniqueName: \"kubernetes.io/projected/9dc1bbe2-49c1-4601-9acf-b1887426fdd0-kube-api-access-rhrjm\") pod \"ovn-operator-controller-manager-66cf5c67ff-tzsbk\" (UID: \"9dc1bbe2-49c1-4601-9acf-b1887426fdd0\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tzsbk" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.417698 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjz6q\" (UniqueName: \"kubernetes.io/projected/dbedcc0b-12de-4497-a9f3-a9df6c88a74f-kube-api-access-mjz6q\") pod \"telemetry-operator-controller-manager-687f46fc78-xdmx6\" (UID: \"dbedcc0b-12de-4497-a9f3-a9df6c88a74f\") " 
pod="openstack-operators/telemetry-operator-controller-manager-687f46fc78-xdmx6" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.417857 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4vwj\" (UniqueName: \"kubernetes.io/projected/63efe3dc-03df-4494-9661-9a23a89c0974-kube-api-access-x4vwj\") pod \"nova-operator-controller-manager-79556f57fc-wfhhn\" (UID: \"63efe3dc-03df-4494-9661-9a23a89c0974\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-wfhhn" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.417924 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f8d4\" (UniqueName: \"kubernetes.io/projected/c1159ae9-b734-4012-b746-35d037ee4817-kube-api-access-6f8d4\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-9thxp\" (UID: \"c1159ae9-b734-4012-b746-35d037ee4817\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-9thxp" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.417966 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hfng\" (UniqueName: \"kubernetes.io/projected/2a080dd6-0904-4756-8b02-39d10465fea2-kube-api-access-6hfng\") pod \"octavia-operator-controller-manager-fd75fd47d-cqwgq\" (UID: \"2a080dd6-0904-4756-8b02-39d10465fea2\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-cqwgq" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.418036 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b3220f94-14c9-4820-9d1b-6b4bb1b635fd-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-d7m7g\" (UID: \"b3220f94-14c9-4820-9d1b-6b4bb1b635fd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-d7m7g" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.390432 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-h9qg8" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.399156 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-687f46fc78-xdmx6"] Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.420654 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-fxzwv"] Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.420675 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-b7g79"] Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.422685 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-864885998-b7g79" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.425047 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w767w\" (UniqueName: \"kubernetes.io/projected/1df7970b-bed8-4e27-b04b-66e513683875-kube-api-access-w767w\") pod \"swift-operator-controller-manager-6fdc4fcf86-pxx5w\" (UID: \"1df7970b-bed8-4e27-b04b-66e513683875\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pxx5w" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.433488 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-wnx44"] Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.434829 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-6xrjb" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.438145 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-b7g79"] Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.451484 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vw6zc\" (UniqueName: \"kubernetes.io/projected/d2f4f05a-5ae5-4f49-87f2-a1e642ee0ac7-kube-api-access-vw6zc\") pod \"neutron-operator-controller-manager-7c57c8bbc4-c5xhr\" (UID: \"d2f4f05a-5ae5-4f49-87f2-a1e642ee0ac7\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c5xhr" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.452587 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m686m\" (UniqueName: \"kubernetes.io/projected/9cc0ebc5-e3d4-4bae-8b33-032d950705ff-kube-api-access-m686m\") pod \"manila-operator-controller-manager-58bb8d67cc-bwwh4\" (UID: \"9cc0ebc5-e3d4-4bae-8b33-032d950705ff\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-bwwh4" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.454943 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4vwj\" (UniqueName: \"kubernetes.io/projected/63efe3dc-03df-4494-9661-9a23a89c0974-kube-api-access-x4vwj\") pod \"nova-operator-controller-manager-79556f57fc-wfhhn\" (UID: \"63efe3dc-03df-4494-9661-9a23a89c0974\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-wfhhn" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.460492 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hfng\" (UniqueName: \"kubernetes.io/projected/2a080dd6-0904-4756-8b02-39d10465fea2-kube-api-access-6hfng\") pod \"octavia-operator-controller-manager-fd75fd47d-cqwgq\" (UID: \"2a080dd6-0904-4756-8b02-39d10465fea2\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-cqwgq" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.464481 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f8d4\" (UniqueName: \"kubernetes.io/projected/c1159ae9-b734-4012-b746-35d037ee4817-kube-api-access-6f8d4\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-9thxp\" (UID: \"c1159ae9-b734-4012-b746-35d037ee4817\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-9thxp" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.482248 4806 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/openstack-operator-controller-manager-7c468db9ff-2r8gr"] Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.531123 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2v9k5\" (UniqueName: \"kubernetes.io/projected/b3220f94-14c9-4820-9d1b-6b4bb1b635fd-kube-api-access-2v9k5\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-d7m7g\" (UID: \"b3220f94-14c9-4820-9d1b-6b4bb1b635fd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-d7m7g" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.532256 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh7jr\" (UniqueName: \"kubernetes.io/projected/023302d1-a345-4f55-9ac1-4a2b674e36aa-kube-api-access-sh7jr\") pod \"watcher-operator-controller-manager-864885998-b7g79\" (UID: \"023302d1-a345-4f55-9ac1-4a2b674e36aa\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-b7g79" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.532331 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7k8x\" (UniqueName: \"kubernetes.io/projected/4877ab9d-8cd3-4270-915f-c73167e93b49-kube-api-access-z7k8x\") pod \"test-operator-controller-manager-5cb74df96-wnx44\" (UID: \"4877ab9d-8cd3-4270-915f-c73167e93b49\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-wnx44" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.532368 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2x76\" (UniqueName: \"kubernetes.io/projected/24cfe3fd-9b1a-4b9a-9b99-1b089fa2124b-kube-api-access-s2x76\") pod \"placement-operator-controller-manager-5db546f9d9-fxzwv\" (UID: \"24cfe3fd-9b1a-4b9a-9b99-1b089fa2124b\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-fxzwv" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.532437 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhrjm\" (UniqueName: \"kubernetes.io/projected/9dc1bbe2-49c1-4601-9acf-b1887426fdd0-kube-api-access-rhrjm\") pod \"ovn-operator-controller-manager-66cf5c67ff-tzsbk\" (UID: \"9dc1bbe2-49c1-4601-9acf-b1887426fdd0\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tzsbk" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.532478 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjz6q\" (UniqueName: \"kubernetes.io/projected/dbedcc0b-12de-4497-a9f3-a9df6c88a74f-kube-api-access-mjz6q\") pod \"telemetry-operator-controller-manager-687f46fc78-xdmx6\" (UID: \"dbedcc0b-12de-4497-a9f3-a9df6c88a74f\") " pod="openstack-operators/telemetry-operator-controller-manager-687f46fc78-xdmx6" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.532576 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b3220f94-14c9-4820-9d1b-6b4bb1b635fd-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-d7m7g\" (UID: \"b3220f94-14c9-4820-9d1b-6b4bb1b635fd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-d7m7g" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.532634 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w767w\" (UniqueName: 
\"kubernetes.io/projected/1df7970b-bed8-4e27-b04b-66e513683875-kube-api-access-w767w\") pod \"swift-operator-controller-manager-6fdc4fcf86-pxx5w\" (UID: \"1df7970b-bed8-4e27-b04b-66e513683875\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pxx5w" Nov 25 15:11:38 crc kubenswrapper[4806]: E1125 15:11:38.534197 4806 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 15:11:38 crc kubenswrapper[4806]: E1125 15:11:38.534370 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3220f94-14c9-4820-9d1b-6b4bb1b635fd-cert podName:b3220f94-14c9-4820-9d1b-6b4bb1b635fd nodeName:}" failed. No retries permitted until 2025-11-25 15:11:39.034244574 +0000 UTC m=+1131.686386985 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b3220f94-14c9-4820-9d1b-6b4bb1b635fd-cert") pod "openstack-baremetal-operator-controller-manager-544b9bb9-d7m7g" (UID: "b3220f94-14c9-4820-9d1b-6b4bb1b635fd") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.546516 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7c468db9ff-2r8gr" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.563941 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.564178 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.564494 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v9k5\" (UniqueName: \"kubernetes.io/projected/b3220f94-14c9-4820-9d1b-6b4bb1b635fd-kube-api-access-2v9k5\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-d7m7g\" (UID: \"b3220f94-14c9-4820-9d1b-6b4bb1b635fd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-d7m7g" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.564724 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-n99d9" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.569690 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhrjm\" (UniqueName: \"kubernetes.io/projected/9dc1bbe2-49c1-4601-9acf-b1887426fdd0-kube-api-access-rhrjm\") pod \"ovn-operator-controller-manager-66cf5c67ff-tzsbk\" (UID: \"9dc1bbe2-49c1-4601-9acf-b1887426fdd0\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tzsbk" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.570767 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-bwwh4" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.571660 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w767w\" (UniqueName: \"kubernetes.io/projected/1df7970b-bed8-4e27-b04b-66e513683875-kube-api-access-w767w\") pod \"swift-operator-controller-manager-6fdc4fcf86-pxx5w\" (UID: \"1df7970b-bed8-4e27-b04b-66e513683875\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pxx5w" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.573388 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7c468db9ff-2r8gr"] Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.575183 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2x76\" (UniqueName: \"kubernetes.io/projected/24cfe3fd-9b1a-4b9a-9b99-1b089fa2124b-kube-api-access-s2x76\") pod \"placement-operator-controller-manager-5db546f9d9-fxzwv\" (UID: \"24cfe3fd-9b1a-4b9a-9b99-1b089fa2124b\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-fxzwv" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.589652 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7k8x\" (UniqueName: \"kubernetes.io/projected/4877ab9d-8cd3-4270-915f-c73167e93b49-kube-api-access-z7k8x\") pod \"test-operator-controller-manager-5cb74df96-wnx44\" (UID: \"4877ab9d-8cd3-4270-915f-c73167e93b49\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-wnx44" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.594913 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjz6q\" (UniqueName: \"kubernetes.io/projected/dbedcc0b-12de-4497-a9f3-a9df6c88a74f-kube-api-access-mjz6q\") pod \"telemetry-operator-controller-manager-687f46fc78-xdmx6\" (UID: \"dbedcc0b-12de-4497-a9f3-a9df6c88a74f\") " pod="openstack-operators/telemetry-operator-controller-manager-687f46fc78-xdmx6" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.615081 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-9thxp" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.634899 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-xlzgr\" (UID: \"e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-xlzgr" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.634978 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sh7jr\" (UniqueName: \"kubernetes.io/projected/023302d1-a345-4f55-9ac1-4a2b674e36aa-kube-api-access-sh7jr\") pod \"watcher-operator-controller-manager-864885998-b7g79\" (UID: \"023302d1-a345-4f55-9ac1-4a2b674e36aa\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-b7g79" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.638687 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsrnh\" (UniqueName: \"kubernetes.io/projected/b97ff802-8b8f-47d4-bff1-7d6876f780ff-kube-api-access-rsrnh\") pod \"openstack-operator-controller-manager-7c468db9ff-2r8gr\" (UID: \"b97ff802-8b8f-47d4-bff1-7d6876f780ff\") " pod="openstack-operators/openstack-operator-controller-manager-7c468db9ff-2r8gr" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.638749 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b97ff802-8b8f-47d4-bff1-7d6876f780ff-metrics-certs\") pod \"openstack-operator-controller-manager-7c468db9ff-2r8gr\" (UID: \"b97ff802-8b8f-47d4-bff1-7d6876f780ff\") " pod="openstack-operators/openstack-operator-controller-manager-7c468db9ff-2r8gr" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.639096 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b97ff802-8b8f-47d4-bff1-7d6876f780ff-webhook-certs\") pod \"openstack-operator-controller-manager-7c468db9ff-2r8gr\" (UID: \"b97ff802-8b8f-47d4-bff1-7d6876f780ff\") " pod="openstack-operators/openstack-operator-controller-manager-7c468db9ff-2r8gr" Nov 25 15:11:38 crc kubenswrapper[4806]: E1125 15:11:38.635540 4806 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 25 15:11:38 crc kubenswrapper[4806]: E1125 15:11:38.639276 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329-cert podName:e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329 nodeName:}" failed. No retries permitted until 2025-11-25 15:11:39.639253719 +0000 UTC m=+1132.291396130 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329-cert") pod "infra-operator-controller-manager-d5cc86f4b-xlzgr" (UID: "e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329") : secret "infra-operator-webhook-server-cert" not found Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.639790 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c5xhr" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.653743 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tzsbk" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.660903 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sh7jr\" (UniqueName: \"kubernetes.io/projected/023302d1-a345-4f55-9ac1-4a2b674e36aa-kube-api-access-sh7jr\") pod \"watcher-operator-controller-manager-864885998-b7g79\" (UID: \"023302d1-a345-4f55-9ac1-4a2b674e36aa\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-b7g79" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.660989 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2snr9"] Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.662933 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-wfhhn" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.667146 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2snr9" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.678365 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-mjvjq" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.678489 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2snr9"] Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.691450 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pxx5w" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.699968 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-cqwgq" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.742781 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bncpl\" (UniqueName: \"kubernetes.io/projected/fd7fd3ac-d6f9-4f62-9cbd-e6a28b88be30-kube-api-access-bncpl\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2snr9\" (UID: \"fd7fd3ac-d6f9-4f62-9cbd-e6a28b88be30\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2snr9" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.742894 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b97ff802-8b8f-47d4-bff1-7d6876f780ff-webhook-certs\") pod \"openstack-operator-controller-manager-7c468db9ff-2r8gr\" (UID: \"b97ff802-8b8f-47d4-bff1-7d6876f780ff\") " pod="openstack-operators/openstack-operator-controller-manager-7c468db9ff-2r8gr" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.743194 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsrnh\" (UniqueName: \"kubernetes.io/projected/b97ff802-8b8f-47d4-bff1-7d6876f780ff-kube-api-access-rsrnh\") pod \"openstack-operator-controller-manager-7c468db9ff-2r8gr\" (UID: \"b97ff802-8b8f-47d4-bff1-7d6876f780ff\") " pod="openstack-operators/openstack-operator-controller-manager-7c468db9ff-2r8gr" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.743694 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b97ff802-8b8f-47d4-bff1-7d6876f780ff-metrics-certs\") pod \"openstack-operator-controller-manager-7c468db9ff-2r8gr\" (UID: \"b97ff802-8b8f-47d4-bff1-7d6876f780ff\") " pod="openstack-operators/openstack-operator-controller-manager-7c468db9ff-2r8gr" Nov 25 15:11:38 crc kubenswrapper[4806]: E1125 15:11:38.765165 4806 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 25 15:11:38 crc kubenswrapper[4806]: E1125 15:11:38.765265 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b97ff802-8b8f-47d4-bff1-7d6876f780ff-webhook-certs podName:b97ff802-8b8f-47d4-bff1-7d6876f780ff nodeName:}" failed. No retries permitted until 2025-11-25 15:11:39.265236503 +0000 UTC m=+1131.917378914 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b97ff802-8b8f-47d4-bff1-7d6876f780ff-webhook-certs") pod "openstack-operator-controller-manager-7c468db9ff-2r8gr" (UID: "b97ff802-8b8f-47d4-bff1-7d6876f780ff") : secret "webhook-server-cert" not found Nov 25 15:11:38 crc kubenswrapper[4806]: E1125 15:11:38.765842 4806 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.766289 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-fxzwv" Nov 25 15:11:38 crc kubenswrapper[4806]: E1125 15:11:38.766519 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b97ff802-8b8f-47d4-bff1-7d6876f780ff-metrics-certs podName:b97ff802-8b8f-47d4-bff1-7d6876f780ff nodeName:}" failed. 
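The MountVolume.SetUp failures above share one cause: the kubelet mounts each pod's projected service-account token without trouble, but the webhook/metrics certificate Secrets ("infra-operator-webhook-server-cert", "webhook-server-cert", "metrics-server-cert", and later "openstack-baremetal-operator-webhook-server-cert") do not exist yet, so the affected pods wait in ContainerCreating until they appear. In this deployment pattern such Secrets are typically issued asynchronously (for example by cert-manager), so the errors are transient. A minimal sketch of how one might poll for them with the official kubernetes Python client is shown below; the script is illustrative and not part of the log.

```python
# Illustrative check (not part of the log): poll the openstack-operators
# namespace for the certificate Secrets the kubelet is waiting on. Uses the
# official `kubernetes` Python client; the Secret names are taken from the
# errors above.
import time
from kubernetes import client, config
from kubernetes.client.rest import ApiException

MISSING = [
    "infra-operator-webhook-server-cert",
    "openstack-baremetal-operator-webhook-server-cert",
    "webhook-server-cert",
    "metrics-server-cert",
]

def wait_for_secrets(namespace="openstack-operators", interval=5.0):
    config.load_kube_config()          # or config.load_incluster_config()
    v1 = client.CoreV1Api()
    pending = set(MISSING)
    while pending:
        for name in sorted(pending):   # sorted() copies, so discard() is safe
            try:
                v1.read_namespaced_secret(name, namespace)
                pending.discard(name)  # Secret exists; kubelet's next retry can mount it
            except ApiException as e:
                if e.status != 404:
                    raise              # only "not found" means "keep waiting"
        if pending:
            time.sleep(interval)

wait_for_secrets()
```

Once a Secret exists, the kubelet's next scheduled retry mounts it, as the later "MountVolume.SetUp succeeded for volume \"cert\"" entry for infra-operator shows.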
No retries permitted until 2025-11-25 15:11:39.266463828 +0000 UTC m=+1131.918606429 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b97ff802-8b8f-47d4-bff1-7d6876f780ff-metrics-certs") pod "openstack-operator-controller-manager-7c468db9ff-2r8gr" (UID: "b97ff802-8b8f-47d4-bff1-7d6876f780ff") : secret "metrics-server-cert" not found Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.775333 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-687f46fc78-xdmx6" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.813235 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cb74df96-wnx44" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.830422 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsrnh\" (UniqueName: \"kubernetes.io/projected/b97ff802-8b8f-47d4-bff1-7d6876f780ff-kube-api-access-rsrnh\") pod \"openstack-operator-controller-manager-7c468db9ff-2r8gr\" (UID: \"b97ff802-8b8f-47d4-bff1-7d6876f780ff\") " pod="openstack-operators/openstack-operator-controller-manager-7c468db9ff-2r8gr" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.847276 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bncpl\" (UniqueName: \"kubernetes.io/projected/fd7fd3ac-d6f9-4f62-9cbd-e6a28b88be30-kube-api-access-bncpl\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2snr9\" (UID: \"fd7fd3ac-d6f9-4f62-9cbd-e6a28b88be30\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2snr9" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.858243 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-864885998-b7g79" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.896221 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bncpl\" (UniqueName: \"kubernetes.io/projected/fd7fd3ac-d6f9-4f62-9cbd-e6a28b88be30-kube-api-access-bncpl\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2snr9\" (UID: \"fd7fd3ac-d6f9-4f62-9cbd-e6a28b88be30\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2snr9" Nov 25 15:11:38 crc kubenswrapper[4806]: I1125 15:11:38.972492 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-qk9m2"] Nov 25 15:11:39 crc kubenswrapper[4806]: I1125 15:11:39.015476 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-w6686"] Nov 25 15:11:39 crc kubenswrapper[4806]: I1125 15:11:39.053305 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b3220f94-14c9-4820-9d1b-6b4bb1b635fd-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-d7m7g\" (UID: \"b3220f94-14c9-4820-9d1b-6b4bb1b635fd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-d7m7g" Nov 25 15:11:39 crc kubenswrapper[4806]: E1125 15:11:39.053792 4806 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 15:11:39 crc kubenswrapper[4806]: E1125 15:11:39.053909 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3220f94-14c9-4820-9d1b-6b4bb1b635fd-cert podName:b3220f94-14c9-4820-9d1b-6b4bb1b635fd nodeName:}" failed. No retries permitted until 2025-11-25 15:11:40.053880635 +0000 UTC m=+1132.706023056 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b3220f94-14c9-4820-9d1b-6b4bb1b635fd-cert") pod "openstack-baremetal-operator-controller-manager-544b9bb9-d7m7g" (UID: "b3220f94-14c9-4820-9d1b-6b4bb1b635fd") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 15:11:39 crc kubenswrapper[4806]: W1125 15:11:39.060704 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod537dc134_0732_4dfc_b0be_9c16d3d191be.slice/crio-be56b95ce8a3087aa1bcfe6be628fa85ac95ad7e37ad0fbfae3d4f7075d4a0f2 WatchSource:0}: Error finding container be56b95ce8a3087aa1bcfe6be628fa85ac95ad7e37ad0fbfae3d4f7075d4a0f2: Status 404 returned error can't find the container with id be56b95ce8a3087aa1bcfe6be628fa85ac95ad7e37ad0fbfae3d4f7075d4a0f2 Nov 25 15:11:39 crc kubenswrapper[4806]: I1125 15:11:39.125681 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2snr9" Nov 25 15:11:39 crc kubenswrapper[4806]: I1125 15:11:39.208257 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-wfsxk"] Nov 25 15:11:39 crc kubenswrapper[4806]: W1125 15:11:39.243218 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde253966_f7ff_485f_8108_b8ee0fd795bf.slice/crio-878e5649204ca8dd424c1d949ddd6ed1f1281c252739dbf86baf68af5de2647a WatchSource:0}: Error finding container 878e5649204ca8dd424c1d949ddd6ed1f1281c252739dbf86baf68af5de2647a: Status 404 returned error can't find the container with id 878e5649204ca8dd424c1d949ddd6ed1f1281c252739dbf86baf68af5de2647a Nov 25 15:11:39 crc kubenswrapper[4806]: I1125 15:11:39.367864 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b97ff802-8b8f-47d4-bff1-7d6876f780ff-metrics-certs\") pod \"openstack-operator-controller-manager-7c468db9ff-2r8gr\" (UID: \"b97ff802-8b8f-47d4-bff1-7d6876f780ff\") " pod="openstack-operators/openstack-operator-controller-manager-7c468db9ff-2r8gr" Nov 25 15:11:39 crc kubenswrapper[4806]: I1125 15:11:39.368472 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b97ff802-8b8f-47d4-bff1-7d6876f780ff-webhook-certs\") pod \"openstack-operator-controller-manager-7c468db9ff-2r8gr\" (UID: \"b97ff802-8b8f-47d4-bff1-7d6876f780ff\") " pod="openstack-operators/openstack-operator-controller-manager-7c468db9ff-2r8gr" Nov 25 15:11:39 crc kubenswrapper[4806]: E1125 15:11:39.368656 4806 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 25 15:11:39 crc kubenswrapper[4806]: E1125 15:11:39.368731 4806 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 25 15:11:39 crc kubenswrapper[4806]: E1125 15:11:39.368742 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b97ff802-8b8f-47d4-bff1-7d6876f780ff-metrics-certs podName:b97ff802-8b8f-47d4-bff1-7d6876f780ff nodeName:}" failed. No retries permitted until 2025-11-25 15:11:40.368721342 +0000 UTC m=+1133.020863753 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b97ff802-8b8f-47d4-bff1-7d6876f780ff-metrics-certs") pod "openstack-operator-controller-manager-7c468db9ff-2r8gr" (UID: "b97ff802-8b8f-47d4-bff1-7d6876f780ff") : secret "metrics-server-cert" not found Nov 25 15:11:39 crc kubenswrapper[4806]: E1125 15:11:39.368795 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b97ff802-8b8f-47d4-bff1-7d6876f780ff-webhook-certs podName:b97ff802-8b8f-47d4-bff1-7d6876f780ff nodeName:}" failed. No retries permitted until 2025-11-25 15:11:40.368776803 +0000 UTC m=+1133.020919214 (durationBeforeRetry 1s). 
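The "No retries permitted until …" entries also expose the kubelet's per-volume retry backoff doubling across consecutive failures: durationBeforeRetry is 500ms on the first failure, 1s on the next, and reaches 2s shortly after. Below is a minimal sketch of that doubling schedule; the initial delay and the cap are assumptions for illustration, not values read from the kubelet source.

```python
# Minimal sketch of the doubling retry schedule visible in the
# nestedpendingoperations errors (durationBeforeRetry 500ms -> 1s -> 2s ...).
# initial and cap are illustrative assumptions.
def backoff_delays(initial=0.5, cap=120.0):
    delay = initial
    while True:
        yield delay                  # "No retries permitted until <now + delay>"
        delay = min(delay * 2, cap)  # double after each consecutive failure

gen = backoff_delays()
print([next(gen) for _ in range(4)])  # -> [0.5, 1.0, 2.0, 4.0]
```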
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b97ff802-8b8f-47d4-bff1-7d6876f780ff-webhook-certs") pod "openstack-operator-controller-manager-7c468db9ff-2r8gr" (UID: "b97ff802-8b8f-47d4-bff1-7d6876f780ff") : secret "webhook-server-cert" not found Nov 25 15:11:39 crc kubenswrapper[4806]: I1125 15:11:39.594555 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-w6686" event={"ID":"40a580de-1093-4adc-a98c-e18202bee9e3","Type":"ContainerStarted","Data":"ae614de7cfa0712da9711e291ebc56a8d38e7c582ccc5ab159d702fb7570c272"} Nov 25 15:11:39 crc kubenswrapper[4806]: I1125 15:11:39.596485 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-qk9m2" event={"ID":"537dc134-0732-4dfc-b0be-9c16d3d191be","Type":"ContainerStarted","Data":"be56b95ce8a3087aa1bcfe6be628fa85ac95ad7e37ad0fbfae3d4f7075d4a0f2"} Nov 25 15:11:39 crc kubenswrapper[4806]: I1125 15:11:39.597567 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-wfsxk" event={"ID":"de253966-f7ff-485f-8108-b8ee0fd795bf","Type":"ContainerStarted","Data":"878e5649204ca8dd424c1d949ddd6ed1f1281c252739dbf86baf68af5de2647a"} Nov 25 15:11:39 crc kubenswrapper[4806]: I1125 15:11:39.636622 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-jcrbm"] Nov 25 15:11:39 crc kubenswrapper[4806]: W1125 15:11:39.645969 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8294cfe0_6c14_49bc_bd5b_d614a68893ce.slice/crio-9b0c8e48a8c0b797005c5531504219ca92e7cec2dc018b34c4afccf568c0c29b WatchSource:0}: Error finding container 9b0c8e48a8c0b797005c5531504219ca92e7cec2dc018b34c4afccf568c0c29b: Status 404 returned error can't find the container with id 9b0c8e48a8c0b797005c5531504219ca92e7cec2dc018b34c4afccf568c0c29b Nov 25 15:11:39 crc kubenswrapper[4806]: I1125 15:11:39.650272 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-r8dnj"] Nov 25 15:11:39 crc kubenswrapper[4806]: I1125 15:11:39.675261 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-xlzgr\" (UID: \"e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-xlzgr" Nov 25 15:11:39 crc kubenswrapper[4806]: I1125 15:11:39.677153 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-w5r5m"] Nov 25 15:11:39 crc kubenswrapper[4806]: I1125 15:11:39.682877 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-xlzgr\" (UID: \"e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-xlzgr" Nov 25 15:11:39 crc kubenswrapper[4806]: I1125 15:11:39.700303 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-q6z52"] Nov 25 15:11:39 crc kubenswrapper[4806]: I1125 
15:11:39.707810 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-h9qg8"] Nov 25 15:11:39 crc kubenswrapper[4806]: W1125 15:11:39.708168 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61457634_dc4d_4ad9_9bdc_c95aae5df022.slice/crio-84c5e043adff36adde8629370ab33d70d509453d761032fb37ce0f913d48de24 WatchSource:0}: Error finding container 84c5e043adff36adde8629370ab33d70d509453d761032fb37ce0f913d48de24: Status 404 returned error can't find the container with id 84c5e043adff36adde8629370ab33d70d509453d761032fb37ce0f913d48de24 Nov 25 15:11:39 crc kubenswrapper[4806]: I1125 15:11:39.920441 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-xlzgr" Nov 25 15:11:39 crc kubenswrapper[4806]: I1125 15:11:39.970991 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-wnx44"] Nov 25 15:11:39 crc kubenswrapper[4806]: W1125 15:11:39.982795 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2f4f05a_5ae5_4f49_87f2_a1e642ee0ac7.slice/crio-81a7e7e73f80016b60fbf958db40bfaf2db17b8b1ffae0f0871f03fd2404f9f4 WatchSource:0}: Error finding container 81a7e7e73f80016b60fbf958db40bfaf2db17b8b1ffae0f0871f03fd2404f9f4: Status 404 returned error can't find the container with id 81a7e7e73f80016b60fbf958db40bfaf2db17b8b1ffae0f0871f03fd2404f9f4 Nov 25 15:11:39 crc kubenswrapper[4806]: I1125 15:11:39.985294 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c5xhr"] Nov 25 15:11:40 crc kubenswrapper[4806]: I1125 15:11:40.000772 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2snr9"] Nov 25 15:11:40 crc kubenswrapper[4806]: W1125 15:11:40.016724 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbedcc0b_12de_4497_a9f3_a9df6c88a74f.slice/crio-01300cfb1bb837b0c8958b32e6613d953eabbf0d01c2a83fb457185053d2d137 WatchSource:0}: Error finding container 01300cfb1bb837b0c8958b32e6613d953eabbf0d01c2a83fb457185053d2d137: Status 404 returned error can't find the container with id 01300cfb1bb837b0c8958b32e6613d953eabbf0d01c2a83fb457185053d2d137 Nov 25 15:11:40 crc kubenswrapper[4806]: I1125 15:11:40.016776 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-687f46fc78-xdmx6"] Nov 25 15:11:40 crc kubenswrapper[4806]: W1125 15:11:40.027060 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24cfe3fd_9b1a_4b9a_9b99_1b089fa2124b.slice/crio-7109c9514bc2e2de8ef45c7873decc47b373f3fb15d8954aec7149e9b5c57a47 WatchSource:0}: Error finding container 7109c9514bc2e2de8ef45c7873decc47b373f3fb15d8954aec7149e9b5c57a47: Status 404 returned error can't find the container with id 7109c9514bc2e2de8ef45c7873decc47b373f3fb15d8954aec7149e9b5c57a47 Nov 25 15:11:40 crc kubenswrapper[4806]: I1125 15:11:40.044601 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-fxzwv"] Nov 25 15:11:40 crc kubenswrapper[4806]: 
Nov 25 15:11:40 crc kubenswrapper[4806]: I1125 15:11:40.062748 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-9thxp"]
Nov 25 15:11:40 crc kubenswrapper[4806]: I1125 15:11:40.073172 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-bwwh4"]
Nov 25 15:11:40 crc kubenswrapper[4806]: I1125 15:11:40.084535 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b3220f94-14c9-4820-9d1b-6b4bb1b635fd-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-d7m7g\" (UID: \"b3220f94-14c9-4820-9d1b-6b4bb1b635fd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-d7m7g"
Nov 25 15:11:40 crc kubenswrapper[4806]: E1125 15:11:40.084812 4806 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Nov 25 15:11:40 crc kubenswrapper[4806]: E1125 15:11:40.084879 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3220f94-14c9-4820-9d1b-6b4bb1b635fd-cert podName:b3220f94-14c9-4820-9d1b-6b4bb1b635fd nodeName:}" failed. No retries permitted until 2025-11-25 15:11:42.084861611 +0000 UTC m=+1134.737004022 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b3220f94-14c9-4820-9d1b-6b4bb1b635fd-cert") pod "openstack-baremetal-operator-controller-manager-544b9bb9-d7m7g" (UID: "b3220f94-14c9-4820-9d1b-6b4bb1b635fd") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Nov 25 15:11:40 crc kubenswrapper[4806]: E1125 15:11:40.088224 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s2x76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5db546f9d9-fxzwv_openstack-operators(24cfe3fd-9b1a-4b9a-9b99-1b089fa2124b): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Nov 25 15:11:40 crc kubenswrapper[4806]: E1125 15:11:40.091837 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w767w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-6fdc4fcf86-pxx5w_openstack-operators(1df7970b-bed8-4e27-b04b-66e513683875): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Nov 25 15:11:40 crc kubenswrapper[4806]: E1125 15:11:40.096725 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sh7jr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-864885998-b7g79_openstack-operators(023302d1-a345-4f55-9ac1-4a2b674e36aa): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Nov 25 15:11:40 crc kubenswrapper[4806]: E1125 15:11:40.096997 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w767w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-6fdc4fcf86-pxx5w_openstack-operators(1df7970b-bed8-4e27-b04b-66e513683875): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
pull QPS exceeded" logger="UnhandledError" Nov 25 15:11:40 crc kubenswrapper[4806]: E1125 15:11:40.098629 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pxx5w" podUID="1df7970b-bed8-4e27-b04b-66e513683875" Nov 25 15:11:40 crc kubenswrapper[4806]: E1125 15:11:40.098987 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6f8d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-cb6c4fdb7-9thxp_openstack-operators(c1159ae9-b734-4012-b746-35d037ee4817): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 15:11:40 crc kubenswrapper[4806]: E1125 15:11:40.099180 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} 
{} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s2x76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5db546f9d9-fxzwv_openstack-operators(24cfe3fd-9b1a-4b9a-9b99-1b089fa2124b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 15:11:40 crc kubenswrapper[4806]: E1125 15:11:40.100253 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-fxzwv" podUID="24cfe3fd-9b1a-4b9a-9b99-1b089fa2124b" Nov 25 15:11:40 crc kubenswrapper[4806]: E1125 15:11:40.104259 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6f8d4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-cb6c4fdb7-9thxp_openstack-operators(c1159ae9-b734-4012-b746-35d037ee4817): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 15:11:40 crc kubenswrapper[4806]: E1125 15:11:40.104387 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sh7jr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-864885998-b7g79_openstack-operators(023302d1-a345-4f55-9ac1-4a2b674e36aa): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 15:11:40 crc kubenswrapper[4806]: E1125 15:11:40.104459 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rhrjm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-66cf5c67ff-tzsbk_openstack-operators(9dc1bbe2-49c1-4601-9acf-b1887426fdd0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 15:11:40 crc kubenswrapper[4806]: E1125 15:11:40.104563 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x4vwj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-79556f57fc-wfhhn_openstack-operators(63efe3dc-03df-4494-9661-9a23a89c0974): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 15:11:40 crc kubenswrapper[4806]: E1125 15:11:40.105384 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-9thxp" podUID="c1159ae9-b734-4012-b746-35d037ee4817" Nov 25 15:11:40 crc kubenswrapper[4806]: E1125 15:11:40.105465 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/watcher-operator-controller-manager-864885998-b7g79" podUID="023302d1-a345-4f55-9ac1-4a2b674e36aa" Nov 25 15:11:40 crc kubenswrapper[4806]: E1125 15:11:40.105548 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with 
ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tzsbk" podUID="9dc1bbe2-49c1-4601-9acf-b1887426fdd0" Nov 25 15:11:40 crc kubenswrapper[4806]: E1125 15:11:40.107115 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x4vwj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-79556f57fc-wfhhn_openstack-operators(63efe3dc-03df-4494-9661-9a23a89c0974): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 15:11:40 crc kubenswrapper[4806]: E1125 15:11:40.108572 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-wfhhn" podUID="63efe3dc-03df-4494-9661-9a23a89c0974" Nov 25 15:11:40 crc kubenswrapper[4806]: I1125 15:11:40.108848 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tzsbk"] Nov 25 15:11:40 crc kubenswrapper[4806]: I1125 15:11:40.108873 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-cqwgq"] Nov 25 15:11:40 crc kubenswrapper[4806]: I1125 15:11:40.111694 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-wfhhn"] Nov 25 15:11:40 crc kubenswrapper[4806]: I1125 15:11:40.130007 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pxx5w"] Nov 25 15:11:40 crc kubenswrapper[4806]: I1125 15:11:40.139174 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-b7g79"] Nov 25 15:11:40 crc kubenswrapper[4806]: I1125 15:11:40.394054 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b97ff802-8b8f-47d4-bff1-7d6876f780ff-webhook-certs\") pod 
\"openstack-operator-controller-manager-7c468db9ff-2r8gr\" (UID: \"b97ff802-8b8f-47d4-bff1-7d6876f780ff\") " pod="openstack-operators/openstack-operator-controller-manager-7c468db9ff-2r8gr" Nov 25 15:11:40 crc kubenswrapper[4806]: E1125 15:11:40.395041 4806 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 25 15:11:40 crc kubenswrapper[4806]: E1125 15:11:40.395617 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b97ff802-8b8f-47d4-bff1-7d6876f780ff-webhook-certs podName:b97ff802-8b8f-47d4-bff1-7d6876f780ff nodeName:}" failed. No retries permitted until 2025-11-25 15:11:42.395587874 +0000 UTC m=+1135.047730285 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b97ff802-8b8f-47d4-bff1-7d6876f780ff-webhook-certs") pod "openstack-operator-controller-manager-7c468db9ff-2r8gr" (UID: "b97ff802-8b8f-47d4-bff1-7d6876f780ff") : secret "webhook-server-cert" not found Nov 25 15:11:40 crc kubenswrapper[4806]: I1125 15:11:40.396024 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b97ff802-8b8f-47d4-bff1-7d6876f780ff-metrics-certs\") pod \"openstack-operator-controller-manager-7c468db9ff-2r8gr\" (UID: \"b97ff802-8b8f-47d4-bff1-7d6876f780ff\") " pod="openstack-operators/openstack-operator-controller-manager-7c468db9ff-2r8gr" Nov 25 15:11:40 crc kubenswrapper[4806]: E1125 15:11:40.396330 4806 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 25 15:11:40 crc kubenswrapper[4806]: E1125 15:11:40.396375 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b97ff802-8b8f-47d4-bff1-7d6876f780ff-metrics-certs podName:b97ff802-8b8f-47d4-bff1-7d6876f780ff nodeName:}" failed. No retries permitted until 2025-11-25 15:11:42.396364716 +0000 UTC m=+1135.048507127 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b97ff802-8b8f-47d4-bff1-7d6876f780ff-metrics-certs") pod "openstack-operator-controller-manager-7c468db9ff-2r8gr" (UID: "b97ff802-8b8f-47d4-bff1-7d6876f780ff") : secret "metrics-server-cert" not found Nov 25 15:11:40 crc kubenswrapper[4806]: I1125 15:11:40.607894 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-xlzgr"] Nov 25 15:11:40 crc kubenswrapper[4806]: I1125 15:11:40.618201 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-wfhhn" event={"ID":"63efe3dc-03df-4494-9661-9a23a89c0974","Type":"ContainerStarted","Data":"b119eb0603cb78495bcb9ef8a2232a625a262198e756d03c654e7eae79bb4c41"} Nov 25 15:11:40 crc kubenswrapper[4806]: I1125 15:11:40.624554 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pxx5w" event={"ID":"1df7970b-bed8-4e27-b04b-66e513683875","Type":"ContainerStarted","Data":"f754b41ae24e2e8f99b97564008adcf10b264717d2f058ae451b29e920809e28"} Nov 25 15:11:40 crc kubenswrapper[4806]: I1125 15:11:40.628207 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tzsbk" event={"ID":"9dc1bbe2-49c1-4601-9acf-b1887426fdd0","Type":"ContainerStarted","Data":"2de021c32ec1d6ff76c45154a413b5322920b46890f0bc5b8cfde35997175426"} Nov 25 15:11:40 crc kubenswrapper[4806]: I1125 15:11:40.641672 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2snr9" event={"ID":"fd7fd3ac-d6f9-4f62-9cbd-e6a28b88be30","Type":"ContainerStarted","Data":"339d53dbae8219c09459effea55dcde75bd98b0fce0186d074577d8067556e11"} Nov 25 15:11:40 crc kubenswrapper[4806]: E1125 15:11:40.641820 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-wfhhn" podUID="63efe3dc-03df-4494-9661-9a23a89c0974" Nov 25 15:11:40 crc kubenswrapper[4806]: E1125 15:11:40.643916 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pxx5w" podUID="1df7970b-bed8-4e27-b04b-66e513683875" Nov 25 15:11:40 crc kubenswrapper[4806]: I1125 15:11:40.646328 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-cqwgq" event={"ID":"2a080dd6-0904-4756-8b02-39d10465fea2","Type":"ContainerStarted","Data":"95f0ca3e350b9a2463f46859f05d8df0fdff6f0d7044a48b9860902ba7dcba96"} Nov 25 15:11:40 crc kubenswrapper[4806]: E1125 
15:11:40.660562 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tzsbk" podUID="9dc1bbe2-49c1-4601-9acf-b1887426fdd0" Nov 25 15:11:40 crc kubenswrapper[4806]: I1125 15:11:40.664347 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-q6z52" event={"ID":"ec8a3bcc-2127-44bc-8f89-db3ece24a9b9","Type":"ContainerStarted","Data":"6d17ecf64dbba46a3800994d8824d7e36a1191bd53a5a78f3a93c0b881d5d97d"} Nov 25 15:11:40 crc kubenswrapper[4806]: I1125 15:11:40.694950 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-w5r5m" event={"ID":"61457634-dc4d-4ad9-9bdc-c95aae5df022","Type":"ContainerStarted","Data":"84c5e043adff36adde8629370ab33d70d509453d761032fb37ce0f913d48de24"} Nov 25 15:11:40 crc kubenswrapper[4806]: I1125 15:11:40.729649 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-h9qg8" event={"ID":"461ceb26-b86c-4bb8-9550-131351dfa3e5","Type":"ContainerStarted","Data":"f87ce4aab7a2108b498b79e28a72b1c6ef999c84fcdca85fae686d83a934a35e"} Nov 25 15:11:40 crc kubenswrapper[4806]: I1125 15:11:40.732991 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-b7g79" event={"ID":"023302d1-a345-4f55-9ac1-4a2b674e36aa","Type":"ContainerStarted","Data":"0d76d8c6251481d3228cec135ec3b775f643174b50b82a669f169e0e32b2cc29"} Nov 25 15:11:40 crc kubenswrapper[4806]: E1125 15:11:40.744774 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-864885998-b7g79" podUID="023302d1-a345-4f55-9ac1-4a2b674e36aa" Nov 25 15:11:40 crc kubenswrapper[4806]: I1125 15:11:40.745749 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-jcrbm" event={"ID":"8294cfe0-6c14-49bc-bd5b-d614a68893ce","Type":"ContainerStarted","Data":"9b0c8e48a8c0b797005c5531504219ca92e7cec2dc018b34c4afccf568c0c29b"} Nov 25 15:11:40 crc kubenswrapper[4806]: I1125 15:11:40.748257 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-wnx44" event={"ID":"4877ab9d-8cd3-4270-915f-c73167e93b49","Type":"ContainerStarted","Data":"72309d1472f0388504ad498eecd084e82d6b625abc7e2443ad62497c6188afaa"} Nov 25 15:11:40 crc kubenswrapper[4806]: I1125 15:11:40.769965 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-bwwh4" 
event={"ID":"9cc0ebc5-e3d4-4bae-8b33-032d950705ff","Type":"ContainerStarted","Data":"ecf97a710b7cff572d0705ecd6519c0624da7f00a2bb138d287bb04c4adbbd2e"} Nov 25 15:11:40 crc kubenswrapper[4806]: I1125 15:11:40.776841 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-9thxp" event={"ID":"c1159ae9-b734-4012-b746-35d037ee4817","Type":"ContainerStarted","Data":"1053e3909b7b1c9eeb96efc3a67f8f05ba94e67a3f784d84642bc08bb89cc8fd"} Nov 25 15:11:40 crc kubenswrapper[4806]: I1125 15:11:40.778724 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c5xhr" event={"ID":"d2f4f05a-5ae5-4f49-87f2-a1e642ee0ac7","Type":"ContainerStarted","Data":"81a7e7e73f80016b60fbf958db40bfaf2db17b8b1ffae0f0871f03fd2404f9f4"} Nov 25 15:11:40 crc kubenswrapper[4806]: I1125 15:11:40.781976 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-687f46fc78-xdmx6" event={"ID":"dbedcc0b-12de-4497-a9f3-a9df6c88a74f","Type":"ContainerStarted","Data":"01300cfb1bb837b0c8958b32e6613d953eabbf0d01c2a83fb457185053d2d137"} Nov 25 15:11:40 crc kubenswrapper[4806]: I1125 15:11:40.792902 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-fxzwv" event={"ID":"24cfe3fd-9b1a-4b9a-9b99-1b089fa2124b","Type":"ContainerStarted","Data":"7109c9514bc2e2de8ef45c7873decc47b373f3fb15d8954aec7149e9b5c57a47"} Nov 25 15:11:40 crc kubenswrapper[4806]: E1125 15:11:40.794117 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-9thxp" podUID="c1159ae9-b734-4012-b746-35d037ee4817" Nov 25 15:11:40 crc kubenswrapper[4806]: I1125 15:11:40.803020 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-r8dnj" event={"ID":"fbf78fa8-8b88-454e-a7dc-0e75f463bc45","Type":"ContainerStarted","Data":"d657a905274fe19c8a468c5f6bd0488d1427b36252e4f9ba56645976c5d04eb4"} Nov 25 15:11:40 crc kubenswrapper[4806]: E1125 15:11:40.803236 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-fxzwv" podUID="24cfe3fd-9b1a-4b9a-9b99-1b089fa2124b" Nov 25 15:11:41 crc kubenswrapper[4806]: I1125 15:11:41.844669 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-xlzgr" event={"ID":"e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329","Type":"ContainerStarted","Data":"85ae538b09241dac686db10ce2aa4e435357bb2f89ca2041c5c7968357dabf8f"} Nov 25 15:11:41 
Nov 25 15:11:41 crc kubenswrapper[4806]: E1125 15:11:41.858518 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-864885998-b7g79" podUID="023302d1-a345-4f55-9ac1-4a2b674e36aa"
Nov 25 15:11:41 crc kubenswrapper[4806]: E1125 15:11:41.859267 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-fxzwv" podUID="24cfe3fd-9b1a-4b9a-9b99-1b089fa2124b"
Nov 25 15:11:41 crc kubenswrapper[4806]: E1125 15:11:41.864117 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pxx5w" podUID="1df7970b-bed8-4e27-b04b-66e513683875"
Nov 25 15:11:41 crc kubenswrapper[4806]: E1125 15:11:41.864135 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:c053e34316044f14929e16e4f0d97f9f1b24cb68b5e22b925ca74c66aaaed0a7\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-wfhhn" podUID="63efe3dc-03df-4494-9661-9a23a89c0974"
Nov 25 15:11:41 crc kubenswrapper[4806]: E1125 15:11:41.864191 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:7b90521b9e9cb4eb43c2f1c3bf85dbd068d684315f4f705b07708dd078df9d04\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-9thxp" podUID="c1159ae9-b734-4012-b746-35d037ee4817"
Nov 25 15:11:41 crc kubenswrapper[4806]: E1125 15:11:41.865907 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:5d49d4594c66eda7b151746cc6e1d3c67c0129b4503eeb043a64ae8ec2da6a1b\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tzsbk" podUID="9dc1bbe2-49c1-4601-9acf-b1887426fdd0"
Nov 25 15:11:42 crc kubenswrapper[4806]: I1125 15:11:42.138592 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b3220f94-14c9-4820-9d1b-6b4bb1b635fd-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-d7m7g\" (UID: \"b3220f94-14c9-4820-9d1b-6b4bb1b635fd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-d7m7g"
Nov 25 15:11:42 crc kubenswrapper[4806]: I1125 15:11:42.156915 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b3220f94-14c9-4820-9d1b-6b4bb1b635fd-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-d7m7g\" (UID: \"b3220f94-14c9-4820-9d1b-6b4bb1b635fd\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-d7m7g"
Nov 25 15:11:42 crc kubenswrapper[4806]: I1125 15:11:42.225998 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-d7m7g"
Nov 25 15:11:42 crc kubenswrapper[4806]: I1125 15:11:42.445664 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b97ff802-8b8f-47d4-bff1-7d6876f780ff-webhook-certs\") pod \"openstack-operator-controller-manager-7c468db9ff-2r8gr\" (UID: \"b97ff802-8b8f-47d4-bff1-7d6876f780ff\") " pod="openstack-operators/openstack-operator-controller-manager-7c468db9ff-2r8gr"
Nov 25 15:11:42 crc kubenswrapper[4806]: I1125 15:11:42.445791 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b97ff802-8b8f-47d4-bff1-7d6876f780ff-metrics-certs\") pod \"openstack-operator-controller-manager-7c468db9ff-2r8gr\" (UID: \"b97ff802-8b8f-47d4-bff1-7d6876f780ff\") " pod="openstack-operators/openstack-operator-controller-manager-7c468db9ff-2r8gr"
Nov 25 15:11:42 crc kubenswrapper[4806]: I1125 15:11:42.461426 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b97ff802-8b8f-47d4-bff1-7d6876f780ff-webhook-certs\") pod \"openstack-operator-controller-manager-7c468db9ff-2r8gr\" (UID: \"b97ff802-8b8f-47d4-bff1-7d6876f780ff\") " pod="openstack-operators/openstack-operator-controller-manager-7c468db9ff-2r8gr"
Nov 25 15:11:42 crc kubenswrapper[4806]: I1125 15:11:42.468301 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b97ff802-8b8f-47d4-bff1-7d6876f780ff-metrics-certs\") pod \"openstack-operator-controller-manager-7c468db9ff-2r8gr\" (UID: \"b97ff802-8b8f-47d4-bff1-7d6876f780ff\") " pod="openstack-operators/openstack-operator-controller-manager-7c468db9ff-2r8gr"
Nov 25 15:11:42 crc kubenswrapper[4806]: I1125 15:11:42.702997 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7c468db9ff-2r8gr"
Nov 25 15:11:52 crc kubenswrapper[4806]: I1125 15:11:52.306814 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-d7m7g"]
Nov 25 15:11:52 crc kubenswrapper[4806]: W1125 15:11:52.362495 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3220f94_14c9_4820_9d1b_6b4bb1b635fd.slice/crio-980c8f003ffb206249d374a7db0fe7e6dee262996647d52c1b1a1ccce1b6c4dd WatchSource:0}: Error finding container 980c8f003ffb206249d374a7db0fe7e6dee262996647d52c1b1a1ccce1b6c4dd: Status 404 returned error can't find the container with id 980c8f003ffb206249d374a7db0fe7e6dee262996647d52c1b1a1ccce1b6c4dd
Nov 25 15:11:52 crc kubenswrapper[4806]: I1125 15:11:52.367194 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7c468db9ff-2r8gr"]
Nov 25 15:11:52 crc kubenswrapper[4806]: I1125 15:11:52.945533 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-wnx44" event={"ID":"4877ab9d-8cd3-4270-915f-c73167e93b49","Type":"ContainerStarted","Data":"dab89d285e58e0bf73be55a651247515bc08d486e7796f08dee40cee0ded5cee"}
Nov 25 15:11:52 crc kubenswrapper[4806]: I1125 15:11:52.954773 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-d7m7g" event={"ID":"b3220f94-14c9-4820-9d1b-6b4bb1b635fd","Type":"ContainerStarted","Data":"980c8f003ffb206249d374a7db0fe7e6dee262996647d52c1b1a1ccce1b6c4dd"}
Nov 25 15:11:52 crc kubenswrapper[4806]: I1125 15:11:52.957654 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-r8dnj" event={"ID":"fbf78fa8-8b88-454e-a7dc-0e75f463bc45","Type":"ContainerStarted","Data":"f5778c542722a20ee02a9a3f06a4bdf25708e7aa27fe27fa79ed522fc527c7a0"}
Nov 25 15:11:52 crc kubenswrapper[4806]: I1125 15:11:52.965944 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-h9qg8" event={"ID":"461ceb26-b86c-4bb8-9550-131351dfa3e5","Type":"ContainerStarted","Data":"b9b740009a808f3b280568bdbca5eb2af6f37caba0f58b2b4c0d1dcc8d4ad842"}
Nov 25 15:11:52 crc kubenswrapper[4806]: I1125 15:11:52.967656 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7c468db9ff-2r8gr" event={"ID":"b97ff802-8b8f-47d4-bff1-7d6876f780ff","Type":"ContainerStarted","Data":"ded3ac84372f8f6035c5cf5f5b2de62928161508f626cc1b627df559cbd767ea"}
Nov 25 15:11:52 crc kubenswrapper[4806]: I1125 15:11:52.969906 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2snr9" event={"ID":"fd7fd3ac-d6f9-4f62-9cbd-e6a28b88be30","Type":"ContainerStarted","Data":"ba810bf7af63f00f329f5f77fb29ea46e9f889f5b53d88cd25b572b713949905"}
Nov 25 15:11:52 crc kubenswrapper[4806]: I1125 15:11:52.976110 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-wfsxk" event={"ID":"de253966-f7ff-485f-8108-b8ee0fd795bf","Type":"ContainerStarted","Data":"b9cddda6fd0d81afc149bb233d62ea5b2c229d34857d532b34868b2d96f7023e"}
Nov 25 15:11:53 crc kubenswrapper[4806]: I1125 15:11:53.003548 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2snr9" podStartSLOduration=3.013026796 podStartE2EDuration="15.003515187s" podCreationTimestamp="2025-11-25 15:11:38 +0000 UTC" firstStartedPulling="2025-11-25 15:11:39.976562664 +0000 UTC m=+1132.628705075" lastFinishedPulling="2025-11-25 15:11:51.967051055 +0000 UTC m=+1144.619193466" observedRunningTime="2025-11-25 15:11:52.99577849 +0000 UTC m=+1145.647920911" watchObservedRunningTime="2025-11-25 15:11:53.003515187 +0000 UTC m=+1145.655657598"
Nov 25 15:11:53 crc kubenswrapper[4806]: E1125 15:11:53.398759 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ktdq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-748dc6576f-w5r5m_openstack-operators(61457634-dc4d-4ad9-9bdc-c95aae5df022): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Nov 25 15:11:53 crc kubenswrapper[4806]: E1125 15:11:53.402494 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-w5r5m" podUID="61457634-dc4d-4ad9-9bdc-c95aae5df022"
Nov 25 15:11:54 crc kubenswrapper[4806]: I1125 15:11:54.035772 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-q6z52" event={"ID":"ec8a3bcc-2127-44bc-8f89-db3ece24a9b9","Type":"ContainerStarted","Data":"b47c894be1fc9d3ddec4b41e3d12acded7c87aaa42dc9a90df57d7e75bcd8512"}
Nov 25 15:11:54 crc kubenswrapper[4806]: I1125 15:11:54.064829 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-w5r5m" event={"ID":"61457634-dc4d-4ad9-9bdc-c95aae5df022","Type":"ContainerStarted","Data":"c0a7d9f15f2c0d8cf95a32752e649092e170d008a8a85cd29a613ccbf062a7bb"}
Nov 25 15:11:54 crc kubenswrapper[4806]: I1125 15:11:54.065747 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-w5r5m"
Nov 25 15:11:54 crc kubenswrapper[4806]: E1125 15:11:54.072785 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-w5r5m" podUID="61457634-dc4d-4ad9-9bdc-c95aae5df022"
Nov 25 15:11:54 crc kubenswrapper[4806]: I1125 15:11:54.119267 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-xlzgr" event={"ID":"e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329","Type":"ContainerStarted","Data":"9c5f142297a42528951601947da3c70b72b4321eda5c6d136413e0d96bc995dd"}
Nov 25 15:11:54 crc kubenswrapper[4806]: I1125 15:11:54.157860 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c5xhr" event={"ID":"d2f4f05a-5ae5-4f49-87f2-a1e642ee0ac7","Type":"ContainerStarted","Data":"d2f0e7fe2e7c7ebb4ac49b1098b7e31338d32690364de37dec7e8ca49dce5f1a"}
Nov 25 15:11:54 crc kubenswrapper[4806]: I1125 15:11:54.190528 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7c468db9ff-2r8gr" event={"ID":"b97ff802-8b8f-47d4-bff1-7d6876f780ff","Type":"ContainerStarted","Data":"2dff34746d4c23c4f1049058de88d626a301c378228c5e62171235a1b3185e7b"}
Nov 25 15:11:54 crc kubenswrapper[4806]: I1125 15:11:54.191931 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7c468db9ff-2r8gr"
Nov 25 15:11:54 crc kubenswrapper[4806]: I1125 15:11:54.214227 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-687f46fc78-xdmx6" event={"ID":"dbedcc0b-12de-4497-a9f3-a9df6c88a74f","Type":"ContainerStarted","Data":"3e97a83745b3a29044d7a8dacb5fc07334fac77cf8d7c8954308be6a7b1fe747"}
Nov 25 15:11:54 crc kubenswrapper[4806]: I1125 15:11:54.242306 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7c468db9ff-2r8gr" podStartSLOduration=16.24228159 podStartE2EDuration="16.24228159s" podCreationTimestamp="2025-11-25 15:11:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:11:54.23516685 +0000 UTC m=+1146.887309261" watchObservedRunningTime="2025-11-25 15:11:54.24228159 +0000 UTC m=+1146.894424001"
Nov 25 15:11:54 crc kubenswrapper[4806]: I1125 15:11:54.246768 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-cqwgq" event={"ID":"2a080dd6-0904-4756-8b02-39d10465fea2","Type":"ContainerStarted","Data":"b9b9f8ea6c3f6b55997a111211be457e64faa838a61c15d6c4ebe42531affe52"}
Nov 25 15:11:54 crc kubenswrapper[4806]: I1125 15:11:54.313588 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-w6686" event={"ID":"40a580de-1093-4adc-a98c-e18202bee9e3","Type":"ContainerStarted","Data":"4d484cc4f798791aa2650cd62a25f5acbdfb1760eeb9df81db216a97c7e082c0"}
Nov 25 15:11:54 crc kubenswrapper[4806]: I1125 15:11:54.321931 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-bwwh4" event={"ID":"9cc0ebc5-e3d4-4bae-8b33-032d950705ff","Type":"ContainerStarted","Data":"c174a8987bd879647aacf86f977e75ea5653757984a42f7c296ee0655e02a9ab"} Nov 25 15:11:54 crc kubenswrapper[4806]: I1125 15:11:54.325619 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-qk9m2" event={"ID":"537dc134-0732-4dfc-b0be-9c16d3d191be","Type":"ContainerStarted","Data":"5ba1c65c7e44365e690f6d1f20930029db4c687bbe3e6b21836d6c0a97ec5a92"} Nov 25 15:11:54 crc kubenswrapper[4806]: I1125 15:11:54.327858 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-jcrbm" event={"ID":"8294cfe0-6c14-49bc-bd5b-d614a68893ce","Type":"ContainerStarted","Data":"cf4ab4cc9e2934f3786dce00d83d4d816e1a9646282fcddfdde67f22ced89a1f"} Nov 25 15:11:55 crc kubenswrapper[4806]: E1125 15:11:55.337297 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-w5r5m" podUID="61457634-dc4d-4ad9-9bdc-c95aae5df022" Nov 25 15:11:58 crc kubenswrapper[4806]: I1125 15:11:58.268716 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-w5r5m" Nov 25 15:11:58 crc kubenswrapper[4806]: E1125 15:11:58.271480 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-w5r5m" podUID="61457634-dc4d-4ad9-9bdc-c95aae5df022" Nov 25 15:12:02 crc kubenswrapper[4806]: I1125 15:12:02.710483 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7c468db9ff-2r8gr" Nov 25 15:12:06 crc kubenswrapper[4806]: I1125 15:12:06.440976 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-d7m7g" event={"ID":"b3220f94-14c9-4820-9d1b-6b4bb1b635fd","Type":"ContainerStarted","Data":"c55e92f5c825e60f50c87d3013e9b535cfd09ba37bdadcf7a992285d5daf3ed2"} Nov 25 15:12:06 crc kubenswrapper[4806]: I1125 15:12:06.458734 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pxx5w" event={"ID":"1df7970b-bed8-4e27-b04b-66e513683875","Type":"ContainerStarted","Data":"a4bb4f5d49c85bcca3bed07050f729a33e473b1b22813853c3526ae21689d99a"} Nov 25 15:12:06 crc kubenswrapper[4806]: I1125 15:12:06.462831 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-9thxp" event={"ID":"c1159ae9-b734-4012-b746-35d037ee4817","Type":"ContainerStarted","Data":"0ace9b21c89ba330bb9b430c1812b2367647cf057f87ea846daa097f3b315141"} Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.510303 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-b7g79" 
event={"ID":"023302d1-a345-4f55-9ac1-4a2b674e36aa","Type":"ContainerStarted","Data":"66d3100277fece3ebaec51e57459785e80c46955b033a8efd0f93c711f299b50"} Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.513091 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-xlzgr" event={"ID":"e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329","Type":"ContainerStarted","Data":"33a4204a5cc726df3bc6feeabc3996366600a90359ac11ee94546316239daf2f"} Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.515521 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-xlzgr" Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.529051 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tzsbk" event={"ID":"9dc1bbe2-49c1-4601-9acf-b1887426fdd0","Type":"ContainerStarted","Data":"3423789d53e7f138021796408b464e5dc99101bdfa255d1516b7d523c9f3b142"} Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.529114 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tzsbk" event={"ID":"9dc1bbe2-49c1-4601-9acf-b1887426fdd0","Type":"ContainerStarted","Data":"38ec4960d787a71f306d7d17637485fcdfefaf2be71028e8887caff38ad73108"} Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.529994 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tzsbk" Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.530376 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-xlzgr" Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.545510 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-jcrbm" event={"ID":"8294cfe0-6c14-49bc-bd5b-d614a68893ce","Type":"ContainerStarted","Data":"21b151a54e623997fabcae83a2b6c3ca6aae4839751faceb00b754c145abc8cb"} Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.546769 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-774b86978c-jcrbm" Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.554160 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-xlzgr" podStartSLOduration=4.779722898 podStartE2EDuration="30.554124559s" podCreationTimestamp="2025-11-25 15:11:37 +0000 UTC" firstStartedPulling="2025-11-25 15:11:40.686613642 +0000 UTC m=+1133.338756063" lastFinishedPulling="2025-11-25 15:12:06.461015313 +0000 UTC m=+1159.113157724" observedRunningTime="2025-11-25 15:12:07.544690823 +0000 UTC m=+1160.196833224" watchObservedRunningTime="2025-11-25 15:12:07.554124559 +0000 UTC m=+1160.206266970" Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.555576 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-774b86978c-jcrbm" Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.566405 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-fxzwv" 
event={"ID":"24cfe3fd-9b1a-4b9a-9b99-1b089fa2124b","Type":"ContainerStarted","Data":"ec9e265614fc4188a5a86778d4b44c5e209c1c0a374af52000b862e6ff22e2e9"} Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.566476 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-fxzwv" event={"ID":"24cfe3fd-9b1a-4b9a-9b99-1b089fa2124b","Type":"ContainerStarted","Data":"2fad5bb305496b231a70b8f34d0d79f39b5134e4f7e732af86b2147108ea72d3"} Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.567512 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-fxzwv" Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.590997 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-w6686" event={"ID":"40a580de-1093-4adc-a98c-e18202bee9e3","Type":"ContainerStarted","Data":"3576fe815f1f1a19859a614aa13553a373abc8c90dc1ee1e107d41b3266f53fd"} Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.593350 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-w6686" Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.598849 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-wnx44" event={"ID":"4877ab9d-8cd3-4270-915f-c73167e93b49","Type":"ContainerStarted","Data":"907ffab87322325f679cd09889b193c9f841945d0fa8d0c73d0fcd17ca9742c6"} Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.599952 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5cb74df96-wnx44" Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.599988 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-w6686" Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.610711 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5cb74df96-wnx44" Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.617329 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-bwwh4" event={"ID":"9cc0ebc5-e3d4-4bae-8b33-032d950705ff","Type":"ContainerStarted","Data":"f97c29d1210c75c40e9ce09e532f5188b6410f6a6e36b753f69e2d98a41f2cb7"} Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.618849 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tzsbk" podStartSLOduration=4.262003807 podStartE2EDuration="29.618822559s" podCreationTimestamp="2025-11-25 15:11:38 +0000 UTC" firstStartedPulling="2025-11-25 15:11:40.097108756 +0000 UTC m=+1132.749251167" lastFinishedPulling="2025-11-25 15:12:05.453927508 +0000 UTC m=+1158.106069919" observedRunningTime="2025-11-25 15:12:07.583847475 +0000 UTC m=+1160.235989896" watchObservedRunningTime="2025-11-25 15:12:07.618822559 +0000 UTC m=+1160.270964970" Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.623711 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-bwwh4" Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.649062 4806 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-bwwh4" Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.658774 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-r8dnj" event={"ID":"fbf78fa8-8b88-454e-a7dc-0e75f463bc45","Type":"ContainerStarted","Data":"8e4b99bed2646c05b47e79d2b2223c73c69d2094cbe89d88d09fdda2007fae4a"} Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.659847 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-r8dnj" Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.672611 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5cb74df96-wnx44" podStartSLOduration=3.447828139 podStartE2EDuration="29.672580132s" podCreationTimestamp="2025-11-25 15:11:38 +0000 UTC" firstStartedPulling="2025-11-25 15:11:40.018902595 +0000 UTC m=+1132.671045006" lastFinishedPulling="2025-11-25 15:12:06.243654588 +0000 UTC m=+1158.895796999" observedRunningTime="2025-11-25 15:12:07.659504294 +0000 UTC m=+1160.311646705" watchObservedRunningTime="2025-11-25 15:12:07.672580132 +0000 UTC m=+1160.324722553" Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.693372 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-r8dnj" Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.702878 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-q6z52" event={"ID":"ec8a3bcc-2127-44bc-8f89-db3ece24a9b9","Type":"ContainerStarted","Data":"3c52a4591f22069948b3682e9a0a4341f42acd785142b202fa1dfec940437fff"} Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.704407 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-q6z52" Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.713600 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-q6z52" Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.723054 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-fxzwv" podStartSLOduration=4.357126873 podStartE2EDuration="29.72301698s" podCreationTimestamp="2025-11-25 15:11:38 +0000 UTC" firstStartedPulling="2025-11-25 15:11:40.088035751 +0000 UTC m=+1132.740178162" lastFinishedPulling="2025-11-25 15:12:05.453925858 +0000 UTC m=+1158.106068269" observedRunningTime="2025-11-25 15:12:07.715028535 +0000 UTC m=+1160.367170966" watchObservedRunningTime="2025-11-25 15:12:07.72301698 +0000 UTC m=+1160.375159391" Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.726808 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-wfhhn" event={"ID":"63efe3dc-03df-4494-9661-9a23a89c0974","Type":"ContainerStarted","Data":"33023ae764b8f7732a41009fc99d022b373df8ebd2d7ebcc5b10a06d1d0c7754"} Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.751631 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-d7m7g" 
event={"ID":"b3220f94-14c9-4820-9d1b-6b4bb1b635fd","Type":"ContainerStarted","Data":"2009bbcc1232b12a6db5cd11a80085bda4880c5003bfcfc95244d42d564ec73f"} Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.752276 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-d7m7g" Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.801209 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-qk9m2" event={"ID":"537dc134-0732-4dfc-b0be-9c16d3d191be","Type":"ContainerStarted","Data":"1ec46ee04af545785896433de3f6f09e0e6a3fab9fe63d11e871a8e5683d5e81"} Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.804514 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-qk9m2" Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.812574 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-qk9m2" Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.814026 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-774b86978c-jcrbm" podStartSLOduration=3.871235426 podStartE2EDuration="30.81401043s" podCreationTimestamp="2025-11-25 15:11:37 +0000 UTC" firstStartedPulling="2025-11-25 15:11:39.647748023 +0000 UTC m=+1132.299890434" lastFinishedPulling="2025-11-25 15:12:06.590523027 +0000 UTC m=+1159.242665438" observedRunningTime="2025-11-25 15:12:07.752583062 +0000 UTC m=+1160.404725493" watchObservedRunningTime="2025-11-25 15:12:07.81401043 +0000 UTC m=+1160.466152841" Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.814409 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-w6686" podStartSLOduration=3.253627138 podStartE2EDuration="30.814403151s" podCreationTimestamp="2025-11-25 15:11:37 +0000 UTC" firstStartedPulling="2025-11-25 15:11:39.169797625 +0000 UTC m=+1131.821940036" lastFinishedPulling="2025-11-25 15:12:06.730573638 +0000 UTC m=+1159.382716049" observedRunningTime="2025-11-25 15:12:07.80263192 +0000 UTC m=+1160.454774331" watchObservedRunningTime="2025-11-25 15:12:07.814403151 +0000 UTC m=+1160.466545562" Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.845248 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-9thxp" event={"ID":"c1159ae9-b734-4012-b746-35d037ee4817","Type":"ContainerStarted","Data":"84f833ca7d9c280be7fc5f5d62a8be29f9290a5f5afdc5721288730a27d0484a"} Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.846263 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-9thxp" Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.849288 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-q6z52" podStartSLOduration=4.323067268 podStartE2EDuration="30.849257152s" podCreationTimestamp="2025-11-25 15:11:37 +0000 UTC" firstStartedPulling="2025-11-25 15:11:39.714437959 +0000 UTC m=+1132.366580370" lastFinishedPulling="2025-11-25 15:12:06.240627853 +0000 UTC m=+1158.892770254" observedRunningTime="2025-11-25 
15:12:07.847987636 +0000 UTC m=+1160.500130047" watchObservedRunningTime="2025-11-25 15:12:07.849257152 +0000 UTC m=+1160.501399563" Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.884713 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-687f46fc78-xdmx6" event={"ID":"dbedcc0b-12de-4497-a9f3-a9df6c88a74f","Type":"ContainerStarted","Data":"156153a2478889358391f5b10a03b41ab5a6da218825fa2127a43d9462095fa1"} Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.887215 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-687f46fc78-xdmx6" Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.893203 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-687f46fc78-xdmx6" Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.901964 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pxx5w" event={"ID":"1df7970b-bed8-4e27-b04b-66e513683875","Type":"ContainerStarted","Data":"40d748163670c3f8dcac611f1024d3d782586ab844e66ab6c9e1156c5f426ecd"} Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.902423 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pxx5w" Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.909984 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-qk9m2" podStartSLOduration=3.331844339 podStartE2EDuration="30.909937329s" podCreationTimestamp="2025-11-25 15:11:37 +0000 UTC" firstStartedPulling="2025-11-25 15:11:39.139529633 +0000 UTC m=+1131.791672034" lastFinishedPulling="2025-11-25 15:12:06.717622613 +0000 UTC m=+1159.369765024" observedRunningTime="2025-11-25 15:12:07.892628372 +0000 UTC m=+1160.544770783" watchObservedRunningTime="2025-11-25 15:12:07.909937329 +0000 UTC m=+1160.562079740" Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.988693 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-bwwh4" podStartSLOduration=4.863877305 podStartE2EDuration="30.988672964s" podCreationTimestamp="2025-11-25 15:11:37 +0000 UTC" firstStartedPulling="2025-11-25 15:11:40.087539217 +0000 UTC m=+1132.739681628" lastFinishedPulling="2025-11-25 15:12:06.212334876 +0000 UTC m=+1158.864477287" observedRunningTime="2025-11-25 15:12:07.940970812 +0000 UTC m=+1160.593113223" watchObservedRunningTime="2025-11-25 15:12:07.988672964 +0000 UTC m=+1160.640815375" Nov 25 15:12:07 crc kubenswrapper[4806]: I1125 15:12:07.990120 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-d7m7g" podStartSLOduration=18.616963928 podStartE2EDuration="30.990115465s" podCreationTimestamp="2025-11-25 15:11:37 +0000 UTC" firstStartedPulling="2025-11-25 15:11:52.377260627 +0000 UTC m=+1145.029403038" lastFinishedPulling="2025-11-25 15:12:04.750412164 +0000 UTC m=+1157.402554575" observedRunningTime="2025-11-25 15:12:07.985869376 +0000 UTC m=+1160.638011787" watchObservedRunningTime="2025-11-25 15:12:07.990115465 +0000 UTC m=+1160.642257876" Nov 25 15:12:08 crc kubenswrapper[4806]: I1125 15:12:08.014980 4806 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-r8dnj" podStartSLOduration=4.518027064 podStartE2EDuration="31.014945534s" podCreationTimestamp="2025-11-25 15:11:37 +0000 UTC" firstStartedPulling="2025-11-25 15:11:39.678455337 +0000 UTC m=+1132.330597748" lastFinishedPulling="2025-11-25 15:12:06.175373807 +0000 UTC m=+1158.827516218" observedRunningTime="2025-11-25 15:12:08.01197501 +0000 UTC m=+1160.664117421" watchObservedRunningTime="2025-11-25 15:12:08.014945534 +0000 UTC m=+1160.667087945" Nov 25 15:12:08 crc kubenswrapper[4806]: I1125 15:12:08.076565 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-9thxp" podStartSLOduration=5.699524746 podStartE2EDuration="31.076539827s" podCreationTimestamp="2025-11-25 15:11:37 +0000 UTC" firstStartedPulling="2025-11-25 15:11:40.098835034 +0000 UTC m=+1132.750977445" lastFinishedPulling="2025-11-25 15:12:05.475850115 +0000 UTC m=+1158.127992526" observedRunningTime="2025-11-25 15:12:08.07595772 +0000 UTC m=+1160.728100131" watchObservedRunningTime="2025-11-25 15:12:08.076539827 +0000 UTC m=+1160.728682238" Nov 25 15:12:08 crc kubenswrapper[4806]: I1125 15:12:08.123290 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pxx5w" podStartSLOduration=4.739356547 podStartE2EDuration="30.123259101s" podCreationTimestamp="2025-11-25 15:11:38 +0000 UTC" firstStartedPulling="2025-11-25 15:11:40.091749145 +0000 UTC m=+1132.743891556" lastFinishedPulling="2025-11-25 15:12:05.475651699 +0000 UTC m=+1158.127794110" observedRunningTime="2025-11-25 15:12:08.122426518 +0000 UTC m=+1160.774568929" watchObservedRunningTime="2025-11-25 15:12:08.123259101 +0000 UTC m=+1160.775401512" Nov 25 15:12:08 crc kubenswrapper[4806]: I1125 15:12:08.168623 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-687f46fc78-xdmx6" podStartSLOduration=4.001836356 podStartE2EDuration="30.168593817s" podCreationTimestamp="2025-11-25 15:11:38 +0000 UTC" firstStartedPulling="2025-11-25 15:11:40.075909819 +0000 UTC m=+1132.728052230" lastFinishedPulling="2025-11-25 15:12:06.24266728 +0000 UTC m=+1158.894809691" observedRunningTime="2025-11-25 15:12:08.154004746 +0000 UTC m=+1160.806147177" watchObservedRunningTime="2025-11-25 15:12:08.168593817 +0000 UTC m=+1160.820736228" Nov 25 15:12:08 crc kubenswrapper[4806]: I1125 15:12:08.911892 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-cqwgq" event={"ID":"2a080dd6-0904-4756-8b02-39d10465fea2","Type":"ContainerStarted","Data":"820e3d6991ca753b36dab85fb7e5124b5cb8e9fd3c16adf1d20a7e529431afda"} Nov 25 15:12:08 crc kubenswrapper[4806]: I1125 15:12:08.912297 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-cqwgq" Nov 25 15:12:08 crc kubenswrapper[4806]: I1125 15:12:08.914797 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-cqwgq" Nov 25 15:12:08 crc kubenswrapper[4806]: I1125 15:12:08.915967 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-wfsxk" 
event={"ID":"de253966-f7ff-485f-8108-b8ee0fd795bf","Type":"ContainerStarted","Data":"2337e557d23120da9ab6da211ad47a43b63d90c3ab8d59bb9ef68ec46c93bdd2"} Nov 25 15:12:08 crc kubenswrapper[4806]: I1125 15:12:08.916875 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-wfsxk" Nov 25 15:12:08 crc kubenswrapper[4806]: I1125 15:12:08.918935 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-wfsxk" Nov 25 15:12:08 crc kubenswrapper[4806]: I1125 15:12:08.919114 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-wfhhn" event={"ID":"63efe3dc-03df-4494-9661-9a23a89c0974","Type":"ContainerStarted","Data":"485ceaf15551cf06a012c616bbd20641aad56a3abe0c13c2dbe2e0111238ed98"} Nov 25 15:12:08 crc kubenswrapper[4806]: I1125 15:12:08.919794 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-wfhhn" Nov 25 15:12:08 crc kubenswrapper[4806]: I1125 15:12:08.922056 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-h9qg8" event={"ID":"461ceb26-b86c-4bb8-9550-131351dfa3e5","Type":"ContainerStarted","Data":"02e690b232c087336eaf57d5e313de7b0bda399dcdc2d48041ebfa74ead427c6"} Nov 25 15:12:08 crc kubenswrapper[4806]: I1125 15:12:08.922947 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-h9qg8" Nov 25 15:12:08 crc kubenswrapper[4806]: I1125 15:12:08.926209 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-h9qg8" Nov 25 15:12:08 crc kubenswrapper[4806]: I1125 15:12:08.926400 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c5xhr" event={"ID":"d2f4f05a-5ae5-4f49-87f2-a1e642ee0ac7","Type":"ContainerStarted","Data":"41402d1a340b5a4f2d405186aac0cede8c738fdb3cf01c55ea61799560092878"} Nov 25 15:12:08 crc kubenswrapper[4806]: I1125 15:12:08.927372 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c5xhr" Nov 25 15:12:08 crc kubenswrapper[4806]: I1125 15:12:08.929747 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c5xhr" Nov 25 15:12:08 crc kubenswrapper[4806]: I1125 15:12:08.934189 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-b7g79" event={"ID":"023302d1-a345-4f55-9ac1-4a2b674e36aa","Type":"ContainerStarted","Data":"4d59fc41ca49bfd58819f7523a18808f946c9912a496058ee8c2a5b18197eba8"} Nov 25 15:12:08 crc kubenswrapper[4806]: I1125 15:12:08.934248 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-864885998-b7g79" Nov 25 15:12:08 crc kubenswrapper[4806]: I1125 15:12:08.943500 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-cqwgq" podStartSLOduration=4.976923855 podStartE2EDuration="31.943480709s" 
podCreationTimestamp="2025-11-25 15:11:37 +0000 UTC" firstStartedPulling="2025-11-25 15:11:40.087986949 +0000 UTC m=+1132.740129360" lastFinishedPulling="2025-11-25 15:12:07.054543803 +0000 UTC m=+1159.706686214" observedRunningTime="2025-11-25 15:12:08.931195643 +0000 UTC m=+1161.583338064" watchObservedRunningTime="2025-11-25 15:12:08.943480709 +0000 UTC m=+1161.595623120" Nov 25 15:12:09 crc kubenswrapper[4806]: I1125 15:12:09.024556 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-wfhhn" podStartSLOduration=6.742810271 podStartE2EDuration="32.024519649s" podCreationTimestamp="2025-11-25 15:11:37 +0000 UTC" firstStartedPulling="2025-11-25 15:11:40.104491384 +0000 UTC m=+1132.756633795" lastFinishedPulling="2025-11-25 15:12:05.386200762 +0000 UTC m=+1158.038343173" observedRunningTime="2025-11-25 15:12:09.022545543 +0000 UTC m=+1161.674687964" watchObservedRunningTime="2025-11-25 15:12:09.024519649 +0000 UTC m=+1161.676662060" Nov 25 15:12:09 crc kubenswrapper[4806]: I1125 15:12:09.030689 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c5xhr" podStartSLOduration=4.666596744 podStartE2EDuration="32.030627651s" podCreationTimestamp="2025-11-25 15:11:37 +0000 UTC" firstStartedPulling="2025-11-25 15:11:39.987474621 +0000 UTC m=+1132.639617032" lastFinishedPulling="2025-11-25 15:12:07.351505528 +0000 UTC m=+1160.003647939" observedRunningTime="2025-11-25 15:12:08.981035285 +0000 UTC m=+1161.633177696" watchObservedRunningTime="2025-11-25 15:12:09.030627651 +0000 UTC m=+1161.682770062" Nov 25 15:12:09 crc kubenswrapper[4806]: I1125 15:12:09.051162 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-wfsxk" podStartSLOduration=4.179382186 podStartE2EDuration="32.051137968s" podCreationTimestamp="2025-11-25 15:11:37 +0000 UTC" firstStartedPulling="2025-11-25 15:11:39.247307266 +0000 UTC m=+1131.899449677" lastFinishedPulling="2025-11-25 15:12:07.119063048 +0000 UTC m=+1159.771205459" observedRunningTime="2025-11-25 15:12:09.048849954 +0000 UTC m=+1161.700992355" watchObservedRunningTime="2025-11-25 15:12:09.051137968 +0000 UTC m=+1161.703280389" Nov 25 15:12:09 crc kubenswrapper[4806]: I1125 15:12:09.083856 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-h9qg8" podStartSLOduration=4.357214019 podStartE2EDuration="32.083830288s" podCreationTimestamp="2025-11-25 15:11:37 +0000 UTC" firstStartedPulling="2025-11-25 15:11:39.719281425 +0000 UTC m=+1132.371423836" lastFinishedPulling="2025-11-25 15:12:07.445897684 +0000 UTC m=+1160.098040105" observedRunningTime="2025-11-25 15:12:09.078817607 +0000 UTC m=+1161.730960028" watchObservedRunningTime="2025-11-25 15:12:09.083830288 +0000 UTC m=+1161.735972699" Nov 25 15:12:09 crc kubenswrapper[4806]: I1125 15:12:09.108020 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-864885998-b7g79" podStartSLOduration=5.18478723 podStartE2EDuration="31.107995778s" podCreationTimestamp="2025-11-25 15:11:38 +0000 UTC" firstStartedPulling="2025-11-25 15:11:40.096441317 +0000 UTC m=+1132.748583738" lastFinishedPulling="2025-11-25 15:12:06.019649875 +0000 UTC m=+1158.671792286" observedRunningTime="2025-11-25 
15:12:09.106934068 +0000 UTC m=+1161.759076479" watchObservedRunningTime="2025-11-25 15:12:09.107995778 +0000 UTC m=+1161.760138189" Nov 25 15:12:11 crc kubenswrapper[4806]: I1125 15:12:11.963325 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-w5r5m" event={"ID":"61457634-dc4d-4ad9-9bdc-c95aae5df022","Type":"ContainerStarted","Data":"50c01147ab1adf9063114146d3a1edfa1fa7a38ba6454050a3aadd8cd60e0632"} Nov 25 15:12:12 crc kubenswrapper[4806]: I1125 15:12:12.233335 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-d7m7g" Nov 25 15:12:12 crc kubenswrapper[4806]: I1125 15:12:12.278704 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-w5r5m" podStartSLOduration=22.800461404 podStartE2EDuration="35.278674817s" podCreationTimestamp="2025-11-25 15:11:37 +0000 UTC" firstStartedPulling="2025-11-25 15:11:39.718441132 +0000 UTC m=+1132.370583543" lastFinishedPulling="2025-11-25 15:11:52.196654545 +0000 UTC m=+1144.848796956" observedRunningTime="2025-11-25 15:12:11.983346027 +0000 UTC m=+1164.635488448" watchObservedRunningTime="2025-11-25 15:12:12.278674817 +0000 UTC m=+1164.930817238" Nov 25 15:12:18 crc kubenswrapper[4806]: I1125 15:12:18.618954 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-9thxp" Nov 25 15:12:18 crc kubenswrapper[4806]: I1125 15:12:18.659392 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tzsbk" Nov 25 15:12:18 crc kubenswrapper[4806]: I1125 15:12:18.667546 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-wfhhn" Nov 25 15:12:18 crc kubenswrapper[4806]: I1125 15:12:18.704788 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pxx5w" Nov 25 15:12:18 crc kubenswrapper[4806]: I1125 15:12:18.771580 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-fxzwv" Nov 25 15:12:18 crc kubenswrapper[4806]: I1125 15:12:18.861751 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-864885998-b7g79" Nov 25 15:12:34 crc kubenswrapper[4806]: I1125 15:12:34.881281 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-4hs22"] Nov 25 15:12:34 crc kubenswrapper[4806]: I1125 15:12:34.892820 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-4hs22" Nov 25 15:12:34 crc kubenswrapper[4806]: I1125 15:12:34.896947 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 25 15:12:34 crc kubenswrapper[4806]: I1125 15:12:34.897220 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 25 15:12:34 crc kubenswrapper[4806]: I1125 15:12:34.897388 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 25 15:12:34 crc kubenswrapper[4806]: I1125 15:12:34.897551 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-fsfg7" Nov 25 15:12:34 crc kubenswrapper[4806]: I1125 15:12:34.920327 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-4hs22"] Nov 25 15:12:35 crc kubenswrapper[4806]: I1125 15:12:35.000978 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ppmxl"] Nov 25 15:12:35 crc kubenswrapper[4806]: I1125 15:12:35.002772 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-ppmxl" Nov 25 15:12:35 crc kubenswrapper[4806]: I1125 15:12:35.009131 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 25 15:12:35 crc kubenswrapper[4806]: I1125 15:12:35.079486 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d59b85b-a8d5-4451-aad3-6d53ba2798a4-config\") pod \"dnsmasq-dns-675f4bcbfc-4hs22\" (UID: \"7d59b85b-a8d5-4451-aad3-6d53ba2798a4\") " pod="openstack/dnsmasq-dns-675f4bcbfc-4hs22" Nov 25 15:12:35 crc kubenswrapper[4806]: I1125 15:12:35.079703 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7z9j\" (UniqueName: \"kubernetes.io/projected/7d59b85b-a8d5-4451-aad3-6d53ba2798a4-kube-api-access-x7z9j\") pod \"dnsmasq-dns-675f4bcbfc-4hs22\" (UID: \"7d59b85b-a8d5-4451-aad3-6d53ba2798a4\") " pod="openstack/dnsmasq-dns-675f4bcbfc-4hs22" Nov 25 15:12:35 crc kubenswrapper[4806]: I1125 15:12:35.090181 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ppmxl"] Nov 25 15:12:35 crc kubenswrapper[4806]: I1125 15:12:35.181782 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d59b85b-a8d5-4451-aad3-6d53ba2798a4-config\") pod \"dnsmasq-dns-675f4bcbfc-4hs22\" (UID: \"7d59b85b-a8d5-4451-aad3-6d53ba2798a4\") " pod="openstack/dnsmasq-dns-675f4bcbfc-4hs22" Nov 25 15:12:35 crc kubenswrapper[4806]: I1125 15:12:35.181865 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/994363da-e750-4d6d-9559-7eca7054bd4b-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-ppmxl\" (UID: \"994363da-e750-4d6d-9559-7eca7054bd4b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ppmxl" Nov 25 15:12:35 crc kubenswrapper[4806]: I1125 15:12:35.181939 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7z9j\" (UniqueName: \"kubernetes.io/projected/7d59b85b-a8d5-4451-aad3-6d53ba2798a4-kube-api-access-x7z9j\") pod \"dnsmasq-dns-675f4bcbfc-4hs22\" (UID: \"7d59b85b-a8d5-4451-aad3-6d53ba2798a4\") " pod="openstack/dnsmasq-dns-675f4bcbfc-4hs22" Nov 25 15:12:35 
crc kubenswrapper[4806]: I1125 15:12:35.181970 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scmk2\" (UniqueName: \"kubernetes.io/projected/994363da-e750-4d6d-9559-7eca7054bd4b-kube-api-access-scmk2\") pod \"dnsmasq-dns-78dd6ddcc-ppmxl\" (UID: \"994363da-e750-4d6d-9559-7eca7054bd4b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ppmxl" Nov 25 15:12:35 crc kubenswrapper[4806]: I1125 15:12:35.182006 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/994363da-e750-4d6d-9559-7eca7054bd4b-config\") pod \"dnsmasq-dns-78dd6ddcc-ppmxl\" (UID: \"994363da-e750-4d6d-9559-7eca7054bd4b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ppmxl" Nov 25 15:12:35 crc kubenswrapper[4806]: I1125 15:12:35.183090 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d59b85b-a8d5-4451-aad3-6d53ba2798a4-config\") pod \"dnsmasq-dns-675f4bcbfc-4hs22\" (UID: \"7d59b85b-a8d5-4451-aad3-6d53ba2798a4\") " pod="openstack/dnsmasq-dns-675f4bcbfc-4hs22" Nov 25 15:12:35 crc kubenswrapper[4806]: I1125 15:12:35.218974 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7z9j\" (UniqueName: \"kubernetes.io/projected/7d59b85b-a8d5-4451-aad3-6d53ba2798a4-kube-api-access-x7z9j\") pod \"dnsmasq-dns-675f4bcbfc-4hs22\" (UID: \"7d59b85b-a8d5-4451-aad3-6d53ba2798a4\") " pod="openstack/dnsmasq-dns-675f4bcbfc-4hs22" Nov 25 15:12:35 crc kubenswrapper[4806]: I1125 15:12:35.219852 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-4hs22" Nov 25 15:12:35 crc kubenswrapper[4806]: I1125 15:12:35.284408 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scmk2\" (UniqueName: \"kubernetes.io/projected/994363da-e750-4d6d-9559-7eca7054bd4b-kube-api-access-scmk2\") pod \"dnsmasq-dns-78dd6ddcc-ppmxl\" (UID: \"994363da-e750-4d6d-9559-7eca7054bd4b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ppmxl" Nov 25 15:12:35 crc kubenswrapper[4806]: I1125 15:12:35.284488 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/994363da-e750-4d6d-9559-7eca7054bd4b-config\") pod \"dnsmasq-dns-78dd6ddcc-ppmxl\" (UID: \"994363da-e750-4d6d-9559-7eca7054bd4b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ppmxl" Nov 25 15:12:35 crc kubenswrapper[4806]: I1125 15:12:35.284545 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/994363da-e750-4d6d-9559-7eca7054bd4b-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-ppmxl\" (UID: \"994363da-e750-4d6d-9559-7eca7054bd4b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ppmxl" Nov 25 15:12:35 crc kubenswrapper[4806]: I1125 15:12:35.286152 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/994363da-e750-4d6d-9559-7eca7054bd4b-config\") pod \"dnsmasq-dns-78dd6ddcc-ppmxl\" (UID: \"994363da-e750-4d6d-9559-7eca7054bd4b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ppmxl" Nov 25 15:12:35 crc kubenswrapper[4806]: I1125 15:12:35.287857 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/994363da-e750-4d6d-9559-7eca7054bd4b-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-ppmxl\" (UID: 
\"994363da-e750-4d6d-9559-7eca7054bd4b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ppmxl" Nov 25 15:12:35 crc kubenswrapper[4806]: I1125 15:12:35.309412 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scmk2\" (UniqueName: \"kubernetes.io/projected/994363da-e750-4d6d-9559-7eca7054bd4b-kube-api-access-scmk2\") pod \"dnsmasq-dns-78dd6ddcc-ppmxl\" (UID: \"994363da-e750-4d6d-9559-7eca7054bd4b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-ppmxl" Nov 25 15:12:35 crc kubenswrapper[4806]: I1125 15:12:35.325477 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-ppmxl" Nov 25 15:12:35 crc kubenswrapper[4806]: I1125 15:12:35.740621 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-4hs22"] Nov 25 15:12:35 crc kubenswrapper[4806]: W1125 15:12:35.860827 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod994363da_e750_4d6d_9559_7eca7054bd4b.slice/crio-9b224459560cd53965cf193eef9261f4697ea0747e57ae21285099bbe57b5726 WatchSource:0}: Error finding container 9b224459560cd53965cf193eef9261f4697ea0747e57ae21285099bbe57b5726: Status 404 returned error can't find the container with id 9b224459560cd53965cf193eef9261f4697ea0747e57ae21285099bbe57b5726 Nov 25 15:12:35 crc kubenswrapper[4806]: I1125 15:12:35.868668 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ppmxl"] Nov 25 15:12:36 crc kubenswrapper[4806]: I1125 15:12:36.178281 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-4hs22" event={"ID":"7d59b85b-a8d5-4451-aad3-6d53ba2798a4","Type":"ContainerStarted","Data":"f28fd12b18cdec3c494c70468d0719e20d8bb23379b39e5214e6f7e62db47242"} Nov 25 15:12:36 crc kubenswrapper[4806]: I1125 15:12:36.179589 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-ppmxl" event={"ID":"994363da-e750-4d6d-9559-7eca7054bd4b","Type":"ContainerStarted","Data":"9b224459560cd53965cf193eef9261f4697ea0747e57ae21285099bbe57b5726"} Nov 25 15:12:37 crc kubenswrapper[4806]: I1125 15:12:37.613913 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-4hs22"] Nov 25 15:12:37 crc kubenswrapper[4806]: I1125 15:12:37.646863 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-njrj8"] Nov 25 15:12:37 crc kubenswrapper[4806]: I1125 15:12:37.648391 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-njrj8"
Nov 25 15:12:37 crc kubenswrapper[4806]: I1125 15:12:37.667672 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-njrj8"]
Nov 25 15:12:37 crc kubenswrapper[4806]: I1125 15:12:37.839414 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b99dd44-ae01-4f09-975a-77eb055e4813-config\") pod \"dnsmasq-dns-666b6646f7-njrj8\" (UID: \"3b99dd44-ae01-4f09-975a-77eb055e4813\") " pod="openstack/dnsmasq-dns-666b6646f7-njrj8"
Nov 25 15:12:37 crc kubenswrapper[4806]: I1125 15:12:37.839556 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2vkn\" (UniqueName: \"kubernetes.io/projected/3b99dd44-ae01-4f09-975a-77eb055e4813-kube-api-access-w2vkn\") pod \"dnsmasq-dns-666b6646f7-njrj8\" (UID: \"3b99dd44-ae01-4f09-975a-77eb055e4813\") " pod="openstack/dnsmasq-dns-666b6646f7-njrj8"
Nov 25 15:12:37 crc kubenswrapper[4806]: I1125 15:12:37.839674 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3b99dd44-ae01-4f09-975a-77eb055e4813-dns-svc\") pod \"dnsmasq-dns-666b6646f7-njrj8\" (UID: \"3b99dd44-ae01-4f09-975a-77eb055e4813\") " pod="openstack/dnsmasq-dns-666b6646f7-njrj8"
Nov 25 15:12:37 crc kubenswrapper[4806]: I1125 15:12:37.941173 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b99dd44-ae01-4f09-975a-77eb055e4813-config\") pod \"dnsmasq-dns-666b6646f7-njrj8\" (UID: \"3b99dd44-ae01-4f09-975a-77eb055e4813\") " pod="openstack/dnsmasq-dns-666b6646f7-njrj8"
Nov 25 15:12:37 crc kubenswrapper[4806]: I1125 15:12:37.941282 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2vkn\" (UniqueName: \"kubernetes.io/projected/3b99dd44-ae01-4f09-975a-77eb055e4813-kube-api-access-w2vkn\") pod \"dnsmasq-dns-666b6646f7-njrj8\" (UID: \"3b99dd44-ae01-4f09-975a-77eb055e4813\") " pod="openstack/dnsmasq-dns-666b6646f7-njrj8"
Nov 25 15:12:37 crc kubenswrapper[4806]: I1125 15:12:37.941439 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3b99dd44-ae01-4f09-975a-77eb055e4813-dns-svc\") pod \"dnsmasq-dns-666b6646f7-njrj8\" (UID: \"3b99dd44-ae01-4f09-975a-77eb055e4813\") " pod="openstack/dnsmasq-dns-666b6646f7-njrj8"
Nov 25 15:12:37 crc kubenswrapper[4806]: I1125 15:12:37.942706 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b99dd44-ae01-4f09-975a-77eb055e4813-config\") pod \"dnsmasq-dns-666b6646f7-njrj8\" (UID: \"3b99dd44-ae01-4f09-975a-77eb055e4813\") " pod="openstack/dnsmasq-dns-666b6646f7-njrj8"
Nov 25 15:12:37 crc kubenswrapper[4806]: I1125 15:12:37.942743 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3b99dd44-ae01-4f09-975a-77eb055e4813-dns-svc\") pod \"dnsmasq-dns-666b6646f7-njrj8\" (UID: \"3b99dd44-ae01-4f09-975a-77eb055e4813\") " pod="openstack/dnsmasq-dns-666b6646f7-njrj8"
Nov 25 15:12:37 crc kubenswrapper[4806]: I1125 15:12:37.967156 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2vkn\" (UniqueName: \"kubernetes.io/projected/3b99dd44-ae01-4f09-975a-77eb055e4813-kube-api-access-w2vkn\") pod \"dnsmasq-dns-666b6646f7-njrj8\" (UID: \"3b99dd44-ae01-4f09-975a-77eb055e4813\") " pod="openstack/dnsmasq-dns-666b6646f7-njrj8"
Nov 25 15:12:37 crc kubenswrapper[4806]: I1125 15:12:37.972244 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-njrj8"
Nov 25 15:12:38 crc kubenswrapper[4806]: I1125 15:12:38.066626 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ppmxl"]
Nov 25 15:12:38 crc kubenswrapper[4806]: I1125 15:12:38.104512 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-mn6ms"]
Nov 25 15:12:38 crc kubenswrapper[4806]: I1125 15:12:38.105950 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-mn6ms"
Nov 25 15:12:38 crc kubenswrapper[4806]: I1125 15:12:38.213406 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-mn6ms"]
Nov 25 15:12:38 crc kubenswrapper[4806]: I1125 15:12:38.261389 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8jnt\" (UniqueName: \"kubernetes.io/projected/64d9b559-93b6-4a15-a497-a7caf051dabc-kube-api-access-w8jnt\") pod \"dnsmasq-dns-57d769cc4f-mn6ms\" (UID: \"64d9b559-93b6-4a15-a497-a7caf051dabc\") " pod="openstack/dnsmasq-dns-57d769cc4f-mn6ms"
Nov 25 15:12:38 crc kubenswrapper[4806]: I1125 15:12:38.261483 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64d9b559-93b6-4a15-a497-a7caf051dabc-config\") pod \"dnsmasq-dns-57d769cc4f-mn6ms\" (UID: \"64d9b559-93b6-4a15-a497-a7caf051dabc\") " pod="openstack/dnsmasq-dns-57d769cc4f-mn6ms"
Nov 25 15:12:38 crc kubenswrapper[4806]: I1125 15:12:38.261529 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64d9b559-93b6-4a15-a497-a7caf051dabc-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-mn6ms\" (UID: \"64d9b559-93b6-4a15-a497-a7caf051dabc\") " pod="openstack/dnsmasq-dns-57d769cc4f-mn6ms"
Nov 25 15:12:38 crc kubenswrapper[4806]: I1125 15:12:38.363649 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8jnt\" (UniqueName: \"kubernetes.io/projected/64d9b559-93b6-4a15-a497-a7caf051dabc-kube-api-access-w8jnt\") pod \"dnsmasq-dns-57d769cc4f-mn6ms\" (UID: \"64d9b559-93b6-4a15-a497-a7caf051dabc\") " pod="openstack/dnsmasq-dns-57d769cc4f-mn6ms"
Nov 25 15:12:38 crc kubenswrapper[4806]: I1125 15:12:38.363771 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64d9b559-93b6-4a15-a497-a7caf051dabc-config\") pod \"dnsmasq-dns-57d769cc4f-mn6ms\" (UID: \"64d9b559-93b6-4a15-a497-a7caf051dabc\") " pod="openstack/dnsmasq-dns-57d769cc4f-mn6ms"
Nov 25 15:12:38 crc kubenswrapper[4806]: I1125 15:12:38.363821 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64d9b559-93b6-4a15-a497-a7caf051dabc-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-mn6ms\" (UID: \"64d9b559-93b6-4a15-a497-a7caf051dabc\") " pod="openstack/dnsmasq-dns-57d769cc4f-mn6ms"
Nov 25 15:12:38 crc kubenswrapper[4806]: I1125 15:12:38.365007 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64d9b559-93b6-4a15-a497-a7caf051dabc-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-mn6ms\" (UID: \"64d9b559-93b6-4a15-a497-a7caf051dabc\") " pod="openstack/dnsmasq-dns-57d769cc4f-mn6ms"
Nov 25 15:12:38 crc kubenswrapper[4806]: I1125 15:12:38.367281 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64d9b559-93b6-4a15-a497-a7caf051dabc-config\") pod \"dnsmasq-dns-57d769cc4f-mn6ms\" (UID: \"64d9b559-93b6-4a15-a497-a7caf051dabc\") " pod="openstack/dnsmasq-dns-57d769cc4f-mn6ms"
Nov 25 15:12:38 crc kubenswrapper[4806]: I1125 15:12:38.386232 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8jnt\" (UniqueName: \"kubernetes.io/projected/64d9b559-93b6-4a15-a497-a7caf051dabc-kube-api-access-w8jnt\") pod \"dnsmasq-dns-57d769cc4f-mn6ms\" (UID: \"64d9b559-93b6-4a15-a497-a7caf051dabc\") " pod="openstack/dnsmasq-dns-57d769cc4f-mn6ms"
Nov 25 15:12:38 crc kubenswrapper[4806]: I1125 15:12:38.481792 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-mn6ms"
Nov 25 15:12:38 crc kubenswrapper[4806]: I1125 15:12:38.660084 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-njrj8"]
Nov 25 15:12:38 crc kubenswrapper[4806]: I1125 15:12:38.838918 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 25 15:12:38 crc kubenswrapper[4806]: I1125 15:12:38.841789 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Nov 25 15:12:38 crc kubenswrapper[4806]: I1125 15:12:38.859207 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Nov 25 15:12:38 crc kubenswrapper[4806]: I1125 15:12:38.859694 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Nov 25 15:12:38 crc kubenswrapper[4806]: I1125 15:12:38.859813 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Nov 25 15:12:38 crc kubenswrapper[4806]: I1125 15:12:38.869149 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 25 15:12:38 crc kubenswrapper[4806]: I1125 15:12:38.876262 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Nov 25 15:12:38 crc kubenswrapper[4806]: I1125 15:12:38.877551 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-nrvl8"
Nov 25 15:12:38 crc kubenswrapper[4806]: I1125 15:12:38.877894 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Nov 25 15:12:38 crc kubenswrapper[4806]: I1125 15:12:38.879181 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Nov 25 15:12:38 crc kubenswrapper[4806]: I1125 15:12:38.926003 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-mn6ms"]
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.003895 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/05ade21d-01af-4a3c-a82a-83b3861244ec-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"05ade21d-01af-4a3c-a82a-83b3861244ec\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.004000 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/05ade21d-01af-4a3c-a82a-83b3861244ec-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"05ade21d-01af-4a3c-a82a-83b3861244ec\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.004028 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/05ade21d-01af-4a3c-a82a-83b3861244ec-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"05ade21d-01af-4a3c-a82a-83b3861244ec\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.004053 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wvqm\" (UniqueName: \"kubernetes.io/projected/05ade21d-01af-4a3c-a82a-83b3861244ec-kube-api-access-2wvqm\") pod \"rabbitmq-server-0\" (UID: \"05ade21d-01af-4a3c-a82a-83b3861244ec\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.004093 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/05ade21d-01af-4a3c-a82a-83b3861244ec-server-conf\") pod \"rabbitmq-server-0\" (UID: \"05ade21d-01af-4a3c-a82a-83b3861244ec\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.004123 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/05ade21d-01af-4a3c-a82a-83b3861244ec-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"05ade21d-01af-4a3c-a82a-83b3861244ec\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.004184 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-08b47c07-8aef-45be-a189-b0c4efad5f68\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-08b47c07-8aef-45be-a189-b0c4efad5f68\") pod \"rabbitmq-server-0\" (UID: \"05ade21d-01af-4a3c-a82a-83b3861244ec\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.004213 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/05ade21d-01af-4a3c-a82a-83b3861244ec-pod-info\") pod \"rabbitmq-server-0\" (UID: \"05ade21d-01af-4a3c-a82a-83b3861244ec\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.004264 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/05ade21d-01af-4a3c-a82a-83b3861244ec-config-data\") pod \"rabbitmq-server-0\" (UID: \"05ade21d-01af-4a3c-a82a-83b3861244ec\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.004301 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/05ade21d-01af-4a3c-a82a-83b3861244ec-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"05ade21d-01af-4a3c-a82a-83b3861244ec\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.004349 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/05ade21d-01af-4a3c-a82a-83b3861244ec-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"05ade21d-01af-4a3c-a82a-83b3861244ec\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.106619 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/05ade21d-01af-4a3c-a82a-83b3861244ec-config-data\") pod \"rabbitmq-server-0\" (UID: \"05ade21d-01af-4a3c-a82a-83b3861244ec\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.106723 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/05ade21d-01af-4a3c-a82a-83b3861244ec-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"05ade21d-01af-4a3c-a82a-83b3861244ec\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.106759 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/05ade21d-01af-4a3c-a82a-83b3861244ec-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"05ade21d-01af-4a3c-a82a-83b3861244ec\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.106831 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/05ade21d-01af-4a3c-a82a-83b3861244ec-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"05ade21d-01af-4a3c-a82a-83b3861244ec\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.106886 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/05ade21d-01af-4a3c-a82a-83b3861244ec-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"05ade21d-01af-4a3c-a82a-83b3861244ec\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.106910 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/05ade21d-01af-4a3c-a82a-83b3861244ec-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"05ade21d-01af-4a3c-a82a-83b3861244ec\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.106933 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wvqm\" (UniqueName: \"kubernetes.io/projected/05ade21d-01af-4a3c-a82a-83b3861244ec-kube-api-access-2wvqm\") pod \"rabbitmq-server-0\" (UID: \"05ade21d-01af-4a3c-a82a-83b3861244ec\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.106986 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/05ade21d-01af-4a3c-a82a-83b3861244ec-server-conf\") pod \"rabbitmq-server-0\" (UID: \"05ade21d-01af-4a3c-a82a-83b3861244ec\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.107008 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/05ade21d-01af-4a3c-a82a-83b3861244ec-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"05ade21d-01af-4a3c-a82a-83b3861244ec\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.107091 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-08b47c07-8aef-45be-a189-b0c4efad5f68\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-08b47c07-8aef-45be-a189-b0c4efad5f68\") pod \"rabbitmq-server-0\" (UID: \"05ade21d-01af-4a3c-a82a-83b3861244ec\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.107140 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/05ade21d-01af-4a3c-a82a-83b3861244ec-pod-info\") pod \"rabbitmq-server-0\" (UID: \"05ade21d-01af-4a3c-a82a-83b3861244ec\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.108738 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/05ade21d-01af-4a3c-a82a-83b3861244ec-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"05ade21d-01af-4a3c-a82a-83b3861244ec\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.108977 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/05ade21d-01af-4a3c-a82a-83b3861244ec-config-data\") pod \"rabbitmq-server-0\" (UID: \"05ade21d-01af-4a3c-a82a-83b3861244ec\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.109287 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/05ade21d-01af-4a3c-a82a-83b3861244ec-server-conf\") pod \"rabbitmq-server-0\" (UID: \"05ade21d-01af-4a3c-a82a-83b3861244ec\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.111728 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/05ade21d-01af-4a3c-a82a-83b3861244ec-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"05ade21d-01af-4a3c-a82a-83b3861244ec\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.116626 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/05ade21d-01af-4a3c-a82a-83b3861244ec-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"05ade21d-01af-4a3c-a82a-83b3861244ec\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.117982 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/05ade21d-01af-4a3c-a82a-83b3861244ec-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"05ade21d-01af-4a3c-a82a-83b3861244ec\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.117990 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/05ade21d-01af-4a3c-a82a-83b3861244ec-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"05ade21d-01af-4a3c-a82a-83b3861244ec\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.121087 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/05ade21d-01af-4a3c-a82a-83b3861244ec-pod-info\") pod \"rabbitmq-server-0\" (UID: \"05ade21d-01af-4a3c-a82a-83b3861244ec\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.122293 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/05ade21d-01af-4a3c-a82a-83b3861244ec-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"05ade21d-01af-4a3c-a82a-83b3861244ec\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.163593 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wvqm\" (UniqueName: \"kubernetes.io/projected/05ade21d-01af-4a3c-a82a-83b3861244ec-kube-api-access-2wvqm\") pod \"rabbitmq-server-0\" (UID: \"05ade21d-01af-4a3c-a82a-83b3861244ec\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.175772 4806 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.175839 4806 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-08b47c07-8aef-45be-a189-b0c4efad5f68\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-08b47c07-8aef-45be-a189-b0c4efad5f68\") pod \"rabbitmq-server-0\" (UID: \"05ade21d-01af-4a3c-a82a-83b3861244ec\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ac4c1e236f0304110737be3b1d19c933a65d0aea2a553d5c5b453beb19db88e7/globalmount\"" pod="openstack/rabbitmq-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.218597 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-08b47c07-8aef-45be-a189-b0c4efad5f68\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-08b47c07-8aef-45be-a189-b0c4efad5f68\") pod \"rabbitmq-server-0\" (UID: \"05ade21d-01af-4a3c-a82a-83b3861244ec\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.261023 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.263192 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.270873 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-mn6ms" event={"ID":"64d9b559-93b6-4a15-a497-a7caf051dabc","Type":"ContainerStarted","Data":"bff71c6588bfcd1d8e23bbd147a3774625ba3d3e0bcc44626b11076857c8adfa"}
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.271826 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.272089 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.272506 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.272688 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.272990 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.273334 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-cvks2"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.275571 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-njrj8" event={"ID":"3b99dd44-ae01-4f09-975a-77eb055e4813","Type":"ContainerStarted","Data":"e3522db4af2e9a22a3a3a6f3980c0becad94a3248e7df2fca5fa840691f1d92e"}
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.276507 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.292576 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.413217 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/973c8ad5-1b21-4972-94ea-d0f4323db012-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"973c8ad5-1b21-4972-94ea-d0f4323db012\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.413294 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/973c8ad5-1b21-4972-94ea-d0f4323db012-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"973c8ad5-1b21-4972-94ea-d0f4323db012\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.413357 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-689b7\" (UniqueName: \"kubernetes.io/projected/973c8ad5-1b21-4972-94ea-d0f4323db012-kube-api-access-689b7\") pod \"rabbitmq-cell1-server-0\" (UID: \"973c8ad5-1b21-4972-94ea-d0f4323db012\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.413397 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/973c8ad5-1b21-4972-94ea-d0f4323db012-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"973c8ad5-1b21-4972-94ea-d0f4323db012\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.413439 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b40b2022-ddd8-4d91-a963-363efca61892\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b40b2022-ddd8-4d91-a963-363efca61892\") pod \"rabbitmq-cell1-server-0\" (UID: \"973c8ad5-1b21-4972-94ea-d0f4323db012\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.413485 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/973c8ad5-1b21-4972-94ea-d0f4323db012-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"973c8ad5-1b21-4972-94ea-d0f4323db012\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.413513 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/973c8ad5-1b21-4972-94ea-d0f4323db012-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"973c8ad5-1b21-4972-94ea-d0f4323db012\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.413537 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/973c8ad5-1b21-4972-94ea-d0f4323db012-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"973c8ad5-1b21-4972-94ea-d0f4323db012\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.413573 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/973c8ad5-1b21-4972-94ea-d0f4323db012-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"973c8ad5-1b21-4972-94ea-d0f4323db012\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.413639 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/973c8ad5-1b21-4972-94ea-d0f4323db012-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"973c8ad5-1b21-4972-94ea-d0f4323db012\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.413696 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/973c8ad5-1b21-4972-94ea-d0f4323db012-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"973c8ad5-1b21-4972-94ea-d0f4323db012\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.493472 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.515364 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/973c8ad5-1b21-4972-94ea-d0f4323db012-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"973c8ad5-1b21-4972-94ea-d0f4323db012\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.515469 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/973c8ad5-1b21-4972-94ea-d0f4323db012-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"973c8ad5-1b21-4972-94ea-d0f4323db012\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.515518 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/973c8ad5-1b21-4972-94ea-d0f4323db012-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"973c8ad5-1b21-4972-94ea-d0f4323db012\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.515555 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/973c8ad5-1b21-4972-94ea-d0f4323db012-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"973c8ad5-1b21-4972-94ea-d0f4323db012\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.515587 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-689b7\" (UniqueName: \"kubernetes.io/projected/973c8ad5-1b21-4972-94ea-d0f4323db012-kube-api-access-689b7\") pod \"rabbitmq-cell1-server-0\" (UID: \"973c8ad5-1b21-4972-94ea-d0f4323db012\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.515617 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/973c8ad5-1b21-4972-94ea-d0f4323db012-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"973c8ad5-1b21-4972-94ea-d0f4323db012\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.515667 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b40b2022-ddd8-4d91-a963-363efca61892\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b40b2022-ddd8-4d91-a963-363efca61892\") pod \"rabbitmq-cell1-server-0\" (UID: \"973c8ad5-1b21-4972-94ea-d0f4323db012\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.515699 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/973c8ad5-1b21-4972-94ea-d0f4323db012-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"973c8ad5-1b21-4972-94ea-d0f4323db012\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.515723 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/973c8ad5-1b21-4972-94ea-d0f4323db012-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"973c8ad5-1b21-4972-94ea-d0f4323db012\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.515745 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/973c8ad5-1b21-4972-94ea-d0f4323db012-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"973c8ad5-1b21-4972-94ea-d0f4323db012\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.515781 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/973c8ad5-1b21-4972-94ea-d0f4323db012-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"973c8ad5-1b21-4972-94ea-d0f4323db012\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.516551 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/973c8ad5-1b21-4972-94ea-d0f4323db012-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"973c8ad5-1b21-4972-94ea-d0f4323db012\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.516772 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/973c8ad5-1b21-4972-94ea-d0f4323db012-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"973c8ad5-1b21-4972-94ea-d0f4323db012\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.518195 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/973c8ad5-1b21-4972-94ea-d0f4323db012-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"973c8ad5-1b21-4972-94ea-d0f4323db012\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.519013 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/973c8ad5-1b21-4972-94ea-d0f4323db012-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"973c8ad5-1b21-4972-94ea-d0f4323db012\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.519427 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/973c8ad5-1b21-4972-94ea-d0f4323db012-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"973c8ad5-1b21-4972-94ea-d0f4323db012\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.522572 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/973c8ad5-1b21-4972-94ea-d0f4323db012-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"973c8ad5-1b21-4972-94ea-d0f4323db012\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.523251 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/973c8ad5-1b21-4972-94ea-d0f4323db012-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"973c8ad5-1b21-4972-94ea-d0f4323db012\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.524643 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/973c8ad5-1b21-4972-94ea-d0f4323db012-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"973c8ad5-1b21-4972-94ea-d0f4323db012\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.533154 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/973c8ad5-1b21-4972-94ea-d0f4323db012-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"973c8ad5-1b21-4972-94ea-d0f4323db012\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.533721 4806 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.533763 4806 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b40b2022-ddd8-4d91-a963-363efca61892\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b40b2022-ddd8-4d91-a963-363efca61892\") pod \"rabbitmq-cell1-server-0\" (UID: \"973c8ad5-1b21-4972-94ea-d0f4323db012\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1c595192220ab734723ac28c88da4d61bccb78937c6216ae7dd707bdc8091fda/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.544190 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-689b7\" (UniqueName: \"kubernetes.io/projected/973c8ad5-1b21-4972-94ea-d0f4323db012-kube-api-access-689b7\") pod \"rabbitmq-cell1-server-0\" (UID: \"973c8ad5-1b21-4972-94ea-d0f4323db012\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.594987 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b40b2022-ddd8-4d91-a963-363efca61892\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b40b2022-ddd8-4d91-a963-363efca61892\") pod \"rabbitmq-cell1-server-0\" (UID: \"973c8ad5-1b21-4972-94ea-d0f4323db012\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:12:39 crc kubenswrapper[4806]: I1125 15:12:39.900680 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:12:40 crc kubenswrapper[4806]: I1125 15:12:40.077544 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 25 15:12:40 crc kubenswrapper[4806]: I1125 15:12:40.296540 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"05ade21d-01af-4a3c-a82a-83b3861244ec","Type":"ContainerStarted","Data":"4f0b4a5d435b188954a361f42a481cc89ebf35c519ba152f75bdb848356826eb"}
Nov 25 15:12:40 crc kubenswrapper[4806]: I1125 15:12:40.530107 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Nov 25 15:12:40 crc kubenswrapper[4806]: I1125 15:12:40.531816 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Nov 25 15:12:40 crc kubenswrapper[4806]: I1125 15:12:40.536405 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Nov 25 15:12:40 crc kubenswrapper[4806]: I1125 15:12:40.547207 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Nov 25 15:12:40 crc kubenswrapper[4806]: I1125 15:12:40.547867 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-jhnjx"
Nov 25 15:12:40 crc kubenswrapper[4806]: I1125 15:12:40.549054 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Nov 25 15:12:40 crc kubenswrapper[4806]: I1125 15:12:40.549531 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Nov 25 15:12:40 crc kubenswrapper[4806]: I1125 15:12:40.582739 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Nov 25 15:12:40 crc kubenswrapper[4806]: I1125 15:12:40.618525 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 25 15:12:40 crc kubenswrapper[4806]: I1125 15:12:40.648118 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc946fac-46fb-45c0-8a69-2e481bf9d947-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"fc946fac-46fb-45c0-8a69-2e481bf9d947\") " pod="openstack/openstack-galera-0"
Nov 25 15:12:40 crc kubenswrapper[4806]: I1125 15:12:40.648209 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fc946fac-46fb-45c0-8a69-2e481bf9d947-config-data-generated\") pod \"openstack-galera-0\" (UID: \"fc946fac-46fb-45c0-8a69-2e481bf9d947\") " pod="openstack/openstack-galera-0"
Nov 25 15:12:40 crc kubenswrapper[4806]: I1125 15:12:40.648400 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc946fac-46fb-45c0-8a69-2e481bf9d947-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"fc946fac-46fb-45c0-8a69-2e481bf9d947\") " pod="openstack/openstack-galera-0"
Nov 25 15:12:40 crc kubenswrapper[4806]: I1125 15:12:40.648490 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fc946fac-46fb-45c0-8a69-2e481bf9d947-kolla-config\") pod \"openstack-galera-0\" (UID: \"fc946fac-46fb-45c0-8a69-2e481bf9d947\") " pod="openstack/openstack-galera-0"
Nov 25 15:12:40 crc kubenswrapper[4806]: I1125 15:12:40.648522 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-af70b5e9-9363-4b5d-b4bc-3ccf30a17e28\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-af70b5e9-9363-4b5d-b4bc-3ccf30a17e28\") pod \"openstack-galera-0\" (UID: \"fc946fac-46fb-45c0-8a69-2e481bf9d947\") " pod="openstack/openstack-galera-0"
Nov 25 15:12:40 crc kubenswrapper[4806]: I1125 15:12:40.648547 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc946fac-46fb-45c0-8a69-2e481bf9d947-operator-scripts\") pod \"openstack-galera-0\" (UID: \"fc946fac-46fb-45c0-8a69-2e481bf9d947\") " pod="openstack/openstack-galera-0"
Nov 25 15:12:40 crc kubenswrapper[4806]: I1125 15:12:40.648605 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgxng\" (UniqueName: \"kubernetes.io/projected/fc946fac-46fb-45c0-8a69-2e481bf9d947-kube-api-access-zgxng\") pod \"openstack-galera-0\" (UID: \"fc946fac-46fb-45c0-8a69-2e481bf9d947\") " pod="openstack/openstack-galera-0"
Nov 25 15:12:40 crc kubenswrapper[4806]: I1125 15:12:40.648883 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fc946fac-46fb-45c0-8a69-2e481bf9d947-config-data-default\") pod \"openstack-galera-0\" (UID: \"fc946fac-46fb-45c0-8a69-2e481bf9d947\") " pod="openstack/openstack-galera-0"
Nov 25 15:12:40 crc kubenswrapper[4806]: I1125 15:12:40.751084 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fc946fac-46fb-45c0-8a69-2e481bf9d947-config-data-default\") pod \"openstack-galera-0\" (UID: \"fc946fac-46fb-45c0-8a69-2e481bf9d947\") " pod="openstack/openstack-galera-0"
Nov 25 15:12:40 crc kubenswrapper[4806]: I1125 15:12:40.751164 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc946fac-46fb-45c0-8a69-2e481bf9d947-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"fc946fac-46fb-45c0-8a69-2e481bf9d947\") " pod="openstack/openstack-galera-0"
Nov 25 15:12:40 crc kubenswrapper[4806]: I1125 15:12:40.751194 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fc946fac-46fb-45c0-8a69-2e481bf9d947-config-data-generated\") pod \"openstack-galera-0\" (UID: \"fc946fac-46fb-45c0-8a69-2e481bf9d947\") " pod="openstack/openstack-galera-0"
Nov 25 15:12:40 crc kubenswrapper[4806]: I1125 15:12:40.751218 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc946fac-46fb-45c0-8a69-2e481bf9d947-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"fc946fac-46fb-45c0-8a69-2e481bf9d947\") " pod="openstack/openstack-galera-0"
Nov 25 15:12:40 crc kubenswrapper[4806]: I1125 15:12:40.751241 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fc946fac-46fb-45c0-8a69-2e481bf9d947-kolla-config\") pod \"openstack-galera-0\" (UID: \"fc946fac-46fb-45c0-8a69-2e481bf9d947\") " pod="openstack/openstack-galera-0"
Nov 25 15:12:40 crc kubenswrapper[4806]: I1125 15:12:40.751263 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-af70b5e9-9363-4b5d-b4bc-3ccf30a17e28\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-af70b5e9-9363-4b5d-b4bc-3ccf30a17e28\") pod \"openstack-galera-0\" (UID: \"fc946fac-46fb-45c0-8a69-2e481bf9d947\") " pod="openstack/openstack-galera-0"
Nov 25 15:12:40 crc kubenswrapper[4806]: I1125 15:12:40.751280 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc946fac-46fb-45c0-8a69-2e481bf9d947-operator-scripts\") pod \"openstack-galera-0\" (UID: \"fc946fac-46fb-45c0-8a69-2e481bf9d947\") " pod="openstack/openstack-galera-0"
Nov 25 15:12:40 crc kubenswrapper[4806]: I1125 15:12:40.751304 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgxng\" (UniqueName: \"kubernetes.io/projected/fc946fac-46fb-45c0-8a69-2e481bf9d947-kube-api-access-zgxng\") pod \"openstack-galera-0\" (UID: \"fc946fac-46fb-45c0-8a69-2e481bf9d947\") " pod="openstack/openstack-galera-0"
Nov 25 15:12:40 crc kubenswrapper[4806]: I1125 15:12:40.754025 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fc946fac-46fb-45c0-8a69-2e481bf9d947-config-data-generated\") pod \"openstack-galera-0\" (UID: \"fc946fac-46fb-45c0-8a69-2e481bf9d947\") " pod="openstack/openstack-galera-0"
Nov 25 15:12:40 crc kubenswrapper[4806]: I1125 15:12:40.754221 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fc946fac-46fb-45c0-8a69-2e481bf9d947-config-data-default\") pod \"openstack-galera-0\" (UID: \"fc946fac-46fb-45c0-8a69-2e481bf9d947\") " pod="openstack/openstack-galera-0"
Nov 25 15:12:40 crc kubenswrapper[4806]: I1125 15:12:40.754290 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fc946fac-46fb-45c0-8a69-2e481bf9d947-kolla-config\") pod \"openstack-galera-0\" (UID: \"fc946fac-46fb-45c0-8a69-2e481bf9d947\") " pod="openstack/openstack-galera-0"
Nov 25 15:12:40 crc kubenswrapper[4806]: I1125 15:12:40.757741 4806 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Nov 25 15:12:40 crc kubenswrapper[4806]: I1125 15:12:40.757778 4806 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-af70b5e9-9363-4b5d-b4bc-3ccf30a17e28\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-af70b5e9-9363-4b5d-b4bc-3ccf30a17e28\") pod \"openstack-galera-0\" (UID: \"fc946fac-46fb-45c0-8a69-2e481bf9d947\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b0e50cc82c880d0bf174655a7ed17b6837a939d06e57625bf779cf66a37d80b8/globalmount\"" pod="openstack/openstack-galera-0"
Nov 25 15:12:40 crc kubenswrapper[4806]: I1125 15:12:40.767765 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc946fac-46fb-45c0-8a69-2e481bf9d947-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"fc946fac-46fb-45c0-8a69-2e481bf9d947\") " pod="openstack/openstack-galera-0"
Nov 25 15:12:40 crc kubenswrapper[4806]: I1125 15:12:40.769182 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc946fac-46fb-45c0-8a69-2e481bf9d947-operator-scripts\") pod \"openstack-galera-0\" (UID: \"fc946fac-46fb-45c0-8a69-2e481bf9d947\") " pod="openstack/openstack-galera-0"
Nov 25 15:12:40 crc kubenswrapper[4806]: I1125 15:12:40.777576 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc946fac-46fb-45c0-8a69-2e481bf9d947-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"fc946fac-46fb-45c0-8a69-2e481bf9d947\") " pod="openstack/openstack-galera-0"
Nov 25 15:12:40 crc kubenswrapper[4806]: I1125 15:12:40.795213 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgxng\" (UniqueName: \"kubernetes.io/projected/fc946fac-46fb-45c0-8a69-2e481bf9d947-kube-api-access-zgxng\") pod \"openstack-galera-0\" (UID: \"fc946fac-46fb-45c0-8a69-2e481bf9d947\") " pod="openstack/openstack-galera-0"
Nov 25 15:12:40 crc kubenswrapper[4806]: I1125 15:12:40.836734 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-af70b5e9-9363-4b5d-b4bc-3ccf30a17e28\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-af70b5e9-9363-4b5d-b4bc-3ccf30a17e28\") pod \"openstack-galera-0\" (UID: \"fc946fac-46fb-45c0-8a69-2e481bf9d947\") " pod="openstack/openstack-galera-0"
Nov 25 15:12:40 crc kubenswrapper[4806]: I1125 15:12:40.868682 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Nov 25 15:12:41 crc kubenswrapper[4806]: I1125 15:12:41.343121 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"973c8ad5-1b21-4972-94ea-d0f4323db012","Type":"ContainerStarted","Data":"b1a23c2f3bd4b845252043116dfb1b54d99fd0701fd4f45f6e570b72bd07b88a"}
Nov 25 15:12:41 crc kubenswrapper[4806]: I1125 15:12:41.433197 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Nov 25 15:12:41 crc kubenswrapper[4806]: W1125 15:12:41.483125 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc946fac_46fb_45c0_8a69_2e481bf9d947.slice/crio-1110a0523072b814b6c8a0ecbb707566f47a9d608be07fadb8eb1d28f967e5d4 WatchSource:0}: Error finding container 1110a0523072b814b6c8a0ecbb707566f47a9d608be07fadb8eb1d28f967e5d4: Status 404 returned error can't find the container with id 1110a0523072b814b6c8a0ecbb707566f47a9d608be07fadb8eb1d28f967e5d4
Nov 25 15:12:41 crc kubenswrapper[4806]: I1125 15:12:41.956073 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Nov 25 15:12:41 crc kubenswrapper[4806]: I1125 15:12:41.957721 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Nov 25 15:12:41 crc kubenswrapper[4806]: I1125 15:12:41.964051 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Nov 25 15:12:41 crc kubenswrapper[4806]: I1125 15:12:41.964135 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-mfsqf"
Nov 25 15:12:41 crc kubenswrapper[4806]: I1125 15:12:41.965613 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Nov 25 15:12:41 crc kubenswrapper[4806]: I1125 15:12:41.965972 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Nov 25 15:12:41 crc kubenswrapper[4806]: I1125 15:12:41.971694 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.089803 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh2dv\" (UniqueName: \"kubernetes.io/projected/0c667706-daaf-4283-9ebb-64bae95b4914-kube-api-access-sh2dv\") pod \"openstack-cell1-galera-0\" (UID: \"0c667706-daaf-4283-9ebb-64bae95b4914\") " pod="openstack/openstack-cell1-galera-0"
Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.089938 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c667706-daaf-4283-9ebb-64bae95b4914-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"0c667706-daaf-4283-9ebb-64bae95b4914\") " pod="openstack/openstack-cell1-galera-0"
Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.090053 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0c667706-daaf-4283-9ebb-64bae95b4914-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"0c667706-daaf-4283-9ebb-64bae95b4914\") " pod="openstack/openstack-cell1-galera-0"
Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.090093 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0c667706-daaf-4283-9ebb-64bae95b4914-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"0c667706-daaf-4283-9ebb-64bae95b4914\") " pod="openstack/openstack-cell1-galera-0"
Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.090215 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7252b46e-a020-42e4-9492-5dd3266e0656\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7252b46e-a020-42e4-9492-5dd3266e0656\") pod \"openstack-cell1-galera-0\" (UID: \"0c667706-daaf-4283-9ebb-64bae95b4914\") " pod="openstack/openstack-cell1-galera-0"
Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.096516 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c667706-daaf-4283-9ebb-64bae95b4914-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"0c667706-daaf-4283-9ebb-64bae95b4914\") " pod="openstack/openstack-cell1-galera-0"
Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.096706 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c667706-daaf-4283-9ebb-64bae95b4914-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"0c667706-daaf-4283-9ebb-64bae95b4914\") " pod="openstack/openstack-cell1-galera-0"
Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.096801 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0c667706-daaf-4283-9ebb-64bae95b4914-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"0c667706-daaf-4283-9ebb-64bae95b4914\") " pod="openstack/openstack-cell1-galera-0"
Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.198763 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0c667706-daaf-4283-9ebb-64bae95b4914-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"0c667706-daaf-4283-9ebb-64bae95b4914\") " pod="openstack/openstack-cell1-galera-0"
Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.198859 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sh2dv\" (UniqueName: \"kubernetes.io/projected/0c667706-daaf-4283-9ebb-64bae95b4914-kube-api-access-sh2dv\") pod \"openstack-cell1-galera-0\" (UID: \"0c667706-daaf-4283-9ebb-64bae95b4914\") " pod="openstack/openstack-cell1-galera-0"
Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.198898 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c667706-daaf-4283-9ebb-64bae95b4914-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"0c667706-daaf-4283-9ebb-64bae95b4914\") " pod="openstack/openstack-cell1-galera-0"
Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.198954 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0c667706-daaf-4283-9ebb-64bae95b4914-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"0c667706-daaf-4283-9ebb-64bae95b4914\") " pod="openstack/openstack-cell1-galera-0"
Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.198978 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0c667706-daaf-4283-9ebb-64bae95b4914-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"0c667706-daaf-4283-9ebb-64bae95b4914\") " pod="openstack/openstack-cell1-galera-0"
Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.199033 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7252b46e-a020-42e4-9492-5dd3266e0656\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7252b46e-a020-42e4-9492-5dd3266e0656\") pod \"openstack-cell1-galera-0\" (UID: \"0c667706-daaf-4283-9ebb-64bae95b4914\") " pod="openstack/openstack-cell1-galera-0"
Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.199069 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c667706-daaf-4283-9ebb-64bae95b4914-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"0c667706-daaf-4283-9ebb-64bae95b4914\") " pod="openstack/openstack-cell1-galera-0"
Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.201113 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0c667706-daaf-4283-9ebb-64bae95b4914-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"0c667706-daaf-4283-9ebb-64bae95b4914\") " pod="openstack/openstack-cell1-galera-0"
Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.200040 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c667706-daaf-4283-9ebb-64bae95b4914-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"0c667706-daaf-4283-9ebb-64bae95b4914\") " pod="openstack/openstack-cell1-galera-0"
Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.202985 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c667706-daaf-4283-9ebb-64bae95b4914-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"0c667706-daaf-4283-9ebb-64bae95b4914\") " pod="openstack/openstack-cell1-galera-0"
Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.208687 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0c667706-daaf-4283-9ebb-64bae95b4914-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"0c667706-daaf-4283-9ebb-64bae95b4914\") " pod="openstack/openstack-cell1-galera-0"
Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.209521 4806 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.209578 4806 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7252b46e-a020-42e4-9492-5dd3266e0656\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7252b46e-a020-42e4-9492-5dd3266e0656\") pod \"openstack-cell1-galera-0\" (UID: \"0c667706-daaf-4283-9ebb-64bae95b4914\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2b768b264e6e33908e1f1c417c894e643839d82474cc6ccf98925d98ee463042/globalmount\"" pod="openstack/openstack-cell1-galera-0"
Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.217076 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c667706-daaf-4283-9ebb-64bae95b4914-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"0c667706-daaf-4283-9ebb-64bae95b4914\") " pod="openstack/openstack-cell1-galera-0"
Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.219980 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c667706-daaf-4283-9ebb-64bae95b4914-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"0c667706-daaf-4283-9ebb-64bae95b4914\") " pod="openstack/openstack-cell1-galera-0"
Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.221402 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0c667706-daaf-4283-9ebb-64bae95b4914-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"0c667706-daaf-4283-9ebb-64bae95b4914\") " pod="openstack/openstack-cell1-galera-0"
Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.251387 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sh2dv\" (UniqueName: \"kubernetes.io/projected/0c667706-daaf-4283-9ebb-64bae95b4914-kube-api-access-sh2dv\") pod \"openstack-cell1-galera-0\" (UID: \"0c667706-daaf-4283-9ebb-64bae95b4914\") " pod="openstack/openstack-cell1-galera-0"
Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.334199 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7252b46e-a020-42e4-9492-5dd3266e0656\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7252b46e-a020-42e4-9492-5dd3266e0656\") pod \"openstack-cell1-galera-0\" (UID: \"0c667706-daaf-4283-9ebb-64bae95b4914\") " pod="openstack/openstack-cell1-galera-0"
Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.348907 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.350492 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.353464 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.355879 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-2w5hq"
Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.356155 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.398743 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.421686 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/31cd92ea-0a03-4883-9d96-532a9d5c3bd0-memcached-tls-certs\") pod \"memcached-0\" (UID: \"31cd92ea-0a03-4883-9d96-532a9d5c3bd0\") " pod="openstack/memcached-0"
Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.422822 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrh2t\" (UniqueName: \"kubernetes.io/projected/31cd92ea-0a03-4883-9d96-532a9d5c3bd0-kube-api-access-lrh2t\") pod \"memcached-0\" (UID: \"31cd92ea-0a03-4883-9d96-532a9d5c3bd0\") " pod="openstack/memcached-0"
Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.422912 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31cd92ea-0a03-4883-9d96-532a9d5c3bd0-combined-ca-bundle\") pod \"memcached-0\" (UID: \"31cd92ea-0a03-4883-9d96-532a9d5c3bd0\") " pod="openstack/memcached-0"
Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.423086 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/31cd92ea-0a03-4883-9d96-532a9d5c3bd0-config-data\") pod \"memcached-0\" (UID: \"31cd92ea-0a03-4883-9d96-532a9d5c3bd0\") " pod="openstack/memcached-0"
Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.423349 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/31cd92ea-0a03-4883-9d96-532a9d5c3bd0-kolla-config\") pod \"memcached-0\" (UID: \"31cd92ea-0a03-4883-9d96-532a9d5c3bd0\") " pod="openstack/memcached-0"
Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.454843 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"fc946fac-46fb-45c0-8a69-2e481bf9d947","Type":"ContainerStarted","Data":"1110a0523072b814b6c8a0ecbb707566f47a9d608be07fadb8eb1d28f967e5d4"}
Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.525601 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/31cd92ea-0a03-4883-9d96-532a9d5c3bd0-config-data\") pod \"memcached-0\" (UID: \"31cd92ea-0a03-4883-9d96-532a9d5c3bd0\") " pod="openstack/memcached-0"
Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.525703 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/31cd92ea-0a03-4883-9d96-532a9d5c3bd0-kolla-config\") pod \"memcached-0\" (UID: \"31cd92ea-0a03-4883-9d96-532a9d5c3bd0\") " pod="openstack/memcached-0"
Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.525774 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/31cd92ea-0a03-4883-9d96-532a9d5c3bd0-memcached-tls-certs\") pod \"memcached-0\" (UID: \"31cd92ea-0a03-4883-9d96-532a9d5c3bd0\") " pod="openstack/memcached-0"
Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.525802 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrh2t\" (UniqueName: \"kubernetes.io/projected/31cd92ea-0a03-4883-9d96-532a9d5c3bd0-kube-api-access-lrh2t\") pod \"memcached-0\" (UID: \"31cd92ea-0a03-4883-9d96-532a9d5c3bd0\") " pod="openstack/memcached-0"
Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.525822 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31cd92ea-0a03-4883-9d96-532a9d5c3bd0-combined-ca-bundle\") pod \"memcached-0\" (UID: \"31cd92ea-0a03-4883-9d96-532a9d5c3bd0\") " pod="openstack/memcached-0"
Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.526617 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/31cd92ea-0a03-4883-9d96-532a9d5c3bd0-config-data\") pod \"memcached-0\" (UID: \"31cd92ea-0a03-4883-9d96-532a9d5c3bd0\") " pod="openstack/memcached-0"
Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.536630 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/31cd92ea-0a03-4883-9d96-532a9d5c3bd0-kolla-config\") pod \"memcached-0\" (UID: \"31cd92ea-0a03-4883-9d96-532a9d5c3bd0\") " pod="openstack/memcached-0"
Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.550988 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/31cd92ea-0a03-4883-9d96-532a9d5c3bd0-memcached-tls-certs\") pod \"memcached-0\" (UID: \"31cd92ea-0a03-4883-9d96-532a9d5c3bd0\") " pod="openstack/memcached-0"
Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.561866 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31cd92ea-0a03-4883-9d96-532a9d5c3bd0-combined-ca-bundle\") pod \"memcached-0\" (UID: \"31cd92ea-0a03-4883-9d96-532a9d5c3bd0\") " pod="openstack/memcached-0"
Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.568511 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-lrh2t\" (UniqueName: \"kubernetes.io/projected/31cd92ea-0a03-4883-9d96-532a9d5c3bd0-kube-api-access-lrh2t\") pod \"memcached-0\" (UID: \"31cd92ea-0a03-4883-9d96-532a9d5c3bd0\") " pod="openstack/memcached-0" Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.605493 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 25 15:12:42 crc kubenswrapper[4806]: I1125 15:12:42.730631 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 25 15:12:43 crc kubenswrapper[4806]: I1125 15:12:43.399685 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 25 15:12:43 crc kubenswrapper[4806]: W1125 15:12:43.452969 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c667706_daaf_4283_9ebb_64bae95b4914.slice/crio-5cc17db2f7bcca1767c6a4c268c83df5bde9186f2013e89fead518dcfd13fba5 WatchSource:0}: Error finding container 5cc17db2f7bcca1767c6a4c268c83df5bde9186f2013e89fead518dcfd13fba5: Status 404 returned error can't find the container with id 5cc17db2f7bcca1767c6a4c268c83df5bde9186f2013e89fead518dcfd13fba5 Nov 25 15:12:43 crc kubenswrapper[4806]: I1125 15:12:43.476518 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 25 15:12:44 crc kubenswrapper[4806]: I1125 15:12:44.548804 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"31cd92ea-0a03-4883-9d96-532a9d5c3bd0","Type":"ContainerStarted","Data":"ddbbf28d4ca3a984859d0269768d06aaf1490f70dd77119ab700d9152e287da8"} Nov 25 15:12:44 crc kubenswrapper[4806]: I1125 15:12:44.562884 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0c667706-daaf-4283-9ebb-64bae95b4914","Type":"ContainerStarted","Data":"5cc17db2f7bcca1767c6a4c268c83df5bde9186f2013e89fead518dcfd13fba5"} Nov 25 15:12:44 crc kubenswrapper[4806]: I1125 15:12:44.729786 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 15:12:44 crc kubenswrapper[4806]: I1125 15:12:44.732837 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 15:12:44 crc kubenswrapper[4806]: I1125 15:12:44.737601 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-ljrhl" Nov 25 15:12:44 crc kubenswrapper[4806]: I1125 15:12:44.753168 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 15:12:44 crc kubenswrapper[4806]: I1125 15:12:44.933235 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpd2s\" (UniqueName: \"kubernetes.io/projected/fc89f2fe-23ee-4e5a-ba8f-8693fff4da51-kube-api-access-bpd2s\") pod \"kube-state-metrics-0\" (UID: \"fc89f2fe-23ee-4e5a-ba8f-8693fff4da51\") " pod="openstack/kube-state-metrics-0" Nov 25 15:12:45 crc kubenswrapper[4806]: I1125 15:12:45.043855 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpd2s\" (UniqueName: \"kubernetes.io/projected/fc89f2fe-23ee-4e5a-ba8f-8693fff4da51-kube-api-access-bpd2s\") pod \"kube-state-metrics-0\" (UID: \"fc89f2fe-23ee-4e5a-ba8f-8693fff4da51\") " pod="openstack/kube-state-metrics-0" Nov 25 15:12:45 crc kubenswrapper[4806]: I1125 15:12:45.096366 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpd2s\" (UniqueName: \"kubernetes.io/projected/fc89f2fe-23ee-4e5a-ba8f-8693fff4da51-kube-api-access-bpd2s\") pod \"kube-state-metrics-0\" (UID: \"fc89f2fe-23ee-4e5a-ba8f-8693fff4da51\") " pod="openstack/kube-state-metrics-0" Nov 25 15:12:45 crc kubenswrapper[4806]: I1125 15:12:45.387885 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 15:12:45 crc kubenswrapper[4806]: I1125 15:12:45.640251 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Nov 25 15:12:45 crc kubenswrapper[4806]: I1125 15:12:45.651942 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Nov 25 15:12:45 crc kubenswrapper[4806]: I1125 15:12:45.664942 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Nov 25 15:12:45 crc kubenswrapper[4806]: I1125 15:12:45.664957 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Nov 25 15:12:45 crc kubenswrapper[4806]: I1125 15:12:45.665167 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Nov 25 15:12:45 crc kubenswrapper[4806]: I1125 15:12:45.665358 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-68694" Nov 25 15:12:45 crc kubenswrapper[4806]: I1125 15:12:45.667762 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Nov 25 15:12:45 crc kubenswrapper[4806]: I1125 15:12:45.688430 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Nov 25 15:12:45 crc kubenswrapper[4806]: I1125 15:12:45.780021 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdvmb\" (UniqueName: \"kubernetes.io/projected/82ed644a-fbd9-4ccc-a348-37293a1795f5-kube-api-access-tdvmb\") pod \"alertmanager-metric-storage-0\" (UID: \"82ed644a-fbd9-4ccc-a348-37293a1795f5\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 15:12:45 crc kubenswrapper[4806]: I1125 15:12:45.780132 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/82ed644a-fbd9-4ccc-a348-37293a1795f5-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"82ed644a-fbd9-4ccc-a348-37293a1795f5\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 15:12:45 crc kubenswrapper[4806]: I1125 15:12:45.780157 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/82ed644a-fbd9-4ccc-a348-37293a1795f5-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"82ed644a-fbd9-4ccc-a348-37293a1795f5\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 15:12:45 crc kubenswrapper[4806]: I1125 15:12:45.780202 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/82ed644a-fbd9-4ccc-a348-37293a1795f5-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"82ed644a-fbd9-4ccc-a348-37293a1795f5\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 15:12:45 crc kubenswrapper[4806]: I1125 15:12:45.780354 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/82ed644a-fbd9-4ccc-a348-37293a1795f5-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"82ed644a-fbd9-4ccc-a348-37293a1795f5\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 15:12:45 crc kubenswrapper[4806]: I1125 15:12:45.780408 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/82ed644a-fbd9-4ccc-a348-37293a1795f5-config-out\") pod \"alertmanager-metric-storage-0\" (UID: 
\"82ed644a-fbd9-4ccc-a348-37293a1795f5\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 15:12:45 crc kubenswrapper[4806]: I1125 15:12:45.780427 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/82ed644a-fbd9-4ccc-a348-37293a1795f5-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"82ed644a-fbd9-4ccc-a348-37293a1795f5\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 15:12:45 crc kubenswrapper[4806]: I1125 15:12:45.897663 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/82ed644a-fbd9-4ccc-a348-37293a1795f5-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"82ed644a-fbd9-4ccc-a348-37293a1795f5\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 15:12:45 crc kubenswrapper[4806]: I1125 15:12:45.898063 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/82ed644a-fbd9-4ccc-a348-37293a1795f5-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"82ed644a-fbd9-4ccc-a348-37293a1795f5\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 15:12:45 crc kubenswrapper[4806]: I1125 15:12:45.898161 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdvmb\" (UniqueName: \"kubernetes.io/projected/82ed644a-fbd9-4ccc-a348-37293a1795f5-kube-api-access-tdvmb\") pod \"alertmanager-metric-storage-0\" (UID: \"82ed644a-fbd9-4ccc-a348-37293a1795f5\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 15:12:45 crc kubenswrapper[4806]: I1125 15:12:45.898307 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/82ed644a-fbd9-4ccc-a348-37293a1795f5-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"82ed644a-fbd9-4ccc-a348-37293a1795f5\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 15:12:45 crc kubenswrapper[4806]: I1125 15:12:45.898397 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/82ed644a-fbd9-4ccc-a348-37293a1795f5-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"82ed644a-fbd9-4ccc-a348-37293a1795f5\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 15:12:45 crc kubenswrapper[4806]: I1125 15:12:45.898430 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/82ed644a-fbd9-4ccc-a348-37293a1795f5-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"82ed644a-fbd9-4ccc-a348-37293a1795f5\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 15:12:45 crc kubenswrapper[4806]: I1125 15:12:45.898459 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/82ed644a-fbd9-4ccc-a348-37293a1795f5-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"82ed644a-fbd9-4ccc-a348-37293a1795f5\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 15:12:46 crc kubenswrapper[4806]: I1125 15:12:45.999825 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/82ed644a-fbd9-4ccc-a348-37293a1795f5-tls-assets\") pod 
\"alertmanager-metric-storage-0\" (UID: \"82ed644a-fbd9-4ccc-a348-37293a1795f5\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 15:12:46 crc kubenswrapper[4806]: I1125 15:12:46.009148 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/82ed644a-fbd9-4ccc-a348-37293a1795f5-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"82ed644a-fbd9-4ccc-a348-37293a1795f5\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 15:12:46 crc kubenswrapper[4806]: I1125 15:12:46.020790 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdvmb\" (UniqueName: \"kubernetes.io/projected/82ed644a-fbd9-4ccc-a348-37293a1795f5-kube-api-access-tdvmb\") pod \"alertmanager-metric-storage-0\" (UID: \"82ed644a-fbd9-4ccc-a348-37293a1795f5\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 15:12:46 crc kubenswrapper[4806]: I1125 15:12:46.029131 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/82ed644a-fbd9-4ccc-a348-37293a1795f5-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"82ed644a-fbd9-4ccc-a348-37293a1795f5\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 15:12:46 crc kubenswrapper[4806]: I1125 15:12:46.029949 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/82ed644a-fbd9-4ccc-a348-37293a1795f5-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"82ed644a-fbd9-4ccc-a348-37293a1795f5\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 15:12:46 crc kubenswrapper[4806]: I1125 15:12:46.035623 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/82ed644a-fbd9-4ccc-a348-37293a1795f5-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"82ed644a-fbd9-4ccc-a348-37293a1795f5\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 15:12:46 crc kubenswrapper[4806]: I1125 15:12:46.035748 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/82ed644a-fbd9-4ccc-a348-37293a1795f5-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"82ed644a-fbd9-4ccc-a348-37293a1795f5\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 15:12:46 crc kubenswrapper[4806]: I1125 15:12:46.238323 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 25 15:12:46 crc kubenswrapper[4806]: I1125 15:12:46.240436 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 25 15:12:46 crc kubenswrapper[4806]: I1125 15:12:46.263993 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 25 15:12:46 crc kubenswrapper[4806]: I1125 15:12:46.264221 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 25 15:12:46 crc kubenswrapper[4806]: I1125 15:12:46.264309 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 25 15:12:46 crc kubenswrapper[4806]: I1125 15:12:46.264518 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 25 15:12:46 crc kubenswrapper[4806]: I1125 15:12:46.264574 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 25 15:12:46 crc kubenswrapper[4806]: I1125 15:12:46.264615 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-8x9zw" Nov 25 15:12:46 crc kubenswrapper[4806]: I1125 15:12:46.294720 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 25 15:12:46 crc kubenswrapper[4806]: I1125 15:12:46.296732 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Nov 25 15:12:46 crc kubenswrapper[4806]: I1125 15:12:46.415521 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/01548134-90ee-4d44-ab5e-60a0933ee1ea-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"01548134-90ee-4d44-ab5e-60a0933ee1ea\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:12:46 crc kubenswrapper[4806]: I1125 15:12:46.415599 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x5mx\" (UniqueName: \"kubernetes.io/projected/01548134-90ee-4d44-ab5e-60a0933ee1ea-kube-api-access-4x5mx\") pod \"prometheus-metric-storage-0\" (UID: \"01548134-90ee-4d44-ab5e-60a0933ee1ea\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:12:46 crc kubenswrapper[4806]: I1125 15:12:46.415687 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/01548134-90ee-4d44-ab5e-60a0933ee1ea-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"01548134-90ee-4d44-ab5e-60a0933ee1ea\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:12:46 crc kubenswrapper[4806]: I1125 15:12:46.415737 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5055b2b2-b3b6-41c9-9ffd-93c9ef2d6287\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5055b2b2-b3b6-41c9-9ffd-93c9ef2d6287\") pod \"prometheus-metric-storage-0\" (UID: \"01548134-90ee-4d44-ab5e-60a0933ee1ea\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:12:46 crc kubenswrapper[4806]: I1125 15:12:46.415769 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/01548134-90ee-4d44-ab5e-60a0933ee1ea-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: 
\"01548134-90ee-4d44-ab5e-60a0933ee1ea\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:12:46 crc kubenswrapper[4806]: I1125 15:12:46.415798 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/01548134-90ee-4d44-ab5e-60a0933ee1ea-config\") pod \"prometheus-metric-storage-0\" (UID: \"01548134-90ee-4d44-ab5e-60a0933ee1ea\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:12:46 crc kubenswrapper[4806]: I1125 15:12:46.415836 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/01548134-90ee-4d44-ab5e-60a0933ee1ea-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"01548134-90ee-4d44-ab5e-60a0933ee1ea\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:12:46 crc kubenswrapper[4806]: I1125 15:12:46.415924 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/01548134-90ee-4d44-ab5e-60a0933ee1ea-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"01548134-90ee-4d44-ab5e-60a0933ee1ea\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:12:46 crc kubenswrapper[4806]: I1125 15:12:46.521421 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/01548134-90ee-4d44-ab5e-60a0933ee1ea-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"01548134-90ee-4d44-ab5e-60a0933ee1ea\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:12:46 crc kubenswrapper[4806]: I1125 15:12:46.521489 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/01548134-90ee-4d44-ab5e-60a0933ee1ea-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"01548134-90ee-4d44-ab5e-60a0933ee1ea\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:12:46 crc kubenswrapper[4806]: I1125 15:12:46.521511 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x5mx\" (UniqueName: \"kubernetes.io/projected/01548134-90ee-4d44-ab5e-60a0933ee1ea-kube-api-access-4x5mx\") pod \"prometheus-metric-storage-0\" (UID: \"01548134-90ee-4d44-ab5e-60a0933ee1ea\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:12:46 crc kubenswrapper[4806]: I1125 15:12:46.521546 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/01548134-90ee-4d44-ab5e-60a0933ee1ea-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"01548134-90ee-4d44-ab5e-60a0933ee1ea\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:12:46 crc kubenswrapper[4806]: I1125 15:12:46.521575 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5055b2b2-b3b6-41c9-9ffd-93c9ef2d6287\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5055b2b2-b3b6-41c9-9ffd-93c9ef2d6287\") pod \"prometheus-metric-storage-0\" (UID: \"01548134-90ee-4d44-ab5e-60a0933ee1ea\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:12:46 crc kubenswrapper[4806]: I1125 15:12:46.521598 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/01548134-90ee-4d44-ab5e-60a0933ee1ea-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"01548134-90ee-4d44-ab5e-60a0933ee1ea\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:12:46 crc kubenswrapper[4806]: I1125 15:12:46.521622 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/01548134-90ee-4d44-ab5e-60a0933ee1ea-config\") pod \"prometheus-metric-storage-0\" (UID: \"01548134-90ee-4d44-ab5e-60a0933ee1ea\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:12:46 crc kubenswrapper[4806]: I1125 15:12:46.521651 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/01548134-90ee-4d44-ab5e-60a0933ee1ea-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"01548134-90ee-4d44-ab5e-60a0933ee1ea\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:12:46 crc kubenswrapper[4806]: I1125 15:12:46.523185 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/01548134-90ee-4d44-ab5e-60a0933ee1ea-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"01548134-90ee-4d44-ab5e-60a0933ee1ea\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:12:46 crc kubenswrapper[4806]: I1125 15:12:46.541070 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/01548134-90ee-4d44-ab5e-60a0933ee1ea-config\") pod \"prometheus-metric-storage-0\" (UID: \"01548134-90ee-4d44-ab5e-60a0933ee1ea\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:12:46 crc kubenswrapper[4806]: I1125 15:12:46.541777 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/01548134-90ee-4d44-ab5e-60a0933ee1ea-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"01548134-90ee-4d44-ab5e-60a0933ee1ea\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:12:46 crc kubenswrapper[4806]: I1125 15:12:46.542183 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/01548134-90ee-4d44-ab5e-60a0933ee1ea-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"01548134-90ee-4d44-ab5e-60a0933ee1ea\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:12:46 crc kubenswrapper[4806]: I1125 15:12:46.542300 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/01548134-90ee-4d44-ab5e-60a0933ee1ea-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"01548134-90ee-4d44-ab5e-60a0933ee1ea\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:12:46 crc kubenswrapper[4806]: I1125 15:12:46.588167 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/01548134-90ee-4d44-ab5e-60a0933ee1ea-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"01548134-90ee-4d44-ab5e-60a0933ee1ea\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:12:46 crc kubenswrapper[4806]: I1125 15:12:46.612602 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 15:12:46 crc kubenswrapper[4806]: I1125 15:12:46.628123 4806 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x5mx\" (UniqueName: \"kubernetes.io/projected/01548134-90ee-4d44-ab5e-60a0933ee1ea-kube-api-access-4x5mx\") pod \"prometheus-metric-storage-0\" (UID: \"01548134-90ee-4d44-ab5e-60a0933ee1ea\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:12:46 crc kubenswrapper[4806]: I1125 15:12:46.636246 4806 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 25 15:12:46 crc kubenswrapper[4806]: I1125 15:12:46.636293 4806 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5055b2b2-b3b6-41c9-9ffd-93c9ef2d6287\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5055b2b2-b3b6-41c9-9ffd-93c9ef2d6287\") pod \"prometheus-metric-storage-0\" (UID: \"01548134-90ee-4d44-ab5e-60a0933ee1ea\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b3a8672825276a13a5527ac11d1dc07a9dde209d1a0c9593ce9ca59149f844e0/globalmount\"" pod="openstack/prometheus-metric-storage-0" Nov 25 15:12:46 crc kubenswrapper[4806]: I1125 15:12:46.921376 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5055b2b2-b3b6-41c9-9ffd-93c9ef2d6287\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5055b2b2-b3b6-41c9-9ffd-93c9ef2d6287\") pod \"prometheus-metric-storage-0\" (UID: \"01548134-90ee-4d44-ab5e-60a0933ee1ea\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:12:47 crc kubenswrapper[4806]: I1125 15:12:47.213455 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 25 15:12:48 crc kubenswrapper[4806]: I1125 15:12:48.568215 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-l6mv2"] Nov 25 15:12:48 crc kubenswrapper[4806]: I1125 15:12:48.569874 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-l6mv2" Nov 25 15:12:48 crc kubenswrapper[4806]: I1125 15:12:48.575399 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 25 15:12:48 crc kubenswrapper[4806]: I1125 15:12:48.575892 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 25 15:12:48 crc kubenswrapper[4806]: I1125 15:12:48.576047 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-9rljq" Nov 25 15:12:48 crc kubenswrapper[4806]: I1125 15:12:48.616261 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-svmbm"] Nov 25 15:12:48 crc kubenswrapper[4806]: I1125 15:12:48.618577 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-svmbm" Nov 25 15:12:48 crc kubenswrapper[4806]: I1125 15:12:48.639218 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-l6mv2"] Nov 25 15:12:48 crc kubenswrapper[4806]: I1125 15:12:48.664869 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-svmbm"] Nov 25 15:12:48 crc kubenswrapper[4806]: I1125 15:12:48.697532 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c90d07c6-4f04-48d1-ae1f-bb15f60ba44b-combined-ca-bundle\") pod \"ovn-controller-l6mv2\" (UID: \"c90d07c6-4f04-48d1-ae1f-bb15f60ba44b\") " pod="openstack/ovn-controller-l6mv2" Nov 25 15:12:48 crc kubenswrapper[4806]: I1125 15:12:48.697598 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0ebac08b-471e-4b28-98fb-b9bab2e3f505-var-run\") pod \"ovn-controller-ovs-svmbm\" (UID: \"0ebac08b-471e-4b28-98fb-b9bab2e3f505\") " pod="openstack/ovn-controller-ovs-svmbm" Nov 25 15:12:48 crc kubenswrapper[4806]: I1125 15:12:48.697649 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/0ebac08b-471e-4b28-98fb-b9bab2e3f505-etc-ovs\") pod \"ovn-controller-ovs-svmbm\" (UID: \"0ebac08b-471e-4b28-98fb-b9bab2e3f505\") " pod="openstack/ovn-controller-ovs-svmbm" Nov 25 15:12:48 crc kubenswrapper[4806]: I1125 15:12:48.697721 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c90d07c6-4f04-48d1-ae1f-bb15f60ba44b-var-log-ovn\") pod \"ovn-controller-l6mv2\" (UID: \"c90d07c6-4f04-48d1-ae1f-bb15f60ba44b\") " pod="openstack/ovn-controller-l6mv2" Nov 25 15:12:48 crc kubenswrapper[4806]: I1125 15:12:48.697740 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c90d07c6-4f04-48d1-ae1f-bb15f60ba44b-var-run\") pod \"ovn-controller-l6mv2\" (UID: \"c90d07c6-4f04-48d1-ae1f-bb15f60ba44b\") " pod="openstack/ovn-controller-l6mv2" Nov 25 15:12:48 crc kubenswrapper[4806]: I1125 15:12:48.697766 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0ebac08b-471e-4b28-98fb-b9bab2e3f505-scripts\") pod \"ovn-controller-ovs-svmbm\" (UID: \"0ebac08b-471e-4b28-98fb-b9bab2e3f505\") " pod="openstack/ovn-controller-ovs-svmbm" Nov 25 15:12:48 crc kubenswrapper[4806]: I1125 15:12:48.697821 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrjn9\" (UniqueName: \"kubernetes.io/projected/0ebac08b-471e-4b28-98fb-b9bab2e3f505-kube-api-access-hrjn9\") pod \"ovn-controller-ovs-svmbm\" (UID: \"0ebac08b-471e-4b28-98fb-b9bab2e3f505\") " pod="openstack/ovn-controller-ovs-svmbm" Nov 25 15:12:48 crc kubenswrapper[4806]: I1125 15:12:48.697853 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c90d07c6-4f04-48d1-ae1f-bb15f60ba44b-ovn-controller-tls-certs\") pod \"ovn-controller-l6mv2\" (UID: \"c90d07c6-4f04-48d1-ae1f-bb15f60ba44b\") " pod="openstack/ovn-controller-l6mv2" Nov 25 15:12:48 crc 
kubenswrapper[4806]: I1125 15:12:48.697938 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/0ebac08b-471e-4b28-98fb-b9bab2e3f505-var-log\") pod \"ovn-controller-ovs-svmbm\" (UID: \"0ebac08b-471e-4b28-98fb-b9bab2e3f505\") " pod="openstack/ovn-controller-ovs-svmbm" Nov 25 15:12:48 crc kubenswrapper[4806]: I1125 15:12:48.697963 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/0ebac08b-471e-4b28-98fb-b9bab2e3f505-var-lib\") pod \"ovn-controller-ovs-svmbm\" (UID: \"0ebac08b-471e-4b28-98fb-b9bab2e3f505\") " pod="openstack/ovn-controller-ovs-svmbm" Nov 25 15:12:48 crc kubenswrapper[4806]: I1125 15:12:48.697990 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c90d07c6-4f04-48d1-ae1f-bb15f60ba44b-var-run-ovn\") pod \"ovn-controller-l6mv2\" (UID: \"c90d07c6-4f04-48d1-ae1f-bb15f60ba44b\") " pod="openstack/ovn-controller-l6mv2" Nov 25 15:12:48 crc kubenswrapper[4806]: I1125 15:12:48.698055 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7dsp\" (UniqueName: \"kubernetes.io/projected/c90d07c6-4f04-48d1-ae1f-bb15f60ba44b-kube-api-access-k7dsp\") pod \"ovn-controller-l6mv2\" (UID: \"c90d07c6-4f04-48d1-ae1f-bb15f60ba44b\") " pod="openstack/ovn-controller-l6mv2" Nov 25 15:12:48 crc kubenswrapper[4806]: I1125 15:12:48.698084 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c90d07c6-4f04-48d1-ae1f-bb15f60ba44b-scripts\") pod \"ovn-controller-l6mv2\" (UID: \"c90d07c6-4f04-48d1-ae1f-bb15f60ba44b\") " pod="openstack/ovn-controller-l6mv2" Nov 25 15:12:48 crc kubenswrapper[4806]: I1125 15:12:48.799812 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c90d07c6-4f04-48d1-ae1f-bb15f60ba44b-var-log-ovn\") pod \"ovn-controller-l6mv2\" (UID: \"c90d07c6-4f04-48d1-ae1f-bb15f60ba44b\") " pod="openstack/ovn-controller-l6mv2" Nov 25 15:12:48 crc kubenswrapper[4806]: I1125 15:12:48.799851 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c90d07c6-4f04-48d1-ae1f-bb15f60ba44b-var-run\") pod \"ovn-controller-l6mv2\" (UID: \"c90d07c6-4f04-48d1-ae1f-bb15f60ba44b\") " pod="openstack/ovn-controller-l6mv2" Nov 25 15:12:48 crc kubenswrapper[4806]: I1125 15:12:48.799915 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0ebac08b-471e-4b28-98fb-b9bab2e3f505-scripts\") pod \"ovn-controller-ovs-svmbm\" (UID: \"0ebac08b-471e-4b28-98fb-b9bab2e3f505\") " pod="openstack/ovn-controller-ovs-svmbm" Nov 25 15:12:48 crc kubenswrapper[4806]: I1125 15:12:48.799972 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrjn9\" (UniqueName: \"kubernetes.io/projected/0ebac08b-471e-4b28-98fb-b9bab2e3f505-kube-api-access-hrjn9\") pod \"ovn-controller-ovs-svmbm\" (UID: \"0ebac08b-471e-4b28-98fb-b9bab2e3f505\") " pod="openstack/ovn-controller-ovs-svmbm" Nov 25 15:12:48 crc kubenswrapper[4806]: I1125 15:12:48.799995 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c90d07c6-4f04-48d1-ae1f-bb15f60ba44b-ovn-controller-tls-certs\") pod \"ovn-controller-l6mv2\" (UID: \"c90d07c6-4f04-48d1-ae1f-bb15f60ba44b\") " pod="openstack/ovn-controller-l6mv2" Nov 25 15:12:48 crc kubenswrapper[4806]: I1125 15:12:48.800016 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/0ebac08b-471e-4b28-98fb-b9bab2e3f505-var-log\") pod \"ovn-controller-ovs-svmbm\" (UID: \"0ebac08b-471e-4b28-98fb-b9bab2e3f505\") " pod="openstack/ovn-controller-ovs-svmbm" Nov 25 15:12:48 crc kubenswrapper[4806]: I1125 15:12:48.800033 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/0ebac08b-471e-4b28-98fb-b9bab2e3f505-var-lib\") pod \"ovn-controller-ovs-svmbm\" (UID: \"0ebac08b-471e-4b28-98fb-b9bab2e3f505\") " pod="openstack/ovn-controller-ovs-svmbm" Nov 25 15:12:48 crc kubenswrapper[4806]: I1125 15:12:48.800052 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c90d07c6-4f04-48d1-ae1f-bb15f60ba44b-var-run-ovn\") pod \"ovn-controller-l6mv2\" (UID: \"c90d07c6-4f04-48d1-ae1f-bb15f60ba44b\") " pod="openstack/ovn-controller-l6mv2" Nov 25 15:12:48 crc kubenswrapper[4806]: I1125 15:12:48.800273 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7dsp\" (UniqueName: \"kubernetes.io/projected/c90d07c6-4f04-48d1-ae1f-bb15f60ba44b-kube-api-access-k7dsp\") pod \"ovn-controller-l6mv2\" (UID: \"c90d07c6-4f04-48d1-ae1f-bb15f60ba44b\") " pod="openstack/ovn-controller-l6mv2" Nov 25 15:12:48 crc kubenswrapper[4806]: I1125 15:12:48.800296 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c90d07c6-4f04-48d1-ae1f-bb15f60ba44b-scripts\") pod \"ovn-controller-l6mv2\" (UID: \"c90d07c6-4f04-48d1-ae1f-bb15f60ba44b\") " pod="openstack/ovn-controller-l6mv2" Nov 25 15:12:48 crc kubenswrapper[4806]: I1125 15:12:48.800468 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c90d07c6-4f04-48d1-ae1f-bb15f60ba44b-combined-ca-bundle\") pod \"ovn-controller-l6mv2\" (UID: \"c90d07c6-4f04-48d1-ae1f-bb15f60ba44b\") " pod="openstack/ovn-controller-l6mv2" Nov 25 15:12:48 crc kubenswrapper[4806]: I1125 15:12:48.800498 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0ebac08b-471e-4b28-98fb-b9bab2e3f505-var-run\") pod \"ovn-controller-ovs-svmbm\" (UID: \"0ebac08b-471e-4b28-98fb-b9bab2e3f505\") " pod="openstack/ovn-controller-ovs-svmbm" Nov 25 15:12:48 crc kubenswrapper[4806]: I1125 15:12:48.800551 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/0ebac08b-471e-4b28-98fb-b9bab2e3f505-etc-ovs\") pod \"ovn-controller-ovs-svmbm\" (UID: \"0ebac08b-471e-4b28-98fb-b9bab2e3f505\") " pod="openstack/ovn-controller-ovs-svmbm" Nov 25 15:12:48 crc kubenswrapper[4806]: I1125 15:12:48.803259 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/0ebac08b-471e-4b28-98fb-b9bab2e3f505-etc-ovs\") pod \"ovn-controller-ovs-svmbm\" (UID: \"0ebac08b-471e-4b28-98fb-b9bab2e3f505\") " 
pod="openstack/ovn-controller-ovs-svmbm" Nov 25 15:12:48 crc kubenswrapper[4806]: I1125 15:12:48.803516 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c90d07c6-4f04-48d1-ae1f-bb15f60ba44b-var-run-ovn\") pod \"ovn-controller-l6mv2\" (UID: \"c90d07c6-4f04-48d1-ae1f-bb15f60ba44b\") " pod="openstack/ovn-controller-l6mv2" Nov 25 15:12:48 crc kubenswrapper[4806]: I1125 15:12:48.803591 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/0ebac08b-471e-4b28-98fb-b9bab2e3f505-var-log\") pod \"ovn-controller-ovs-svmbm\" (UID: \"0ebac08b-471e-4b28-98fb-b9bab2e3f505\") " pod="openstack/ovn-controller-ovs-svmbm" Nov 25 15:12:48 crc kubenswrapper[4806]: I1125 15:12:48.803675 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c90d07c6-4f04-48d1-ae1f-bb15f60ba44b-var-log-ovn\") pod \"ovn-controller-l6mv2\" (UID: \"c90d07c6-4f04-48d1-ae1f-bb15f60ba44b\") " pod="openstack/ovn-controller-l6mv2" Nov 25 15:12:48 crc kubenswrapper[4806]: I1125 15:12:48.803697 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c90d07c6-4f04-48d1-ae1f-bb15f60ba44b-var-run\") pod \"ovn-controller-l6mv2\" (UID: \"c90d07c6-4f04-48d1-ae1f-bb15f60ba44b\") " pod="openstack/ovn-controller-l6mv2" Nov 25 15:12:48 crc kubenswrapper[4806]: I1125 15:12:48.804811 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0ebac08b-471e-4b28-98fb-b9bab2e3f505-var-run\") pod \"ovn-controller-ovs-svmbm\" (UID: \"0ebac08b-471e-4b28-98fb-b9bab2e3f505\") " pod="openstack/ovn-controller-ovs-svmbm" Nov 25 15:12:48 crc kubenswrapper[4806]: I1125 15:12:48.804924 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/0ebac08b-471e-4b28-98fb-b9bab2e3f505-var-lib\") pod \"ovn-controller-ovs-svmbm\" (UID: \"0ebac08b-471e-4b28-98fb-b9bab2e3f505\") " pod="openstack/ovn-controller-ovs-svmbm" Nov 25 15:12:48 crc kubenswrapper[4806]: I1125 15:12:48.805932 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c90d07c6-4f04-48d1-ae1f-bb15f60ba44b-scripts\") pod \"ovn-controller-l6mv2\" (UID: \"c90d07c6-4f04-48d1-ae1f-bb15f60ba44b\") " pod="openstack/ovn-controller-l6mv2" Nov 25 15:12:48 crc kubenswrapper[4806]: I1125 15:12:48.808839 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0ebac08b-471e-4b28-98fb-b9bab2e3f505-scripts\") pod \"ovn-controller-ovs-svmbm\" (UID: \"0ebac08b-471e-4b28-98fb-b9bab2e3f505\") " pod="openstack/ovn-controller-ovs-svmbm" Nov 25 15:12:48 crc kubenswrapper[4806]: I1125 15:12:48.846229 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c90d07c6-4f04-48d1-ae1f-bb15f60ba44b-combined-ca-bundle\") pod \"ovn-controller-l6mv2\" (UID: \"c90d07c6-4f04-48d1-ae1f-bb15f60ba44b\") " pod="openstack/ovn-controller-l6mv2" Nov 25 15:12:48 crc kubenswrapper[4806]: I1125 15:12:48.846278 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c90d07c6-4f04-48d1-ae1f-bb15f60ba44b-ovn-controller-tls-certs\") pod 
\"ovn-controller-l6mv2\" (UID: \"c90d07c6-4f04-48d1-ae1f-bb15f60ba44b\") " pod="openstack/ovn-controller-l6mv2" Nov 25 15:12:48 crc kubenswrapper[4806]: I1125 15:12:48.868559 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrjn9\" (UniqueName: \"kubernetes.io/projected/0ebac08b-471e-4b28-98fb-b9bab2e3f505-kube-api-access-hrjn9\") pod \"ovn-controller-ovs-svmbm\" (UID: \"0ebac08b-471e-4b28-98fb-b9bab2e3f505\") " pod="openstack/ovn-controller-ovs-svmbm" Nov 25 15:12:48 crc kubenswrapper[4806]: I1125 15:12:48.877101 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7dsp\" (UniqueName: \"kubernetes.io/projected/c90d07c6-4f04-48d1-ae1f-bb15f60ba44b-kube-api-access-k7dsp\") pod \"ovn-controller-l6mv2\" (UID: \"c90d07c6-4f04-48d1-ae1f-bb15f60ba44b\") " pod="openstack/ovn-controller-l6mv2" Nov 25 15:12:48 crc kubenswrapper[4806]: I1125 15:12:48.916856 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-l6mv2" Nov 25 15:12:48 crc kubenswrapper[4806]: I1125 15:12:48.965211 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-svmbm" Nov 25 15:12:49 crc kubenswrapper[4806]: I1125 15:12:49.819270 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"fc89f2fe-23ee-4e5a-ba8f-8693fff4da51","Type":"ContainerStarted","Data":"49fb9502f25e97d668367e42308c965f982dafabd12c972dacfcb13f7717f89e"} Nov 25 15:12:52 crc kubenswrapper[4806]: I1125 15:12:52.006879 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 25 15:12:52 crc kubenswrapper[4806]: I1125 15:12:52.016803 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Nov 25 15:12:52 crc kubenswrapper[4806]: W1125 15:12:52.096452 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82ed644a_fbd9_4ccc_a348_37293a1795f5.slice/crio-bf34c6eb6a5cf5b180dd27e68acac85dac99d827b1acf54464a6f16a8cc59f83 WatchSource:0}: Error finding container bf34c6eb6a5cf5b180dd27e68acac85dac99d827b1acf54464a6f16a8cc59f83: Status 404 returned error can't find the container with id bf34c6eb6a5cf5b180dd27e68acac85dac99d827b1acf54464a6f16a8cc59f83 Nov 25 15:12:52 crc kubenswrapper[4806]: I1125 15:12:52.417929 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-l6mv2"] Nov 25 15:12:52 crc kubenswrapper[4806]: I1125 15:12:52.944764 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"82ed644a-fbd9-4ccc-a348-37293a1795f5","Type":"ContainerStarted","Data":"bf34c6eb6a5cf5b180dd27e68acac85dac99d827b1acf54464a6f16a8cc59f83"} Nov 25 15:12:52 crc kubenswrapper[4806]: I1125 15:12:52.948638 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-l6mv2" event={"ID":"c90d07c6-4f04-48d1-ae1f-bb15f60ba44b","Type":"ContainerStarted","Data":"bef8b3d851f5f6e0b8fb1d8e53fa0c28f2c384e3dd2406d20bff81069de32a25"} Nov 25 15:12:52 crc kubenswrapper[4806]: I1125 15:12:52.957481 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"01548134-90ee-4d44-ab5e-60a0933ee1ea","Type":"ContainerStarted","Data":"2138b165c5f647f03214a9ef259bdfa1b649fd7b209066f559c823dd4a0c371c"} Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 
15:12:53.151210 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-svmbm"] Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.300371 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.302126 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.310137 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-zrrhc" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.310586 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.310591 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.310598 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.310843 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.318732 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.374916 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec42948f-25cf-4ae0-8553-dfd5dcc43021-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ec42948f-25cf-4ae0-8553-dfd5dcc43021\") " pod="openstack/ovsdbserver-nb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.375971 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-13b2a46e-b811-4de8-855f-6e1e01523aa0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-13b2a46e-b811-4de8-855f-6e1e01523aa0\") pod \"ovsdbserver-nb-0\" (UID: \"ec42948f-25cf-4ae0-8553-dfd5dcc43021\") " pod="openstack/ovsdbserver-nb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.376020 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec42948f-25cf-4ae0-8553-dfd5dcc43021-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ec42948f-25cf-4ae0-8553-dfd5dcc43021\") " pod="openstack/ovsdbserver-nb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.376361 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhxkd\" (UniqueName: \"kubernetes.io/projected/ec42948f-25cf-4ae0-8553-dfd5dcc43021-kube-api-access-vhxkd\") pod \"ovsdbserver-nb-0\" (UID: \"ec42948f-25cf-4ae0-8553-dfd5dcc43021\") " pod="openstack/ovsdbserver-nb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.376477 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec42948f-25cf-4ae0-8553-dfd5dcc43021-config\") pod \"ovsdbserver-nb-0\" (UID: \"ec42948f-25cf-4ae0-8553-dfd5dcc43021\") " pod="openstack/ovsdbserver-nb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.376641 4806 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ec42948f-25cf-4ae0-8553-dfd5dcc43021-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ec42948f-25cf-4ae0-8553-dfd5dcc43021\") " pod="openstack/ovsdbserver-nb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.376716 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ec42948f-25cf-4ae0-8553-dfd5dcc43021-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ec42948f-25cf-4ae0-8553-dfd5dcc43021\") " pod="openstack/ovsdbserver-nb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.376791 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec42948f-25cf-4ae0-8553-dfd5dcc43021-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ec42948f-25cf-4ae0-8553-dfd5dcc43021\") " pod="openstack/ovsdbserver-nb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.478858 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec42948f-25cf-4ae0-8553-dfd5dcc43021-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ec42948f-25cf-4ae0-8553-dfd5dcc43021\") " pod="openstack/ovsdbserver-nb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.478950 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-13b2a46e-b811-4de8-855f-6e1e01523aa0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-13b2a46e-b811-4de8-855f-6e1e01523aa0\") pod \"ovsdbserver-nb-0\" (UID: \"ec42948f-25cf-4ae0-8553-dfd5dcc43021\") " pod="openstack/ovsdbserver-nb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.478982 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec42948f-25cf-4ae0-8553-dfd5dcc43021-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ec42948f-25cf-4ae0-8553-dfd5dcc43021\") " pod="openstack/ovsdbserver-nb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.479001 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhxkd\" (UniqueName: \"kubernetes.io/projected/ec42948f-25cf-4ae0-8553-dfd5dcc43021-kube-api-access-vhxkd\") pod \"ovsdbserver-nb-0\" (UID: \"ec42948f-25cf-4ae0-8553-dfd5dcc43021\") " pod="openstack/ovsdbserver-nb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.479024 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec42948f-25cf-4ae0-8553-dfd5dcc43021-config\") pod \"ovsdbserver-nb-0\" (UID: \"ec42948f-25cf-4ae0-8553-dfd5dcc43021\") " pod="openstack/ovsdbserver-nb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.479075 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ec42948f-25cf-4ae0-8553-dfd5dcc43021-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ec42948f-25cf-4ae0-8553-dfd5dcc43021\") " pod="openstack/ovsdbserver-nb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.479111 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ec42948f-25cf-4ae0-8553-dfd5dcc43021-scripts\") pod 
\"ovsdbserver-nb-0\" (UID: \"ec42948f-25cf-4ae0-8553-dfd5dcc43021\") " pod="openstack/ovsdbserver-nb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.479136 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec42948f-25cf-4ae0-8553-dfd5dcc43021-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ec42948f-25cf-4ae0-8553-dfd5dcc43021\") " pod="openstack/ovsdbserver-nb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.482642 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/ec42948f-25cf-4ae0-8553-dfd5dcc43021-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"ec42948f-25cf-4ae0-8553-dfd5dcc43021\") " pod="openstack/ovsdbserver-nb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.485157 4806 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.485203 4806 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-13b2a46e-b811-4de8-855f-6e1e01523aa0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-13b2a46e-b811-4de8-855f-6e1e01523aa0\") pod \"ovsdbserver-nb-0\" (UID: \"ec42948f-25cf-4ae0-8553-dfd5dcc43021\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0677f682933b71426afd6930432b2e825445714f1a344dd612cccc6a88aa97ea/globalmount\"" pod="openstack/ovsdbserver-nb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.487639 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ec42948f-25cf-4ae0-8553-dfd5dcc43021-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"ec42948f-25cf-4ae0-8553-dfd5dcc43021\") " pod="openstack/ovsdbserver-nb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.489202 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec42948f-25cf-4ae0-8553-dfd5dcc43021-config\") pod \"ovsdbserver-nb-0\" (UID: \"ec42948f-25cf-4ae0-8553-dfd5dcc43021\") " pod="openstack/ovsdbserver-nb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.492729 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec42948f-25cf-4ae0-8553-dfd5dcc43021-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ec42948f-25cf-4ae0-8553-dfd5dcc43021\") " pod="openstack/ovsdbserver-nb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.492751 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec42948f-25cf-4ae0-8553-dfd5dcc43021-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"ec42948f-25cf-4ae0-8553-dfd5dcc43021\") " pod="openstack/ovsdbserver-nb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.493217 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec42948f-25cf-4ae0-8553-dfd5dcc43021-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"ec42948f-25cf-4ae0-8553-dfd5dcc43021\") " pod="openstack/ovsdbserver-nb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.510594 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-vhxkd\" (UniqueName: \"kubernetes.io/projected/ec42948f-25cf-4ae0-8553-dfd5dcc43021-kube-api-access-vhxkd\") pod \"ovsdbserver-nb-0\" (UID: \"ec42948f-25cf-4ae0-8553-dfd5dcc43021\") " pod="openstack/ovsdbserver-nb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.552661 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.559239 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.566536 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-mjhmx" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.566719 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.566782 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.567329 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.569413 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-13b2a46e-b811-4de8-855f-6e1e01523aa0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-13b2a46e-b811-4de8-855f-6e1e01523aa0\") pod \"ovsdbserver-nb-0\" (UID: \"ec42948f-25cf-4ae0-8553-dfd5dcc43021\") " pod="openstack/ovsdbserver-nb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.616413 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.653239 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.683405 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2235e648-6ec4-4d98-a879-46f4f56b93e0-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"2235e648-6ec4-4d98-a879-46f4f56b93e0\") " pod="openstack/ovsdbserver-sb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.683513 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2235e648-6ec4-4d98-a879-46f4f56b93e0-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"2235e648-6ec4-4d98-a879-46f4f56b93e0\") " pod="openstack/ovsdbserver-sb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.683546 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2235e648-6ec4-4d98-a879-46f4f56b93e0-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2235e648-6ec4-4d98-a879-46f4f56b93e0\") " pod="openstack/ovsdbserver-sb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.683581 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ba88f5a9-5f0c-427d-9d9d-095eda4c39b5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba88f5a9-5f0c-427d-9d9d-095eda4c39b5\") pod \"ovsdbserver-sb-0\" (UID: \"2235e648-6ec4-4d98-a879-46f4f56b93e0\") " pod="openstack/ovsdbserver-sb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.683664 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2235e648-6ec4-4d98-a879-46f4f56b93e0-config\") pod \"ovsdbserver-sb-0\" (UID: \"2235e648-6ec4-4d98-a879-46f4f56b93e0\") " pod="openstack/ovsdbserver-sb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.683690 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2235e648-6ec4-4d98-a879-46f4f56b93e0-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2235e648-6ec4-4d98-a879-46f4f56b93e0\") " pod="openstack/ovsdbserver-sb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.683714 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2235e648-6ec4-4d98-a879-46f4f56b93e0-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"2235e648-6ec4-4d98-a879-46f4f56b93e0\") " pod="openstack/ovsdbserver-sb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.683739 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2npb8\" (UniqueName: \"kubernetes.io/projected/2235e648-6ec4-4d98-a879-46f4f56b93e0-kube-api-access-2npb8\") pod \"ovsdbserver-sb-0\" (UID: \"2235e648-6ec4-4d98-a879-46f4f56b93e0\") " pod="openstack/ovsdbserver-sb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.785801 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2235e648-6ec4-4d98-a879-46f4f56b93e0-config\") pod \"ovsdbserver-sb-0\" (UID: \"2235e648-6ec4-4d98-a879-46f4f56b93e0\") " pod="openstack/ovsdbserver-sb-0" Nov 25 15:12:53 
crc kubenswrapper[4806]: I1125 15:12:53.785875 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2235e648-6ec4-4d98-a879-46f4f56b93e0-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2235e648-6ec4-4d98-a879-46f4f56b93e0\") " pod="openstack/ovsdbserver-sb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.785906 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2235e648-6ec4-4d98-a879-46f4f56b93e0-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"2235e648-6ec4-4d98-a879-46f4f56b93e0\") " pod="openstack/ovsdbserver-sb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.785930 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2npb8\" (UniqueName: \"kubernetes.io/projected/2235e648-6ec4-4d98-a879-46f4f56b93e0-kube-api-access-2npb8\") pod \"ovsdbserver-sb-0\" (UID: \"2235e648-6ec4-4d98-a879-46f4f56b93e0\") " pod="openstack/ovsdbserver-sb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.786020 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2235e648-6ec4-4d98-a879-46f4f56b93e0-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"2235e648-6ec4-4d98-a879-46f4f56b93e0\") " pod="openstack/ovsdbserver-sb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.786082 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2235e648-6ec4-4d98-a879-46f4f56b93e0-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"2235e648-6ec4-4d98-a879-46f4f56b93e0\") " pod="openstack/ovsdbserver-sb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.786106 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2235e648-6ec4-4d98-a879-46f4f56b93e0-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2235e648-6ec4-4d98-a879-46f4f56b93e0\") " pod="openstack/ovsdbserver-sb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.786136 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ba88f5a9-5f0c-427d-9d9d-095eda4c39b5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba88f5a9-5f0c-427d-9d9d-095eda4c39b5\") pod \"ovsdbserver-sb-0\" (UID: \"2235e648-6ec4-4d98-a879-46f4f56b93e0\") " pod="openstack/ovsdbserver-sb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.788084 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2235e648-6ec4-4d98-a879-46f4f56b93e0-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"2235e648-6ec4-4d98-a879-46f4f56b93e0\") " pod="openstack/ovsdbserver-sb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.788493 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2235e648-6ec4-4d98-a879-46f4f56b93e0-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"2235e648-6ec4-4d98-a879-46f4f56b93e0\") " pod="openstack/ovsdbserver-sb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.814169 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2235e648-6ec4-4d98-a879-46f4f56b93e0-config\") pod 
\"ovsdbserver-sb-0\" (UID: \"2235e648-6ec4-4d98-a879-46f4f56b93e0\") " pod="openstack/ovsdbserver-sb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.814883 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2235e648-6ec4-4d98-a879-46f4f56b93e0-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2235e648-6ec4-4d98-a879-46f4f56b93e0\") " pod="openstack/ovsdbserver-sb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.815025 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2235e648-6ec4-4d98-a879-46f4f56b93e0-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2235e648-6ec4-4d98-a879-46f4f56b93e0\") " pod="openstack/ovsdbserver-sb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.815152 4806 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.815074 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2235e648-6ec4-4d98-a879-46f4f56b93e0-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"2235e648-6ec4-4d98-a879-46f4f56b93e0\") " pod="openstack/ovsdbserver-sb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.815210 4806 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ba88f5a9-5f0c-427d-9d9d-095eda4c39b5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba88f5a9-5f0c-427d-9d9d-095eda4c39b5\") pod \"ovsdbserver-sb-0\" (UID: \"2235e648-6ec4-4d98-a879-46f4f56b93e0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/83543accd917c6f684d604f9a96963880f2f4729be51f206c382b8f94f641a9f/globalmount\"" pod="openstack/ovsdbserver-sb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.818293 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2npb8\" (UniqueName: \"kubernetes.io/projected/2235e648-6ec4-4d98-a879-46f4f56b93e0-kube-api-access-2npb8\") pod \"ovsdbserver-sb-0\" (UID: \"2235e648-6ec4-4d98-a879-46f4f56b93e0\") " pod="openstack/ovsdbserver-sb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.858036 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ba88f5a9-5f0c-427d-9d9d-095eda4c39b5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ba88f5a9-5f0c-427d-9d9d-095eda4c39b5\") pod \"ovsdbserver-sb-0\" (UID: \"2235e648-6ec4-4d98-a879-46f4f56b93e0\") " pod="openstack/ovsdbserver-sb-0" Nov 25 15:12:53 crc kubenswrapper[4806]: I1125 15:12:53.922936 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 25 15:12:54 crc kubenswrapper[4806]: I1125 15:12:54.175487 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-dhcsq"] Nov 25 15:12:54 crc kubenswrapper[4806]: I1125 15:12:54.177515 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-dhcsq" Nov 25 15:12:54 crc kubenswrapper[4806]: I1125 15:12:54.190200 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 25 15:12:54 crc kubenswrapper[4806]: I1125 15:12:54.233949 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-dhcsq"] Nov 25 15:12:54 crc kubenswrapper[4806]: I1125 15:12:54.297828 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/cb8eb50b-2bea-43d0-b0b6-698bc3709b1d-ovn-rundir\") pod \"ovn-controller-metrics-dhcsq\" (UID: \"cb8eb50b-2bea-43d0-b0b6-698bc3709b1d\") " pod="openstack/ovn-controller-metrics-dhcsq" Nov 25 15:12:54 crc kubenswrapper[4806]: I1125 15:12:54.297925 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/cb8eb50b-2bea-43d0-b0b6-698bc3709b1d-ovs-rundir\") pod \"ovn-controller-metrics-dhcsq\" (UID: \"cb8eb50b-2bea-43d0-b0b6-698bc3709b1d\") " pod="openstack/ovn-controller-metrics-dhcsq" Nov 25 15:12:54 crc kubenswrapper[4806]: I1125 15:12:54.297982 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb8eb50b-2bea-43d0-b0b6-698bc3709b1d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-dhcsq\" (UID: \"cb8eb50b-2bea-43d0-b0b6-698bc3709b1d\") " pod="openstack/ovn-controller-metrics-dhcsq" Nov 25 15:12:54 crc kubenswrapper[4806]: I1125 15:12:54.298126 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb8eb50b-2bea-43d0-b0b6-698bc3709b1d-combined-ca-bundle\") pod \"ovn-controller-metrics-dhcsq\" (UID: \"cb8eb50b-2bea-43d0-b0b6-698bc3709b1d\") " pod="openstack/ovn-controller-metrics-dhcsq" Nov 25 15:12:54 crc kubenswrapper[4806]: I1125 15:12:54.298522 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmxsq\" (UniqueName: \"kubernetes.io/projected/cb8eb50b-2bea-43d0-b0b6-698bc3709b1d-kube-api-access-tmxsq\") pod \"ovn-controller-metrics-dhcsq\" (UID: \"cb8eb50b-2bea-43d0-b0b6-698bc3709b1d\") " pod="openstack/ovn-controller-metrics-dhcsq" Nov 25 15:12:54 crc kubenswrapper[4806]: I1125 15:12:54.298708 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb8eb50b-2bea-43d0-b0b6-698bc3709b1d-config\") pod \"ovn-controller-metrics-dhcsq\" (UID: \"cb8eb50b-2bea-43d0-b0b6-698bc3709b1d\") " pod="openstack/ovn-controller-metrics-dhcsq" Nov 25 15:12:54 crc kubenswrapper[4806]: I1125 15:12:54.400336 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb8eb50b-2bea-43d0-b0b6-698bc3709b1d-combined-ca-bundle\") pod \"ovn-controller-metrics-dhcsq\" (UID: \"cb8eb50b-2bea-43d0-b0b6-698bc3709b1d\") " pod="openstack/ovn-controller-metrics-dhcsq" Nov 25 15:12:54 crc kubenswrapper[4806]: I1125 15:12:54.400440 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmxsq\" (UniqueName: \"kubernetes.io/projected/cb8eb50b-2bea-43d0-b0b6-698bc3709b1d-kube-api-access-tmxsq\") pod 
\"ovn-controller-metrics-dhcsq\" (UID: \"cb8eb50b-2bea-43d0-b0b6-698bc3709b1d\") " pod="openstack/ovn-controller-metrics-dhcsq" Nov 25 15:12:54 crc kubenswrapper[4806]: I1125 15:12:54.400514 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb8eb50b-2bea-43d0-b0b6-698bc3709b1d-config\") pod \"ovn-controller-metrics-dhcsq\" (UID: \"cb8eb50b-2bea-43d0-b0b6-698bc3709b1d\") " pod="openstack/ovn-controller-metrics-dhcsq" Nov 25 15:12:54 crc kubenswrapper[4806]: I1125 15:12:54.400532 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/cb8eb50b-2bea-43d0-b0b6-698bc3709b1d-ovn-rundir\") pod \"ovn-controller-metrics-dhcsq\" (UID: \"cb8eb50b-2bea-43d0-b0b6-698bc3709b1d\") " pod="openstack/ovn-controller-metrics-dhcsq" Nov 25 15:12:54 crc kubenswrapper[4806]: I1125 15:12:54.400549 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/cb8eb50b-2bea-43d0-b0b6-698bc3709b1d-ovs-rundir\") pod \"ovn-controller-metrics-dhcsq\" (UID: \"cb8eb50b-2bea-43d0-b0b6-698bc3709b1d\") " pod="openstack/ovn-controller-metrics-dhcsq" Nov 25 15:12:54 crc kubenswrapper[4806]: I1125 15:12:54.400577 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb8eb50b-2bea-43d0-b0b6-698bc3709b1d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-dhcsq\" (UID: \"cb8eb50b-2bea-43d0-b0b6-698bc3709b1d\") " pod="openstack/ovn-controller-metrics-dhcsq" Nov 25 15:12:54 crc kubenswrapper[4806]: I1125 15:12:54.401423 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/cb8eb50b-2bea-43d0-b0b6-698bc3709b1d-ovn-rundir\") pod \"ovn-controller-metrics-dhcsq\" (UID: \"cb8eb50b-2bea-43d0-b0b6-698bc3709b1d\") " pod="openstack/ovn-controller-metrics-dhcsq" Nov 25 15:12:54 crc kubenswrapper[4806]: I1125 15:12:54.401547 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/cb8eb50b-2bea-43d0-b0b6-698bc3709b1d-ovs-rundir\") pod \"ovn-controller-metrics-dhcsq\" (UID: \"cb8eb50b-2bea-43d0-b0b6-698bc3709b1d\") " pod="openstack/ovn-controller-metrics-dhcsq" Nov 25 15:12:54 crc kubenswrapper[4806]: I1125 15:12:54.401848 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb8eb50b-2bea-43d0-b0b6-698bc3709b1d-config\") pod \"ovn-controller-metrics-dhcsq\" (UID: \"cb8eb50b-2bea-43d0-b0b6-698bc3709b1d\") " pod="openstack/ovn-controller-metrics-dhcsq" Nov 25 15:12:54 crc kubenswrapper[4806]: I1125 15:12:54.406594 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb8eb50b-2bea-43d0-b0b6-698bc3709b1d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-dhcsq\" (UID: \"cb8eb50b-2bea-43d0-b0b6-698bc3709b1d\") " pod="openstack/ovn-controller-metrics-dhcsq" Nov 25 15:12:54 crc kubenswrapper[4806]: I1125 15:12:54.409262 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb8eb50b-2bea-43d0-b0b6-698bc3709b1d-combined-ca-bundle\") pod \"ovn-controller-metrics-dhcsq\" (UID: \"cb8eb50b-2bea-43d0-b0b6-698bc3709b1d\") " 
pod="openstack/ovn-controller-metrics-dhcsq" Nov 25 15:12:54 crc kubenswrapper[4806]: I1125 15:12:54.444218 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmxsq\" (UniqueName: \"kubernetes.io/projected/cb8eb50b-2bea-43d0-b0b6-698bc3709b1d-kube-api-access-tmxsq\") pod \"ovn-controller-metrics-dhcsq\" (UID: \"cb8eb50b-2bea-43d0-b0b6-698bc3709b1d\") " pod="openstack/ovn-controller-metrics-dhcsq" Nov 25 15:12:54 crc kubenswrapper[4806]: I1125 15:12:54.517331 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-dhcsq" Nov 25 15:12:55 crc kubenswrapper[4806]: I1125 15:12:55.888545 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-distributor-56cd74f89f-bs2h7"] Nov 25 15:12:55 crc kubenswrapper[4806]: I1125 15:12:55.890939 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-56cd74f89f-bs2h7" Nov 25 15:12:55 crc kubenswrapper[4806]: I1125 15:12:55.893559 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-grpc" Nov 25 15:12:55 crc kubenswrapper[4806]: I1125 15:12:55.897806 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-dockercfg-x5qq4" Nov 25 15:12:55 crc kubenswrapper[4806]: I1125 15:12:55.898299 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-http" Nov 25 15:12:55 crc kubenswrapper[4806]: I1125 15:12:55.898350 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca-bundle" Nov 25 15:12:55 crc kubenswrapper[4806]: I1125 15:12:55.898564 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-config" Nov 25 15:12:55 crc kubenswrapper[4806]: I1125 15:12:55.900823 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-56cd74f89f-bs2h7"] Nov 25 15:12:55 crc kubenswrapper[4806]: I1125 15:12:55.935468 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c17fab0-86a8-4e8b-b790-c0a9c91979a3-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-56cd74f89f-bs2h7\" (UID: \"4c17fab0-86a8-4e8b-b790-c0a9c91979a3\") " pod="openstack/cloudkitty-lokistack-distributor-56cd74f89f-bs2h7" Nov 25 15:12:55 crc kubenswrapper[4806]: I1125 15:12:55.935601 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/4c17fab0-86a8-4e8b-b790-c0a9c91979a3-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-56cd74f89f-bs2h7\" (UID: \"4c17fab0-86a8-4e8b-b790-c0a9c91979a3\") " pod="openstack/cloudkitty-lokistack-distributor-56cd74f89f-bs2h7" Nov 25 15:12:55 crc kubenswrapper[4806]: I1125 15:12:55.935740 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfn5m\" (UniqueName: \"kubernetes.io/projected/4c17fab0-86a8-4e8b-b790-c0a9c91979a3-kube-api-access-bfn5m\") pod \"cloudkitty-lokistack-distributor-56cd74f89f-bs2h7\" (UID: \"4c17fab0-86a8-4e8b-b790-c0a9c91979a3\") " pod="openstack/cloudkitty-lokistack-distributor-56cd74f89f-bs2h7" Nov 25 15:12:55 crc 
kubenswrapper[4806]: I1125 15:12:55.935771 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/4c17fab0-86a8-4e8b-b790-c0a9c91979a3-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-56cd74f89f-bs2h7\" (UID: \"4c17fab0-86a8-4e8b-b790-c0a9c91979a3\") " pod="openstack/cloudkitty-lokistack-distributor-56cd74f89f-bs2h7" Nov 25 15:12:55 crc kubenswrapper[4806]: I1125 15:12:55.935840 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c17fab0-86a8-4e8b-b790-c0a9c91979a3-config\") pod \"cloudkitty-lokistack-distributor-56cd74f89f-bs2h7\" (UID: \"4c17fab0-86a8-4e8b-b790-c0a9c91979a3\") " pod="openstack/cloudkitty-lokistack-distributor-56cd74f89f-bs2h7" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.037279 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfn5m\" (UniqueName: \"kubernetes.io/projected/4c17fab0-86a8-4e8b-b790-c0a9c91979a3-kube-api-access-bfn5m\") pod \"cloudkitty-lokistack-distributor-56cd74f89f-bs2h7\" (UID: \"4c17fab0-86a8-4e8b-b790-c0a9c91979a3\") " pod="openstack/cloudkitty-lokistack-distributor-56cd74f89f-bs2h7" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.037359 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/4c17fab0-86a8-4e8b-b790-c0a9c91979a3-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-56cd74f89f-bs2h7\" (UID: \"4c17fab0-86a8-4e8b-b790-c0a9c91979a3\") " pod="openstack/cloudkitty-lokistack-distributor-56cd74f89f-bs2h7" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.037420 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c17fab0-86a8-4e8b-b790-c0a9c91979a3-config\") pod \"cloudkitty-lokistack-distributor-56cd74f89f-bs2h7\" (UID: \"4c17fab0-86a8-4e8b-b790-c0a9c91979a3\") " pod="openstack/cloudkitty-lokistack-distributor-56cd74f89f-bs2h7" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.037508 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c17fab0-86a8-4e8b-b790-c0a9c91979a3-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-56cd74f89f-bs2h7\" (UID: \"4c17fab0-86a8-4e8b-b790-c0a9c91979a3\") " pod="openstack/cloudkitty-lokistack-distributor-56cd74f89f-bs2h7" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.037543 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/4c17fab0-86a8-4e8b-b790-c0a9c91979a3-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-56cd74f89f-bs2h7\" (UID: \"4c17fab0-86a8-4e8b-b790-c0a9c91979a3\") " pod="openstack/cloudkitty-lokistack-distributor-56cd74f89f-bs2h7" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.038940 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c17fab0-86a8-4e8b-b790-c0a9c91979a3-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-56cd74f89f-bs2h7\" (UID: 
\"4c17fab0-86a8-4e8b-b790-c0a9c91979a3\") " pod="openstack/cloudkitty-lokistack-distributor-56cd74f89f-bs2h7" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.039123 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c17fab0-86a8-4e8b-b790-c0a9c91979a3-config\") pod \"cloudkitty-lokistack-distributor-56cd74f89f-bs2h7\" (UID: \"4c17fab0-86a8-4e8b-b790-c0a9c91979a3\") " pod="openstack/cloudkitty-lokistack-distributor-56cd74f89f-bs2h7" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.049123 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/4c17fab0-86a8-4e8b-b790-c0a9c91979a3-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-56cd74f89f-bs2h7\" (UID: \"4c17fab0-86a8-4e8b-b790-c0a9c91979a3\") " pod="openstack/cloudkitty-lokistack-distributor-56cd74f89f-bs2h7" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.049247 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/4c17fab0-86a8-4e8b-b790-c0a9c91979a3-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-56cd74f89f-bs2h7\" (UID: \"4c17fab0-86a8-4e8b-b790-c0a9c91979a3\") " pod="openstack/cloudkitty-lokistack-distributor-56cd74f89f-bs2h7" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.065974 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfn5m\" (UniqueName: \"kubernetes.io/projected/4c17fab0-86a8-4e8b-b790-c0a9c91979a3-kube-api-access-bfn5m\") pod \"cloudkitty-lokistack-distributor-56cd74f89f-bs2h7\" (UID: \"4c17fab0-86a8-4e8b-b790-c0a9c91979a3\") " pod="openstack/cloudkitty-lokistack-distributor-56cd74f89f-bs2h7" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.079229 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-querier-548665d79b-vt8jx"] Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.081245 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-548665d79b-vt8jx" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.084876 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-http" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.085296 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-loki-s3" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.085514 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-grpc" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.129588 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-548665d79b-vt8jx"] Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.140191 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwzrx\" (UniqueName: \"kubernetes.io/projected/39c749dc-99ca-45d4-b49a-3e8925e0230a-kube-api-access-jwzrx\") pod \"cloudkitty-lokistack-querier-548665d79b-vt8jx\" (UID: \"39c749dc-99ca-45d4-b49a-3e8925e0230a\") " pod="openstack/cloudkitty-lokistack-querier-548665d79b-vt8jx" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.141655 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39c749dc-99ca-45d4-b49a-3e8925e0230a-config\") pod \"cloudkitty-lokistack-querier-548665d79b-vt8jx\" (UID: \"39c749dc-99ca-45d4-b49a-3e8925e0230a\") " pod="openstack/cloudkitty-lokistack-querier-548665d79b-vt8jx" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.141969 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/39c749dc-99ca-45d4-b49a-3e8925e0230a-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-548665d79b-vt8jx\" (UID: \"39c749dc-99ca-45d4-b49a-3e8925e0230a\") " pod="openstack/cloudkitty-lokistack-querier-548665d79b-vt8jx" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.142138 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/39c749dc-99ca-45d4-b49a-3e8925e0230a-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-548665d79b-vt8jx\" (UID: \"39c749dc-99ca-45d4-b49a-3e8925e0230a\") " pod="openstack/cloudkitty-lokistack-querier-548665d79b-vt8jx" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.142260 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39c749dc-99ca-45d4-b49a-3e8925e0230a-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-548665d79b-vt8jx\" (UID: \"39c749dc-99ca-45d4-b49a-3e8925e0230a\") " pod="openstack/cloudkitty-lokistack-querier-548665d79b-vt8jx" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.142867 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/39c749dc-99ca-45d4-b49a-3e8925e0230a-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-548665d79b-vt8jx\" (UID: \"39c749dc-99ca-45d4-b49a-3e8925e0230a\") " pod="openstack/cloudkitty-lokistack-querier-548665d79b-vt8jx" Nov 25 
15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.224726 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-56cd74f89f-bs2h7" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.249765 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39c749dc-99ca-45d4-b49a-3e8925e0230a-config\") pod \"cloudkitty-lokistack-querier-548665d79b-vt8jx\" (UID: \"39c749dc-99ca-45d4-b49a-3e8925e0230a\") " pod="openstack/cloudkitty-lokistack-querier-548665d79b-vt8jx" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.249841 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/39c749dc-99ca-45d4-b49a-3e8925e0230a-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-548665d79b-vt8jx\" (UID: \"39c749dc-99ca-45d4-b49a-3e8925e0230a\") " pod="openstack/cloudkitty-lokistack-querier-548665d79b-vt8jx" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.249890 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/39c749dc-99ca-45d4-b49a-3e8925e0230a-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-548665d79b-vt8jx\" (UID: \"39c749dc-99ca-45d4-b49a-3e8925e0230a\") " pod="openstack/cloudkitty-lokistack-querier-548665d79b-vt8jx" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.249915 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39c749dc-99ca-45d4-b49a-3e8925e0230a-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-548665d79b-vt8jx\" (UID: \"39c749dc-99ca-45d4-b49a-3e8925e0230a\") " pod="openstack/cloudkitty-lokistack-querier-548665d79b-vt8jx" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.250004 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/39c749dc-99ca-45d4-b49a-3e8925e0230a-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-548665d79b-vt8jx\" (UID: \"39c749dc-99ca-45d4-b49a-3e8925e0230a\") " pod="openstack/cloudkitty-lokistack-querier-548665d79b-vt8jx" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.250035 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwzrx\" (UniqueName: \"kubernetes.io/projected/39c749dc-99ca-45d4-b49a-3e8925e0230a-kube-api-access-jwzrx\") pod \"cloudkitty-lokistack-querier-548665d79b-vt8jx\" (UID: \"39c749dc-99ca-45d4-b49a-3e8925e0230a\") " pod="openstack/cloudkitty-lokistack-querier-548665d79b-vt8jx" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.251934 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39c749dc-99ca-45d4-b49a-3e8925e0230a-config\") pod \"cloudkitty-lokistack-querier-548665d79b-vt8jx\" (UID: \"39c749dc-99ca-45d4-b49a-3e8925e0230a\") " pod="openstack/cloudkitty-lokistack-querier-548665d79b-vt8jx" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.255722 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-779849886d-mzf6h"] Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.260036 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-779849886d-mzf6h" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.265427 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39c749dc-99ca-45d4-b49a-3e8925e0230a-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-548665d79b-vt8jx\" (UID: \"39c749dc-99ca-45d4-b49a-3e8925e0230a\") " pod="openstack/cloudkitty-lokistack-querier-548665d79b-vt8jx" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.267000 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-grpc" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.271729 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/39c749dc-99ca-45d4-b49a-3e8925e0230a-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-548665d79b-vt8jx\" (UID: \"39c749dc-99ca-45d4-b49a-3e8925e0230a\") " pod="openstack/cloudkitty-lokistack-querier-548665d79b-vt8jx" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.271581 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-http" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.283875 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-779849886d-mzf6h"] Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.291851 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/39c749dc-99ca-45d4-b49a-3e8925e0230a-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-548665d79b-vt8jx\" (UID: \"39c749dc-99ca-45d4-b49a-3e8925e0230a\") " pod="openstack/cloudkitty-lokistack-querier-548665d79b-vt8jx" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.303406 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwzrx\" (UniqueName: \"kubernetes.io/projected/39c749dc-99ca-45d4-b49a-3e8925e0230a-kube-api-access-jwzrx\") pod \"cloudkitty-lokistack-querier-548665d79b-vt8jx\" (UID: \"39c749dc-99ca-45d4-b49a-3e8925e0230a\") " pod="openstack/cloudkitty-lokistack-querier-548665d79b-vt8jx" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.315256 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/39c749dc-99ca-45d4-b49a-3e8925e0230a-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-548665d79b-vt8jx\" (UID: \"39c749dc-99ca-45d4-b49a-3e8925e0230a\") " pod="openstack/cloudkitty-lokistack-querier-548665d79b-vt8jx" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.353097 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7jb6\" (UniqueName: \"kubernetes.io/projected/f0dc94d5-1470-40f4-8969-84c9690164c8-kube-api-access-f7jb6\") pod \"cloudkitty-lokistack-query-frontend-779849886d-mzf6h\" (UID: \"f0dc94d5-1470-40f4-8969-84c9690164c8\") " pod="openstack/cloudkitty-lokistack-query-frontend-779849886d-mzf6h" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.353171 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: 
\"kubernetes.io/secret/f0dc94d5-1470-40f4-8969-84c9690164c8-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-779849886d-mzf6h\" (UID: \"f0dc94d5-1470-40f4-8969-84c9690164c8\") " pod="openstack/cloudkitty-lokistack-query-frontend-779849886d-mzf6h" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.353395 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0dc94d5-1470-40f4-8969-84c9690164c8-config\") pod \"cloudkitty-lokistack-query-frontend-779849886d-mzf6h\" (UID: \"f0dc94d5-1470-40f4-8969-84c9690164c8\") " pod="openstack/cloudkitty-lokistack-query-frontend-779849886d-mzf6h" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.354107 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0dc94d5-1470-40f4-8969-84c9690164c8-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-779849886d-mzf6h\" (UID: \"f0dc94d5-1470-40f4-8969-84c9690164c8\") " pod="openstack/cloudkitty-lokistack-query-frontend-779849886d-mzf6h" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.354231 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/f0dc94d5-1470-40f4-8969-84c9690164c8-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-779849886d-mzf6h\" (UID: \"f0dc94d5-1470-40f4-8969-84c9690164c8\") " pod="openstack/cloudkitty-lokistack-query-frontend-779849886d-mzf6h" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.416618 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-76cc998948-gbg2h"] Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.419485 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-76cc998948-gbg2h" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.424158 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-gateway-ca-bundle" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.424289 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-http" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.424748 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-gateway" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.424803 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.424848 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.424910 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-dockercfg-sblq5" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.425015 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-client-http" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.448212 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-76cc998948-fxwbg"] Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.454848 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-76cc998948-fxwbg" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.456061 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-548665d79b-vt8jx" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.458864 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0dc94d5-1470-40f4-8969-84c9690164c8-config\") pod \"cloudkitty-lokistack-query-frontend-779849886d-mzf6h\" (UID: \"f0dc94d5-1470-40f4-8969-84c9690164c8\") " pod="openstack/cloudkitty-lokistack-query-frontend-779849886d-mzf6h" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.458988 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0dc94d5-1470-40f4-8969-84c9690164c8-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-779849886d-mzf6h\" (UID: \"f0dc94d5-1470-40f4-8969-84c9690164c8\") " pod="openstack/cloudkitty-lokistack-query-frontend-779849886d-mzf6h" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.459095 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/f0dc94d5-1470-40f4-8969-84c9690164c8-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-779849886d-mzf6h\" (UID: \"f0dc94d5-1470-40f4-8969-84c9690164c8\") " pod="openstack/cloudkitty-lokistack-query-frontend-779849886d-mzf6h" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.459159 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7jb6\" (UniqueName: \"kubernetes.io/projected/f0dc94d5-1470-40f4-8969-84c9690164c8-kube-api-access-f7jb6\") pod \"cloudkitty-lokistack-query-frontend-779849886d-mzf6h\" (UID: \"f0dc94d5-1470-40f4-8969-84c9690164c8\") " pod="openstack/cloudkitty-lokistack-query-frontend-779849886d-mzf6h" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.459242 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/f0dc94d5-1470-40f4-8969-84c9690164c8-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-779849886d-mzf6h\" (UID: \"f0dc94d5-1470-40f4-8969-84c9690164c8\") " pod="openstack/cloudkitty-lokistack-query-frontend-779849886d-mzf6h" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.460450 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0dc94d5-1470-40f4-8969-84c9690164c8-config\") pod \"cloudkitty-lokistack-query-frontend-779849886d-mzf6h\" (UID: \"f0dc94d5-1470-40f4-8969-84c9690164c8\") " pod="openstack/cloudkitty-lokistack-query-frontend-779849886d-mzf6h" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.461053 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0dc94d5-1470-40f4-8969-84c9690164c8-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-779849886d-mzf6h\" (UID: \"f0dc94d5-1470-40f4-8969-84c9690164c8\") " pod="openstack/cloudkitty-lokistack-query-frontend-779849886d-mzf6h" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.478774 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: 
\"kubernetes.io/secret/f0dc94d5-1470-40f4-8969-84c9690164c8-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-779849886d-mzf6h\" (UID: \"f0dc94d5-1470-40f4-8969-84c9690164c8\") " pod="openstack/cloudkitty-lokistack-query-frontend-779849886d-mzf6h" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.479709 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-76cc998948-fxwbg"] Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.486148 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/f0dc94d5-1470-40f4-8969-84c9690164c8-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-779849886d-mzf6h\" (UID: \"f0dc94d5-1470-40f4-8969-84c9690164c8\") " pod="openstack/cloudkitty-lokistack-query-frontend-779849886d-mzf6h" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.520778 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7jb6\" (UniqueName: \"kubernetes.io/projected/f0dc94d5-1470-40f4-8969-84c9690164c8-kube-api-access-f7jb6\") pod \"cloudkitty-lokistack-query-frontend-779849886d-mzf6h\" (UID: \"f0dc94d5-1470-40f4-8969-84c9690164c8\") " pod="openstack/cloudkitty-lokistack-query-frontend-779849886d-mzf6h" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.520926 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-76cc998948-gbg2h"] Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.562014 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/1b3c25ba-4426-45b4-8f79-95fd0e07823b-rbac\") pod \"cloudkitty-lokistack-gateway-76cc998948-gbg2h\" (UID: \"1b3c25ba-4426-45b4-8f79-95fd0e07823b\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-gbg2h" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.562094 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbgjd\" (UniqueName: \"kubernetes.io/projected/1b3c25ba-4426-45b4-8f79-95fd0e07823b-kube-api-access-cbgjd\") pod \"cloudkitty-lokistack-gateway-76cc998948-gbg2h\" (UID: \"1b3c25ba-4426-45b4-8f79-95fd0e07823b\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-gbg2h" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.562136 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/1b3c25ba-4426-45b4-8f79-95fd0e07823b-tenants\") pod \"cloudkitty-lokistack-gateway-76cc998948-gbg2h\" (UID: \"1b3c25ba-4426-45b4-8f79-95fd0e07823b\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-gbg2h" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.562203 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b3c25ba-4426-45b4-8f79-95fd0e07823b-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-76cc998948-gbg2h\" (UID: \"1b3c25ba-4426-45b4-8f79-95fd0e07823b\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-gbg2h" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.562229 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: 
\"kubernetes.io/secret/1b3c25ba-4426-45b4-8f79-95fd0e07823b-tls-secret\") pod \"cloudkitty-lokistack-gateway-76cc998948-gbg2h\" (UID: \"1b3c25ba-4426-45b4-8f79-95fd0e07823b\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-gbg2h" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.562275 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/a1a1861d-9755-4f0b-8644-37e0e35584e1-tls-secret\") pod \"cloudkitty-lokistack-gateway-76cc998948-fxwbg\" (UID: \"a1a1861d-9755-4f0b-8644-37e0e35584e1\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-fxwbg" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.562300 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/1b3c25ba-4426-45b4-8f79-95fd0e07823b-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-76cc998948-gbg2h\" (UID: \"1b3c25ba-4426-45b4-8f79-95fd0e07823b\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-gbg2h" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.562334 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b3c25ba-4426-45b4-8f79-95fd0e07823b-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-76cc998948-gbg2h\" (UID: \"1b3c25ba-4426-45b4-8f79-95fd0e07823b\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-gbg2h" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.562354 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/1b3c25ba-4426-45b4-8f79-95fd0e07823b-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-76cc998948-gbg2h\" (UID: \"1b3c25ba-4426-45b4-8f79-95fd0e07823b\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-gbg2h" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.562378 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b3c25ba-4426-45b4-8f79-95fd0e07823b-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-76cc998948-gbg2h\" (UID: \"1b3c25ba-4426-45b4-8f79-95fd0e07823b\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-gbg2h" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.562399 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/a1a1861d-9755-4f0b-8644-37e0e35584e1-rbac\") pod \"cloudkitty-lokistack-gateway-76cc998948-fxwbg\" (UID: \"a1a1861d-9755-4f0b-8644-37e0e35584e1\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-fxwbg" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.562441 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1a1861d-9755-4f0b-8644-37e0e35584e1-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-76cc998948-fxwbg\" (UID: \"a1a1861d-9755-4f0b-8644-37e0e35584e1\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-fxwbg" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.562468 4806 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/a1a1861d-9755-4f0b-8644-37e0e35584e1-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-76cc998948-fxwbg\" (UID: \"a1a1861d-9755-4f0b-8644-37e0e35584e1\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-fxwbg" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.562532 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljp4w\" (UniqueName: \"kubernetes.io/projected/a1a1861d-9755-4f0b-8644-37e0e35584e1-kube-api-access-ljp4w\") pod \"cloudkitty-lokistack-gateway-76cc998948-fxwbg\" (UID: \"a1a1861d-9755-4f0b-8644-37e0e35584e1\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-fxwbg" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.562550 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/a1a1861d-9755-4f0b-8644-37e0e35584e1-tenants\") pod \"cloudkitty-lokistack-gateway-76cc998948-fxwbg\" (UID: \"a1a1861d-9755-4f0b-8644-37e0e35584e1\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-fxwbg" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.562575 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1a1861d-9755-4f0b-8644-37e0e35584e1-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-76cc998948-fxwbg\" (UID: \"a1a1861d-9755-4f0b-8644-37e0e35584e1\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-fxwbg" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.562594 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1a1861d-9755-4f0b-8644-37e0e35584e1-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-76cc998948-fxwbg\" (UID: \"a1a1861d-9755-4f0b-8644-37e0e35584e1\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-fxwbg" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.562621 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/a1a1861d-9755-4f0b-8644-37e0e35584e1-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-76cc998948-fxwbg\" (UID: \"a1a1861d-9755-4f0b-8644-37e0e35584e1\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-fxwbg" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.664545 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/1b3c25ba-4426-45b4-8f79-95fd0e07823b-rbac\") pod \"cloudkitty-lokistack-gateway-76cc998948-gbg2h\" (UID: \"1b3c25ba-4426-45b4-8f79-95fd0e07823b\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-gbg2h" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.664602 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbgjd\" (UniqueName: \"kubernetes.io/projected/1b3c25ba-4426-45b4-8f79-95fd0e07823b-kube-api-access-cbgjd\") pod \"cloudkitty-lokistack-gateway-76cc998948-gbg2h\" (UID: \"1b3c25ba-4426-45b4-8f79-95fd0e07823b\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-gbg2h" Nov 25 15:12:56 crc 
Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.664634 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/1b3c25ba-4426-45b4-8f79-95fd0e07823b-tenants\") pod \"cloudkitty-lokistack-gateway-76cc998948-gbg2h\" (UID: \"1b3c25ba-4426-45b4-8f79-95fd0e07823b\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-gbg2h"
Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.664654 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/1b3c25ba-4426-45b4-8f79-95fd0e07823b-tls-secret\") pod \"cloudkitty-lokistack-gateway-76cc998948-gbg2h\" (UID: \"1b3c25ba-4426-45b4-8f79-95fd0e07823b\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-gbg2h"
Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.664672 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b3c25ba-4426-45b4-8f79-95fd0e07823b-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-76cc998948-gbg2h\" (UID: \"1b3c25ba-4426-45b4-8f79-95fd0e07823b\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-gbg2h"
Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.664714 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/a1a1861d-9755-4f0b-8644-37e0e35584e1-tls-secret\") pod \"cloudkitty-lokistack-gateway-76cc998948-fxwbg\" (UID: \"a1a1861d-9755-4f0b-8644-37e0e35584e1\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-fxwbg"
Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.664732 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/1b3c25ba-4426-45b4-8f79-95fd0e07823b-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-76cc998948-gbg2h\" (UID: \"1b3c25ba-4426-45b4-8f79-95fd0e07823b\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-gbg2h"
Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.664749 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b3c25ba-4426-45b4-8f79-95fd0e07823b-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-76cc998948-gbg2h\" (UID: \"1b3c25ba-4426-45b4-8f79-95fd0e07823b\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-gbg2h"
Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.664766 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/1b3c25ba-4426-45b4-8f79-95fd0e07823b-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-76cc998948-gbg2h\" (UID: \"1b3c25ba-4426-45b4-8f79-95fd0e07823b\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-gbg2h"
Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.664782 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/a1a1861d-9755-4f0b-8644-37e0e35584e1-rbac\") pod \"cloudkitty-lokistack-gateway-76cc998948-fxwbg\" (UID: \"a1a1861d-9755-4f0b-8644-37e0e35584e1\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-fxwbg"
Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.664799 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b3c25ba-4426-45b4-8f79-95fd0e07823b-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-76cc998948-gbg2h\" (UID: \"1b3c25ba-4426-45b4-8f79-95fd0e07823b\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-gbg2h"
Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.664830 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1a1861d-9755-4f0b-8644-37e0e35584e1-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-76cc998948-fxwbg\" (UID: \"a1a1861d-9755-4f0b-8644-37e0e35584e1\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-fxwbg"
Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.664850 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/a1a1861d-9755-4f0b-8644-37e0e35584e1-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-76cc998948-fxwbg\" (UID: \"a1a1861d-9755-4f0b-8644-37e0e35584e1\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-fxwbg"
Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.664896 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljp4w\" (UniqueName: \"kubernetes.io/projected/a1a1861d-9755-4f0b-8644-37e0e35584e1-kube-api-access-ljp4w\") pod \"cloudkitty-lokistack-gateway-76cc998948-fxwbg\" (UID: \"a1a1861d-9755-4f0b-8644-37e0e35584e1\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-fxwbg"
Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.664912 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/a1a1861d-9755-4f0b-8644-37e0e35584e1-tenants\") pod \"cloudkitty-lokistack-gateway-76cc998948-fxwbg\" (UID: \"a1a1861d-9755-4f0b-8644-37e0e35584e1\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-fxwbg"
Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.664930 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1a1861d-9755-4f0b-8644-37e0e35584e1-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-76cc998948-fxwbg\" (UID: \"a1a1861d-9755-4f0b-8644-37e0e35584e1\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-fxwbg"
Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.664947 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1a1861d-9755-4f0b-8644-37e0e35584e1-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-76cc998948-fxwbg\" (UID: \"a1a1861d-9755-4f0b-8644-37e0e35584e1\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-fxwbg"
Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.664968 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/a1a1861d-9755-4f0b-8644-37e0e35584e1-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-76cc998948-fxwbg\" (UID: \"a1a1861d-9755-4f0b-8644-37e0e35584e1\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-fxwbg"
Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.666716 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/1b3c25ba-4426-45b4-8f79-95fd0e07823b-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-76cc998948-gbg2h\" (UID: \"1b3c25ba-4426-45b4-8f79-95fd0e07823b\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-gbg2h"
Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.666808 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1a1861d-9755-4f0b-8644-37e0e35584e1-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-76cc998948-fxwbg\" (UID: \"a1a1861d-9755-4f0b-8644-37e0e35584e1\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-fxwbg"
Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.666838 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b3c25ba-4426-45b4-8f79-95fd0e07823b-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-76cc998948-gbg2h\" (UID: \"1b3c25ba-4426-45b4-8f79-95fd0e07823b\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-gbg2h"
Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.667009 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/1b3c25ba-4426-45b4-8f79-95fd0e07823b-rbac\") pod \"cloudkitty-lokistack-gateway-76cc998948-gbg2h\" (UID: \"1b3c25ba-4426-45b4-8f79-95fd0e07823b\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-gbg2h"
Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.667331 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b3c25ba-4426-45b4-8f79-95fd0e07823b-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-76cc998948-gbg2h\" (UID: \"1b3c25ba-4426-45b4-8f79-95fd0e07823b\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-gbg2h"
Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.667504 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/a1a1861d-9755-4f0b-8644-37e0e35584e1-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-76cc998948-fxwbg\" (UID: \"a1a1861d-9755-4f0b-8644-37e0e35584e1\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-fxwbg"
Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.667603 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1a1861d-9755-4f0b-8644-37e0e35584e1-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-76cc998948-fxwbg\" (UID: \"a1a1861d-9755-4f0b-8644-37e0e35584e1\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-fxwbg"
Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.667878 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/a1a1861d-9755-4f0b-8644-37e0e35584e1-rbac\") pod \"cloudkitty-lokistack-gateway-76cc998948-fxwbg\" (UID: \"a1a1861d-9755-4f0b-8644-37e0e35584e1\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-fxwbg"
Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.667973 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b3c25ba-4426-45b4-8f79-95fd0e07823b-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-76cc998948-gbg2h\" (UID: \"1b3c25ba-4426-45b4-8f79-95fd0e07823b\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-gbg2h"
Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.668799 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1a1861d-9755-4f0b-8644-37e0e35584e1-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-76cc998948-fxwbg\" (UID: \"a1a1861d-9755-4f0b-8644-37e0e35584e1\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-fxwbg"
Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.670361 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/a1a1861d-9755-4f0b-8644-37e0e35584e1-tls-secret\") pod \"cloudkitty-lokistack-gateway-76cc998948-fxwbg\" (UID: \"a1a1861d-9755-4f0b-8644-37e0e35584e1\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-fxwbg"
Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.670810 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/1b3c25ba-4426-45b4-8f79-95fd0e07823b-tls-secret\") pod \"cloudkitty-lokistack-gateway-76cc998948-gbg2h\" (UID: \"1b3c25ba-4426-45b4-8f79-95fd0e07823b\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-gbg2h"
Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.671274 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/a1a1861d-9755-4f0b-8644-37e0e35584e1-tenants\") pod \"cloudkitty-lokistack-gateway-76cc998948-fxwbg\" (UID: \"a1a1861d-9755-4f0b-8644-37e0e35584e1\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-fxwbg"
Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.671298 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/1b3c25ba-4426-45b4-8f79-95fd0e07823b-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-76cc998948-gbg2h\" (UID: \"1b3c25ba-4426-45b4-8f79-95fd0e07823b\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-gbg2h"
Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.672542 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/1b3c25ba-4426-45b4-8f79-95fd0e07823b-tenants\") pod \"cloudkitty-lokistack-gateway-76cc998948-gbg2h\" (UID: \"1b3c25ba-4426-45b4-8f79-95fd0e07823b\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-gbg2h"
Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.679277 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/a1a1861d-9755-4f0b-8644-37e0e35584e1-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-76cc998948-fxwbg\" (UID: \"a1a1861d-9755-4f0b-8644-37e0e35584e1\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-fxwbg"
\"1b3c25ba-4426-45b4-8f79-95fd0e07823b\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-gbg2h" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.686141 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljp4w\" (UniqueName: \"kubernetes.io/projected/a1a1861d-9755-4f0b-8644-37e0e35584e1-kube-api-access-ljp4w\") pod \"cloudkitty-lokistack-gateway-76cc998948-fxwbg\" (UID: \"a1a1861d-9755-4f0b-8644-37e0e35584e1\") " pod="openstack/cloudkitty-lokistack-gateway-76cc998948-fxwbg" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.695375 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-779849886d-mzf6h" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.748902 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-76cc998948-gbg2h" Nov 25 15:12:56 crc kubenswrapper[4806]: I1125 15:12:56.834737 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-76cc998948-fxwbg" Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.027555 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.034684 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0" Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.037078 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-http" Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.037333 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-grpc" Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.042999 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.176469 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdc49832-6f51-4954-ab25-3f84f6956d1f-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"cdc49832-6f51-4954-ab25-3f84f6956d1f\") " pod="openstack/cloudkitty-lokistack-ingester-0" Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.176584 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"cdc49832-6f51-4954-ab25-3f84f6956d1f\") " pod="openstack/cloudkitty-lokistack-ingester-0" Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.176626 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm9cc\" (UniqueName: \"kubernetes.io/projected/cdc49832-6f51-4954-ab25-3f84f6956d1f-kube-api-access-qm9cc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"cdc49832-6f51-4954-ab25-3f84f6956d1f\") " pod="openstack/cloudkitty-lokistack-ingester-0" Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.176716 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/cdc49832-6f51-4954-ab25-3f84f6956d1f-cloudkitty-lokistack-ingester-http\") pod 
\"cloudkitty-lokistack-ingester-0\" (UID: \"cdc49832-6f51-4954-ab25-3f84f6956d1f\") " pod="openstack/cloudkitty-lokistack-ingester-0" Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.176777 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"cdc49832-6f51-4954-ab25-3f84f6956d1f\") " pod="openstack/cloudkitty-lokistack-ingester-0" Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.176801 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/cdc49832-6f51-4954-ab25-3f84f6956d1f-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"cdc49832-6f51-4954-ab25-3f84f6956d1f\") " pod="openstack/cloudkitty-lokistack-ingester-0" Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.176846 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/cdc49832-6f51-4954-ab25-3f84f6956d1f-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"cdc49832-6f51-4954-ab25-3f84f6956d1f\") " pod="openstack/cloudkitty-lokistack-ingester-0" Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.176869 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cdc49832-6f51-4954-ab25-3f84f6956d1f-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"cdc49832-6f51-4954-ab25-3f84f6956d1f\") " pod="openstack/cloudkitty-lokistack-ingester-0" Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.179755 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.183953 4806 util.go:30] "No sandbox for pod can be found. 
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.183953 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.190524 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-grpc"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.190849 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-http"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.192471 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"]
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.278976 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6ecb712-3cf0-4cd4-b823-0ffd452437ce-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"b6ecb712-3cf0-4cd4-b823-0ffd452437ce\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.279073 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdc49832-6f51-4954-ab25-3f84f6956d1f-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"cdc49832-6f51-4954-ab25-3f84f6956d1f\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.279243 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6ecb712-3cf0-4cd4-b823-0ffd452437ce-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"b6ecb712-3cf0-4cd4-b823-0ffd452437ce\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.279496 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/b6ecb712-3cf0-4cd4-b823-0ffd452437ce-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"b6ecb712-3cf0-4cd4-b823-0ffd452437ce\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.279553 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"cdc49832-6f51-4954-ab25-3f84f6956d1f\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.279657 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm9cc\" (UniqueName: \"kubernetes.io/projected/cdc49832-6f51-4954-ab25-3f84f6956d1f-kube-api-access-qm9cc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"cdc49832-6f51-4954-ab25-3f84f6956d1f\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.279959 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdc49832-6f51-4954-ab25-3f84f6956d1f-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"cdc49832-6f51-4954-ab25-3f84f6956d1f\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.280269 4806 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"cdc49832-6f51-4954-ab25-3f84f6956d1f\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/cloudkitty-lokistack-ingester-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.281281 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/cdc49832-6f51-4954-ab25-3f84f6956d1f-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"cdc49832-6f51-4954-ab25-3f84f6956d1f\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.281351 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/b6ecb712-3cf0-4cd4-b823-0ffd452437ce-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"b6ecb712-3cf0-4cd4-b823-0ffd452437ce\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.281542 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"cdc49832-6f51-4954-ab25-3f84f6956d1f\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.281648 4806 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"cdc49832-6f51-4954-ab25-3f84f6956d1f\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/cloudkitty-lokistack-ingester-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.281653 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/cdc49832-6f51-4954-ab25-3f84f6956d1f-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"cdc49832-6f51-4954-ab25-3f84f6956d1f\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.281741 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsn6d\" (UniqueName: \"kubernetes.io/projected/b6ecb712-3cf0-4cd4-b823-0ffd452437ce-kube-api-access-lsn6d\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"b6ecb712-3cf0-4cd4-b823-0ffd452437ce\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.281778 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/cdc49832-6f51-4954-ab25-3f84f6956d1f-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"cdc49832-6f51-4954-ab25-3f84f6956d1f\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.281860 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cdc49832-6f51-4954-ab25-3f84f6956d1f-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"cdc49832-6f51-4954-ab25-3f84f6956d1f\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.281945 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"b6ecb712-3cf0-4cd4-b823-0ffd452437ce\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.282033 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/b6ecb712-3cf0-4cd4-b823-0ffd452437ce-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"b6ecb712-3cf0-4cd4-b823-0ffd452437ce\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.283337 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cdc49832-6f51-4954-ab25-3f84f6956d1f-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"cdc49832-6f51-4954-ab25-3f84f6956d1f\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.286146 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/cdc49832-6f51-4954-ab25-3f84f6956d1f-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"cdc49832-6f51-4954-ab25-3f84f6956d1f\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.288918 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/cdc49832-6f51-4954-ab25-3f84f6956d1f-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"cdc49832-6f51-4954-ab25-3f84f6956d1f\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.293683 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/cdc49832-6f51-4954-ab25-3f84f6956d1f-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"cdc49832-6f51-4954-ab25-3f84f6956d1f\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.302635 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm9cc\" (UniqueName: \"kubernetes.io/projected/cdc49832-6f51-4954-ab25-3f84f6956d1f-kube-api-access-qm9cc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"cdc49832-6f51-4954-ab25-3f84f6956d1f\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.321582 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"cdc49832-6f51-4954-ab25-3f84f6956d1f\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.338966 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"]
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.341564 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.343956 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-http"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.344789 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"cdc49832-6f51-4954-ab25-3f84f6956d1f\") " pod="openstack/cloudkitty-lokistack-ingester-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.345112 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-grpc"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.355855 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"]
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.384697 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b61e9f82-3559-4710-8b06-4bc2c5997224-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"b61e9f82-3559-4710-8b06-4bc2c5997224\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.384789 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6ecb712-3cf0-4cd4-b823-0ffd452437ce-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"b6ecb712-3cf0-4cd4-b823-0ffd452437ce\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.384860 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/b61e9f82-3559-4710-8b06-4bc2c5997224-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"b61e9f82-3559-4710-8b06-4bc2c5997224\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.384890 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"b61e9f82-3559-4710-8b06-4bc2c5997224\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.385225 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6ecb712-3cf0-4cd4-b823-0ffd452437ce-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"b6ecb712-3cf0-4cd4-b823-0ffd452437ce\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.385451 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/b61e9f82-3559-4710-8b06-4bc2c5997224-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"b61e9f82-3559-4710-8b06-4bc2c5997224\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.385518 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/b6ecb712-3cf0-4cd4-b823-0ffd452437ce-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"b6ecb712-3cf0-4cd4-b823-0ffd452437ce\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.385858 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/b61e9f82-3559-4710-8b06-4bc2c5997224-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"b61e9f82-3559-4710-8b06-4bc2c5997224\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.385993 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b61e9f82-3559-4710-8b06-4bc2c5997224-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"b61e9f82-3559-4710-8b06-4bc2c5997224\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.386114 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/b6ecb712-3cf0-4cd4-b823-0ffd452437ce-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"b6ecb712-3cf0-4cd4-b823-0ffd452437ce\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.386163 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6ecb712-3cf0-4cd4-b823-0ffd452437ce-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"b6ecb712-3cf0-4cd4-b823-0ffd452437ce\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.386226 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsn6d\" (UniqueName: \"kubernetes.io/projected/b6ecb712-3cf0-4cd4-b823-0ffd452437ce-kube-api-access-lsn6d\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"b6ecb712-3cf0-4cd4-b823-0ffd452437ce\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.386287 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"b6ecb712-3cf0-4cd4-b823-0ffd452437ce\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.386352 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-698qm\" (UniqueName: \"kubernetes.io/projected/b61e9f82-3559-4710-8b06-4bc2c5997224-kube-api-access-698qm\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"b61e9f82-3559-4710-8b06-4bc2c5997224\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.386430 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/b6ecb712-3cf0-4cd4-b823-0ffd452437ce-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"b6ecb712-3cf0-4cd4-b823-0ffd452437ce\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.386806 4806 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"b6ecb712-3cf0-4cd4-b823-0ffd452437ce\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/cloudkitty-lokistack-compactor-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.388644 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6ecb712-3cf0-4cd4-b823-0ffd452437ce-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"b6ecb712-3cf0-4cd4-b823-0ffd452437ce\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.389764 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/b6ecb712-3cf0-4cd4-b823-0ffd452437ce-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"b6ecb712-3cf0-4cd4-b823-0ffd452437ce\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.389784 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/b6ecb712-3cf0-4cd4-b823-0ffd452437ce-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"b6ecb712-3cf0-4cd4-b823-0ffd452437ce\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.393649 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/b6ecb712-3cf0-4cd4-b823-0ffd452437ce-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"b6ecb712-3cf0-4cd4-b823-0ffd452437ce\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.408466 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsn6d\" (UniqueName: \"kubernetes.io/projected/b6ecb712-3cf0-4cd4-b823-0ffd452437ce-kube-api-access-lsn6d\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"b6ecb712-3cf0-4cd4-b823-0ffd452437ce\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.420667 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.423271 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"b6ecb712-3cf0-4cd4-b823-0ffd452437ce\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.488947 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/b61e9f82-3559-4710-8b06-4bc2c5997224-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"b61e9f82-3559-4710-8b06-4bc2c5997224\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.489452 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/b61e9f82-3559-4710-8b06-4bc2c5997224-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"b61e9f82-3559-4710-8b06-4bc2c5997224\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.489645 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b61e9f82-3559-4710-8b06-4bc2c5997224-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"b61e9f82-3559-4710-8b06-4bc2c5997224\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.489904 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-698qm\" (UniqueName: \"kubernetes.io/projected/b61e9f82-3559-4710-8b06-4bc2c5997224-kube-api-access-698qm\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"b61e9f82-3559-4710-8b06-4bc2c5997224\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.490055 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b61e9f82-3559-4710-8b06-4bc2c5997224-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"b61e9f82-3559-4710-8b06-4bc2c5997224\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.490195 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/b61e9f82-3559-4710-8b06-4bc2c5997224-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"b61e9f82-3559-4710-8b06-4bc2c5997224\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.490261 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"b61e9f82-3559-4710-8b06-4bc2c5997224\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.490587 4806 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"b61e9f82-3559-4710-8b06-4bc2c5997224\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/cloudkitty-lokistack-index-gateway-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.490831 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b61e9f82-3559-4710-8b06-4bc2c5997224-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"b61e9f82-3559-4710-8b06-4bc2c5997224\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.493563 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/b61e9f82-3559-4710-8b06-4bc2c5997224-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"b61e9f82-3559-4710-8b06-4bc2c5997224\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.494419 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b61e9f82-3559-4710-8b06-4bc2c5997224-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"b61e9f82-3559-4710-8b06-4bc2c5997224\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.495709 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/b61e9f82-3559-4710-8b06-4bc2c5997224-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"b61e9f82-3559-4710-8b06-4bc2c5997224\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.496694 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/b61e9f82-3559-4710-8b06-4bc2c5997224-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"b61e9f82-3559-4710-8b06-4bc2c5997224\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.511980 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-698qm\" (UniqueName: \"kubernetes.io/projected/b61e9f82-3559-4710-8b06-4bc2c5997224-kube-api-access-698qm\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"b61e9f82-3559-4710-8b06-4bc2c5997224\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.512495 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.517619 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"b61e9f82-3559-4710-8b06-4bc2c5997224\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Nov 25 15:12:57 crc kubenswrapper[4806]: I1125 15:12:57.755663 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0"
Nov 25 15:12:58 crc kubenswrapper[4806]: W1125 15:12:58.032773 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ebac08b_471e_4b28_98fb_b9bab2e3f505.slice/crio-7bd70a9cecec22d873a74f3dd1046c89739f78eaccb8e3af0f3e6f8a3702b51a WatchSource:0}: Error finding container 7bd70a9cecec22d873a74f3dd1046c89739f78eaccb8e3af0f3e6f8a3702b51a: Status 404 returned error can't find the container with id 7bd70a9cecec22d873a74f3dd1046c89739f78eaccb8e3af0f3e6f8a3702b51a
Nov 25 15:12:59 crc kubenswrapper[4806]: I1125 15:12:59.029069 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-svmbm" event={"ID":"0ebac08b-471e-4b28-98fb-b9bab2e3f505","Type":"ContainerStarted","Data":"7bd70a9cecec22d873a74f3dd1046c89739f78eaccb8e3af0f3e6f8a3702b51a"}
Nov 25 15:13:05 crc kubenswrapper[4806]: E1125 15:13:05.900959 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified"
Nov 25 15:13:05 crc kubenswrapper[4806]: E1125 15:13:05.901892 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zgxng,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(fc946fac-46fb-45c0-8a69-2e481bf9d947): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
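"PullImage from image service failed ... rpc error: code = Canceled" is a gRPC status returned over the CRI socket; the kubelet records it and surfaces the failure on the pod as ErrImagePull. A small, hedged example of unpacking such a status in Go:

```go
// CRI calls travel over gRPC, so pull failures arrive as gRPC status
// values; code Canceled here means the transfer was aborted midway
// ("copying config: context canceled").
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

func main() {
	// Reconstruct the error shape seen in the log above.
	err := status.Error(codes.Canceled, "copying config: context canceled")

	if s, ok := status.FromError(err); ok {
		fmt.Printf("pull failed: code=%s desc=%q -> surfaced as ErrImagePull\n",
			s.Code(), s.Message())
	}
}
```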
logger="UnhandledError" Nov 25 15:13:05 crc kubenswrapper[4806]: E1125 15:13:05.904514 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="fc946fac-46fb-45c0-8a69-2e481bf9d947" Nov 25 15:13:06 crc kubenswrapper[4806]: E1125 15:13:06.053504 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Nov 25 15:13:06 crc kubenswrapper[4806]: E1125 15:13:06.054087 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sh2dv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(0c667706-daaf-4283-9ebb-64bae95b4914): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 15:13:06 crc kubenswrapper[4806]: E1125 15:13:06.056143 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="0c667706-daaf-4283-9ebb-64bae95b4914" Nov 25 15:13:06 crc kubenswrapper[4806]: E1125 15:13:06.102071 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="0c667706-daaf-4283-9ebb-64bae95b4914" Nov 25 15:13:06 crc kubenswrapper[4806]: E1125 15:13:06.105970 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="fc946fac-46fb-45c0-8a69-2e481bf9d947" Nov 25 15:13:06 crc kubenswrapper[4806]: E1125 15:13:06.674269 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Nov 25 15:13:06 crc kubenswrapper[4806]: E1125 15:13:06.674553 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n5f6h64bhf9h59bh5c9h5fhbfh644hdh589h68ch58fh5b6hddh557h556h75h66h5bh6bh58hd5h554h5c9h54fhdh67h5fdh64bh59ch5bh97q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lrh2t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 
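Note how the reason for the same pods flips between entries: the first sync failure is reported as ErrImagePull, and subsequent syncs are throttled and reported as ImagePullBackOff rather than retried immediately. The back-off doubles per failure up to a cap (around five minutes for image pulls, though the exact constants vary by kubelet version). A sketch of that schedule:

```go
// Doubling back-off of the kind behind "Back-off pulling image ...";
// initial delay and cap are assumptions consistent with common
// kubelet defaults, not values read from this node's config.
package main

import (
	"fmt"
	"time"
)

func backoffSchedule(initial, max time.Duration, attempts int) []time.Duration {
	out := make([]time.Duration, 0, attempts)
	d := initial
	for i := 0; i < attempts; i++ {
		out = append(out, d)
		if d *= 2; d > max {
			d = max
		}
	}
	return out
}

func main() {
	// e.g. 10s 20s 40s 1m20s 2m40s 5m 5m
	fmt.Println(backoffSchedule(10*time.Second, 5*time.Minute, 7))
}
```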
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(31cd92ea-0a03-4883-9d96-532a9d5c3bd0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 15:13:06 crc kubenswrapper[4806]: E1125 15:13:06.675988 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="31cd92ea-0a03-4883-9d96-532a9d5c3bd0" Nov 25 15:13:07 crc kubenswrapper[4806]: E1125 15:13:07.111486 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="31cd92ea-0a03-4883-9d96-532a9d5c3bd0" Nov 25 15:13:11 crc kubenswrapper[4806]: I1125 15:13:11.483850 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 25 15:13:15 crc kubenswrapper[4806]: E1125 15:13:15.175560 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified" Nov 25 15:13:15 crc kubenswrapper[4806]: E1125 15:13:15.176102 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:ovsdb-server-init,Image:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,Command:[/usr/local/bin/container-scripts/init-ovsdb-server.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n6h6dh97h94h676h66h576h5c4h56dh54dh6dh548h548h579h5bh55bh667h6fh5c4h547h69h64bh65dh9chfbh564h54dh5b5h5fh689hdfh58q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-ovs,ReadOnly:false,MountPath:/etc/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log,ReadOnly:false,MountPath:/var/log/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib,ReadOnly:false,MountPath:/var/lib/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hrjn9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-ovs-svmbm_openstack(0ebac08b-471e-4b28-98fb-b9bab2e3f505): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 15:13:15 crc kubenswrapper[4806]: E1125 15:13:15.177491 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-ovs-svmbm" podUID="0ebac08b-471e-4b28-98fb-b9bab2e3f505" Nov 25 15:13:15 crc kubenswrapper[4806]: E1125 15:13:15.285441 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified\\\"\"" pod="openstack/ovn-controller-ovs-svmbm" podUID="0ebac08b-471e-4b28-98fb-b9bab2e3f505" Nov 25 15:13:16 crc kubenswrapper[4806]: W1125 15:13:16.221768 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec42948f_25cf_4ae0_8553_dfd5dcc43021.slice/crio-4f05c5344f09be201ca85deb279d70c8e5bf55f109f9251987dbf126b6ef27c7 WatchSource:0}: Error finding container 4f05c5344f09be201ca85deb279d70c8e5bf55f109f9251987dbf126b6ef27c7: Status 404 returned error can't find the container with id 4f05c5344f09be201ca85deb279d70c8e5bf55f109f9251987dbf126b6ef27c7 Nov 25 15:13:16 crc 
kubenswrapper[4806]: E1125 15:13:16.261506 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 25 15:13:16 crc kubenswrapper[4806]: E1125 15:13:16.261756 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w8jnt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-mn6ms_openstack(64d9b559-93b6-4a15-a497-a7caf051dabc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 15:13:16 crc kubenswrapper[4806]: E1125 15:13:16.263503 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-mn6ms" podUID="64d9b559-93b6-4a15-a497-a7caf051dabc" Nov 25 15:13:16 crc kubenswrapper[4806]: E1125 15:13:16.316150 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 25 15:13:16 crc kubenswrapper[4806]: E1125 15:13:16.317048 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x7z9j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-4hs22_openstack(7d59b85b-a8d5-4451-aad3-6d53ba2798a4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 15:13:16 crc kubenswrapper[4806]: E1125 15:13:16.318487 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-4hs22" podUID="7d59b85b-a8d5-4451-aad3-6d53ba2798a4" Nov 25 15:13:16 crc kubenswrapper[4806]: E1125 15:13:16.397675 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 25 15:13:16 crc kubenswrapper[4806]: E1125 15:13:16.397882 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w2vkn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-njrj8_openstack(3b99dd44-ae01-4f09-975a-77eb055e4813): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 15:13:16 crc kubenswrapper[4806]: E1125 15:13:16.399043 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-njrj8" podUID="3b99dd44-ae01-4f09-975a-77eb055e4813" Nov 25 15:13:16 crc kubenswrapper[4806]: I1125 15:13:16.670052 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Nov 25 15:13:16 crc kubenswrapper[4806]: I1125 15:13:16.845567 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Nov 25 15:13:17 crc kubenswrapper[4806]: E1125 15:13:17.054586 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified" Nov 25 15:13:17 crc kubenswrapper[4806]: E1125 15:13:17.054950 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key 
--ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n6h6dh97h94h676h66h576h5c4h56dh54dh6dh548h548h579h5bh55bh667h6fh5c4h547h69h64bh65dh9chfbh564h54dh5b5h5fh689hdfh58q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k7dsp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-l6mv2_openstack(c90d07c6-4f04-48d1-ae1f-bb15f60ba44b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 15:13:17 crc kubenswrapper[4806]: E1125 15:13:17.056437 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-l6mv2" podUID="c90d07c6-4f04-48d1-ae1f-bb15f60ba44b" Nov 25 15:13:17 crc kubenswrapper[4806]: I1125 15:13:17.208070 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ec42948f-25cf-4ae0-8553-dfd5dcc43021","Type":"ContainerStarted","Data":"4f05c5344f09be201ca85deb279d70c8e5bf55f109f9251987dbf126b6ef27c7"} Nov 25 15:13:17 crc kubenswrapper[4806]: E1125 15:13:17.210864 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-mn6ms" podUID="64d9b559-93b6-4a15-a497-a7caf051dabc" Nov 25 15:13:17 crc kubenswrapper[4806]: E1125 15:13:17.210969 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-njrj8" podUID="3b99dd44-ae01-4f09-975a-77eb055e4813" Nov 25 15:13:17 crc kubenswrapper[4806]: E1125 15:13:17.212042 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified\\\"\"" pod="openstack/ovn-controller-l6mv2" podUID="c90d07c6-4f04-48d1-ae1f-bb15f60ba44b" Nov 25 15:13:17 crc kubenswrapper[4806]: W1125 15:13:17.306265 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcdc49832_6f51_4954_ab25_3f84f6956d1f.slice/crio-2f6fcea66c6be69e0eb9e5b03d07776203826ba5be3c1425ea36462eb88d51c7 WatchSource:0}: Error finding container 2f6fcea66c6be69e0eb9e5b03d07776203826ba5be3c1425ea36462eb88d51c7: Status 404 returned error can't find the container with id 2f6fcea66c6be69e0eb9e5b03d07776203826ba5be3c1425ea36462eb88d51c7 Nov 25 15:13:17 crc kubenswrapper[4806]: E1125 15:13:17.347279 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 25 15:13:17 crc kubenswrapper[4806]: E1125 15:13:17.347495 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-scmk2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-ppmxl_openstack(994363da-e750-4d6d-9559-7eca7054bd4b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 15:13:17 crc kubenswrapper[4806]: E1125 15:13:17.348797 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-ppmxl" podUID="994363da-e750-4d6d-9559-7eca7054bd4b" Nov 25 15:13:17 crc kubenswrapper[4806]: I1125 15:13:17.660273 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-779849886d-mzf6h"] Nov 25 15:13:17 crc kubenswrapper[4806]: I1125 15:13:17.673020 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-56cd74f89f-bs2h7"] Nov 25 15:13:17 crc kubenswrapper[4806]: I1125 15:13:17.684491 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-dhcsq"] Nov 25 15:13:17 crc kubenswrapper[4806]: I1125 15:13:17.693787 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-548665d79b-vt8jx"] Nov 25 15:13:17 crc kubenswrapper[4806]: I1125 15:13:17.702702 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-76cc998948-gbg2h"] Nov 25 15:13:17 crc kubenswrapper[4806]: I1125 15:13:17.711201 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-76cc998948-fxwbg"] Nov 25 15:13:17 crc kubenswrapper[4806]: I1125 15:13:17.746586 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 25 15:13:17 crc kubenswrapper[4806]: I1125 15:13:17.789195 4806 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Nov 25 15:13:17 crc kubenswrapper[4806]: W1125 15:13:17.887853 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1a1861d_9755_4f0b_8644_37e0e35584e1.slice/crio-c9790aa8dab1854fea67ec8b00fd9398c026eecc92b836ffc48b5a51e0b56e73 WatchSource:0}: Error finding container c9790aa8dab1854fea67ec8b00fd9398c026eecc92b836ffc48b5a51e0b56e73: Status 404 returned error can't find the container with id c9790aa8dab1854fea67ec8b00fd9398c026eecc92b836ffc48b5a51e0b56e73 Nov 25 15:13:17 crc kubenswrapper[4806]: W1125 15:13:17.892863 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb8eb50b_2bea_43d0_b0b6_698bc3709b1d.slice/crio-4dea698fb99c8543b14663545deb5076dac4794c697883e3e646deb93d7216e0 WatchSource:0}: Error finding container 4dea698fb99c8543b14663545deb5076dac4794c697883e3e646deb93d7216e0: Status 404 returned error can't find the container with id 4dea698fb99c8543b14663545deb5076dac4794c697883e3e646deb93d7216e0 Nov 25 15:13:17 crc kubenswrapper[4806]: W1125 15:13:17.895832 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2235e648_6ec4_4d98_a879_46f4f56b93e0.slice/crio-514de486ef32c502d2f4a07fb47e5ec29d74ee8026e229da87e1ba2e4a069e3b WatchSource:0}: Error finding container 514de486ef32c502d2f4a07fb47e5ec29d74ee8026e229da87e1ba2e4a069e3b: Status 404 returned error can't find the container with id 514de486ef32c502d2f4a07fb47e5ec29d74ee8026e229da87e1ba2e4a069e3b Nov 25 15:13:17 crc kubenswrapper[4806]: I1125 15:13:17.921685 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-4hs22" Nov 25 15:13:18 crc kubenswrapper[4806]: I1125 15:13:18.018070 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d59b85b-a8d5-4451-aad3-6d53ba2798a4-config\") pod \"7d59b85b-a8d5-4451-aad3-6d53ba2798a4\" (UID: \"7d59b85b-a8d5-4451-aad3-6d53ba2798a4\") " Nov 25 15:13:18 crc kubenswrapper[4806]: I1125 15:13:18.018211 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7z9j\" (UniqueName: \"kubernetes.io/projected/7d59b85b-a8d5-4451-aad3-6d53ba2798a4-kube-api-access-x7z9j\") pod \"7d59b85b-a8d5-4451-aad3-6d53ba2798a4\" (UID: \"7d59b85b-a8d5-4451-aad3-6d53ba2798a4\") " Nov 25 15:13:18 crc kubenswrapper[4806]: I1125 15:13:18.019232 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d59b85b-a8d5-4451-aad3-6d53ba2798a4-config" (OuterVolumeSpecName: "config") pod "7d59b85b-a8d5-4451-aad3-6d53ba2798a4" (UID: "7d59b85b-a8d5-4451-aad3-6d53ba2798a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:13:18 crc kubenswrapper[4806]: I1125 15:13:18.025102 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d59b85b-a8d5-4451-aad3-6d53ba2798a4-kube-api-access-x7z9j" (OuterVolumeSpecName: "kube-api-access-x7z9j") pod "7d59b85b-a8d5-4451-aad3-6d53ba2798a4" (UID: "7d59b85b-a8d5-4451-aad3-6d53ba2798a4"). InnerVolumeSpecName "kube-api-access-x7z9j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:13:18 crc kubenswrapper[4806]: I1125 15:13:18.120498 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d59b85b-a8d5-4451-aad3-6d53ba2798a4-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:18 crc kubenswrapper[4806]: I1125 15:13:18.120526 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7z9j\" (UniqueName: \"kubernetes.io/projected/7d59b85b-a8d5-4451-aad3-6d53ba2798a4-kube-api-access-x7z9j\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:18 crc kubenswrapper[4806]: I1125 15:13:18.215835 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2235e648-6ec4-4d98-a879-46f4f56b93e0","Type":"ContainerStarted","Data":"514de486ef32c502d2f4a07fb47e5ec29d74ee8026e229da87e1ba2e4a069e3b"} Nov 25 15:13:18 crc kubenswrapper[4806]: I1125 15:13:18.229747 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-76cc998948-gbg2h" event={"ID":"1b3c25ba-4426-45b4-8f79-95fd0e07823b","Type":"ContainerStarted","Data":"0205ca9f6e3ca25eb8037feda6d5dad2ef8f5d8acc3ac002a20047da6c4c86ed"} Nov 25 15:13:18 crc kubenswrapper[4806]: I1125 15:13:18.231402 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-76cc998948-fxwbg" event={"ID":"a1a1861d-9755-4f0b-8644-37e0e35584e1","Type":"ContainerStarted","Data":"c9790aa8dab1854fea67ec8b00fd9398c026eecc92b836ffc48b5a51e0b56e73"} Nov 25 15:13:18 crc kubenswrapper[4806]: I1125 15:13:18.232940 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"b61e9f82-3559-4710-8b06-4bc2c5997224","Type":"ContainerStarted","Data":"45f1be6014b82e69293131eaac4705f972b94a5999a7b9706537d25ea727a190"} Nov 25 15:13:18 crc kubenswrapper[4806]: I1125 15:13:18.233962 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-56cd74f89f-bs2h7" event={"ID":"4c17fab0-86a8-4e8b-b790-c0a9c91979a3","Type":"ContainerStarted","Data":"808b1e261f0e6516f6f43dbf7bc12e81e48eb9ca59c63666ef0102d180634ccf"} Nov 25 15:13:18 crc kubenswrapper[4806]: I1125 15:13:18.235046 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"b6ecb712-3cf0-4cd4-b823-0ffd452437ce","Type":"ContainerStarted","Data":"a34136dacd3314d5d19a2c75d13827704ccfc58b4c0d12a4514eb7849cd94dba"} Nov 25 15:13:18 crc kubenswrapper[4806]: I1125 15:13:18.237252 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-4hs22" event={"ID":"7d59b85b-a8d5-4451-aad3-6d53ba2798a4","Type":"ContainerDied","Data":"f28fd12b18cdec3c494c70468d0719e20d8bb23379b39e5214e6f7e62db47242"} Nov 25 15:13:18 crc kubenswrapper[4806]: I1125 15:13:18.237391 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-4hs22" Nov 25 15:13:18 crc kubenswrapper[4806]: I1125 15:13:18.239592 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-779849886d-mzf6h" event={"ID":"f0dc94d5-1470-40f4-8969-84c9690164c8","Type":"ContainerStarted","Data":"bbf9377db27f0f8f0d5e86a95f4984e0fb0c261337e580ed0f8c4892768ca4d4"} Nov 25 15:13:18 crc kubenswrapper[4806]: I1125 15:13:18.240530 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"cdc49832-6f51-4954-ab25-3f84f6956d1f","Type":"ContainerStarted","Data":"2f6fcea66c6be69e0eb9e5b03d07776203826ba5be3c1425ea36462eb88d51c7"} Nov 25 15:13:18 crc kubenswrapper[4806]: I1125 15:13:18.241280 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-548665d79b-vt8jx" event={"ID":"39c749dc-99ca-45d4-b49a-3e8925e0230a","Type":"ContainerStarted","Data":"3e24604694213b6a8cdd86983e4d20fc3ffa3db9cb32273d19f6890707026017"} Nov 25 15:13:18 crc kubenswrapper[4806]: I1125 15:13:18.243243 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-dhcsq" event={"ID":"cb8eb50b-2bea-43d0-b0b6-698bc3709b1d","Type":"ContainerStarted","Data":"4dea698fb99c8543b14663545deb5076dac4794c697883e3e646deb93d7216e0"} Nov 25 15:13:18 crc kubenswrapper[4806]: I1125 15:13:18.424452 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-4hs22"] Nov 25 15:13:18 crc kubenswrapper[4806]: I1125 15:13:18.432538 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-4hs22"] Nov 25 15:13:18 crc kubenswrapper[4806]: I1125 15:13:18.934693 4806 patch_prober.go:28] interesting pod/machine-config-daemon-kclf8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:13:18 crc kubenswrapper[4806]: I1125 15:13:18.934753 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:13:19 crc kubenswrapper[4806]: I1125 15:13:19.132804 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-ppmxl" Nov 25 15:13:19 crc kubenswrapper[4806]: I1125 15:13:19.142846 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/994363da-e750-4d6d-9559-7eca7054bd4b-config\") pod \"994363da-e750-4d6d-9559-7eca7054bd4b\" (UID: \"994363da-e750-4d6d-9559-7eca7054bd4b\") " Nov 25 15:13:19 crc kubenswrapper[4806]: I1125 15:13:19.142996 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scmk2\" (UniqueName: \"kubernetes.io/projected/994363da-e750-4d6d-9559-7eca7054bd4b-kube-api-access-scmk2\") pod \"994363da-e750-4d6d-9559-7eca7054bd4b\" (UID: \"994363da-e750-4d6d-9559-7eca7054bd4b\") " Nov 25 15:13:19 crc kubenswrapper[4806]: I1125 15:13:19.143043 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/994363da-e750-4d6d-9559-7eca7054bd4b-dns-svc\") pod \"994363da-e750-4d6d-9559-7eca7054bd4b\" (UID: \"994363da-e750-4d6d-9559-7eca7054bd4b\") " Nov 25 15:13:19 crc kubenswrapper[4806]: I1125 15:13:19.143661 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/994363da-e750-4d6d-9559-7eca7054bd4b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "994363da-e750-4d6d-9559-7eca7054bd4b" (UID: "994363da-e750-4d6d-9559-7eca7054bd4b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:13:19 crc kubenswrapper[4806]: I1125 15:13:19.143678 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/994363da-e750-4d6d-9559-7eca7054bd4b-config" (OuterVolumeSpecName: "config") pod "994363da-e750-4d6d-9559-7eca7054bd4b" (UID: "994363da-e750-4d6d-9559-7eca7054bd4b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:13:19 crc kubenswrapper[4806]: I1125 15:13:19.143918 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/994363da-e750-4d6d-9559-7eca7054bd4b-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:19 crc kubenswrapper[4806]: I1125 15:13:19.143944 4806 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/994363da-e750-4d6d-9559-7eca7054bd4b-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:19 crc kubenswrapper[4806]: I1125 15:13:19.156778 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/994363da-e750-4d6d-9559-7eca7054bd4b-kube-api-access-scmk2" (OuterVolumeSpecName: "kube-api-access-scmk2") pod "994363da-e750-4d6d-9559-7eca7054bd4b" (UID: "994363da-e750-4d6d-9559-7eca7054bd4b"). InnerVolumeSpecName "kube-api-access-scmk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:13:19 crc kubenswrapper[4806]: I1125 15:13:19.244835 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scmk2\" (UniqueName: \"kubernetes.io/projected/994363da-e750-4d6d-9559-7eca7054bd4b-kube-api-access-scmk2\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:19 crc kubenswrapper[4806]: I1125 15:13:19.251369 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-ppmxl" event={"ID":"994363da-e750-4d6d-9559-7eca7054bd4b","Type":"ContainerDied","Data":"9b224459560cd53965cf193eef9261f4697ea0747e57ae21285099bbe57b5726"} Nov 25 15:13:19 crc kubenswrapper[4806]: I1125 15:13:19.251507 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-ppmxl" Nov 25 15:13:19 crc kubenswrapper[4806]: I1125 15:13:19.317394 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ppmxl"] Nov 25 15:13:19 crc kubenswrapper[4806]: I1125 15:13:19.325969 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-ppmxl"] Nov 25 15:13:20 crc kubenswrapper[4806]: I1125 15:13:20.109084 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d59b85b-a8d5-4451-aad3-6d53ba2798a4" path="/var/lib/kubelet/pods/7d59b85b-a8d5-4451-aad3-6d53ba2798a4/volumes" Nov 25 15:13:20 crc kubenswrapper[4806]: I1125 15:13:20.109605 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="994363da-e750-4d6d-9559-7eca7054bd4b" path="/var/lib/kubelet/pods/994363da-e750-4d6d-9559-7eca7054bd4b/volumes" Nov 25 15:13:21 crc kubenswrapper[4806]: I1125 15:13:21.477802 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"973c8ad5-1b21-4972-94ea-d0f4323db012","Type":"ContainerStarted","Data":"007c3d7c4479c3e54daabc30a491b68f01e37829f6df5622da6a3a767e77053b"} Nov 25 15:13:29 crc kubenswrapper[4806]: I1125 15:13:29.554838 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"fc946fac-46fb-45c0-8a69-2e481bf9d947","Type":"ContainerStarted","Data":"06c7add753d9656b03ddf3ef2ecefdf3fe27cff4663650a99bae5b9716daa2a4"} Nov 25 15:13:29 crc kubenswrapper[4806]: I1125 15:13:29.560592 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-56cd74f89f-bs2h7" event={"ID":"4c17fab0-86a8-4e8b-b790-c0a9c91979a3","Type":"ContainerStarted","Data":"4124ad93b605b66704805d4712569847563d2cda166707e59ee3330558a5db9d"} Nov 25 15:13:29 crc kubenswrapper[4806]: I1125 15:13:29.560765 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-distributor-56cd74f89f-bs2h7" Nov 25 15:13:29 crc kubenswrapper[4806]: I1125 15:13:29.563016 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"b6ecb712-3cf0-4cd4-b823-0ffd452437ce","Type":"ContainerStarted","Data":"ea5bdc7ccbbf619d906cc85f2edeca616a5c12ee3f2bdfd1538e5bbecc583120"} Nov 25 15:13:29 crc kubenswrapper[4806]: I1125 15:13:29.563178 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-compactor-0" Nov 25 15:13:29 crc kubenswrapper[4806]: I1125 15:13:29.565641 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-svmbm" 
event={"ID":"0ebac08b-471e-4b28-98fb-b9bab2e3f505","Type":"ContainerStarted","Data":"e6f57704c236607a1f5805303c8bee571b5a30dce36f115d24d21f0613ed50bf"} Nov 25 15:13:29 crc kubenswrapper[4806]: I1125 15:13:29.570975 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"05ade21d-01af-4a3c-a82a-83b3861244ec","Type":"ContainerStarted","Data":"75b09608f37c2be3772760339ed3e063996e9a92d36e7fb7ee974e5892679540"} Nov 25 15:13:29 crc kubenswrapper[4806]: I1125 15:13:29.574148 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-779849886d-mzf6h" event={"ID":"f0dc94d5-1470-40f4-8969-84c9690164c8","Type":"ContainerStarted","Data":"ef60c99265eb33255242d802f4d130c6bce82cd8846ec40f607dacad3193af6d"} Nov 25 15:13:29 crc kubenswrapper[4806]: I1125 15:13:29.574273 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-query-frontend-779849886d-mzf6h" Nov 25 15:13:29 crc kubenswrapper[4806]: I1125 15:13:29.576954 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"b61e9f82-3559-4710-8b06-4bc2c5997224","Type":"ContainerStarted","Data":"0785dce8ffb4cda7e05d774dc3763c557822f09a2172a10eeec8f96ff419fde9"} Nov 25 15:13:29 crc kubenswrapper[4806]: I1125 15:13:29.578094 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-index-gateway-0" Nov 25 15:13:29 crc kubenswrapper[4806]: I1125 15:13:29.579592 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2235e648-6ec4-4d98-a879-46f4f56b93e0","Type":"ContainerStarted","Data":"32fbada0ac1c527bb141eb60d1e71fce6bfe635e51476b30773b46a5a77fc316"} Nov 25 15:13:29 crc kubenswrapper[4806]: I1125 15:13:29.581096 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"82ed644a-fbd9-4ccc-a348-37293a1795f5","Type":"ContainerStarted","Data":"ab5c614ccfe699069e3e9092ebeaff187b70b09d82a1e3c7e2fc62342b9f3838"} Nov 25 15:13:29 crc kubenswrapper[4806]: I1125 15:13:29.638216 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-query-frontend-779849886d-mzf6h" podStartSLOduration=24.453565226 podStartE2EDuration="33.638192971s" podCreationTimestamp="2025-11-25 15:12:56 +0000 UTC" firstStartedPulling="2025-11-25 15:13:17.875558551 +0000 UTC m=+1230.527700962" lastFinishedPulling="2025-11-25 15:13:27.060186296 +0000 UTC m=+1239.712328707" observedRunningTime="2025-11-25 15:13:29.628633692 +0000 UTC m=+1242.280776103" watchObservedRunningTime="2025-11-25 15:13:29.638192971 +0000 UTC m=+1242.290335382" Nov 25 15:13:29 crc kubenswrapper[4806]: I1125 15:13:29.655529 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-distributor-56cd74f89f-bs2h7" podStartSLOduration=25.44947573 podStartE2EDuration="34.655508808s" podCreationTimestamp="2025-11-25 15:12:55 +0000 UTC" firstStartedPulling="2025-11-25 15:13:17.87625156 +0000 UTC m=+1230.528393961" lastFinishedPulling="2025-11-25 15:13:27.082284628 +0000 UTC m=+1239.734427039" observedRunningTime="2025-11-25 15:13:29.652925655 +0000 UTC m=+1242.305068076" watchObservedRunningTime="2025-11-25 15:13:29.655508808 +0000 UTC m=+1242.307651219" Nov 25 15:13:29 crc kubenswrapper[4806]: I1125 15:13:29.671262 4806 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/cloudkitty-lokistack-compactor-0" podStartSLOduration=24.797809072 podStartE2EDuration="33.671244201s" podCreationTimestamp="2025-11-25 15:12:56 +0000 UTC" firstStartedPulling="2025-11-25 15:13:18.189670459 +0000 UTC m=+1230.841812870" lastFinishedPulling="2025-11-25 15:13:27.063105588 +0000 UTC m=+1239.715247999" observedRunningTime="2025-11-25 15:13:29.669908363 +0000 UTC m=+1242.322050804" watchObservedRunningTime="2025-11-25 15:13:29.671244201 +0000 UTC m=+1242.323386612" Nov 25 15:13:29 crc kubenswrapper[4806]: I1125 15:13:29.698922 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-index-gateway-0" podStartSLOduration=23.9230653 podStartE2EDuration="33.698895349s" podCreationTimestamp="2025-11-25 15:12:56 +0000 UTC" firstStartedPulling="2025-11-25 15:13:17.306416958 +0000 UTC m=+1229.958559369" lastFinishedPulling="2025-11-25 15:13:27.082247007 +0000 UTC m=+1239.734389418" observedRunningTime="2025-11-25 15:13:29.694098484 +0000 UTC m=+1242.346240895" watchObservedRunningTime="2025-11-25 15:13:29.698895349 +0000 UTC m=+1242.351037760" Nov 25 15:13:30 crc kubenswrapper[4806]: I1125 15:13:30.591386 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ec42948f-25cf-4ae0-8553-dfd5dcc43021","Type":"ContainerStarted","Data":"e9ab7ca3b5b4d7730bb5fb4852cedbd5f77a9b9247f6af66990011974fc5a860"} Nov 25 15:13:30 crc kubenswrapper[4806]: I1125 15:13:30.591607 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"ec42948f-25cf-4ae0-8553-dfd5dcc43021","Type":"ContainerStarted","Data":"0e07ef9265e62c6c44d1d5090912f686eb0c2694eefd3250c83aa22ab030d122"} Nov 25 15:13:30 crc kubenswrapper[4806]: I1125 15:13:30.593342 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"cdc49832-6f51-4954-ab25-3f84f6956d1f","Type":"ContainerStarted","Data":"0a80915e122a7d9f420123a24be0bdf9021048fd06eb19bb70e27efb5d88b865"} Nov 25 15:13:30 crc kubenswrapper[4806]: I1125 15:13:30.593618 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-ingester-0" Nov 25 15:13:30 crc kubenswrapper[4806]: I1125 15:13:30.595302 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"01548134-90ee-4d44-ab5e-60a0933ee1ea","Type":"ContainerStarted","Data":"c474c7b47d58100702d7c63f63d32548b20df2d884ef8a139b51efe4f42cbe75"} Nov 25 15:13:30 crc kubenswrapper[4806]: I1125 15:13:30.597108 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-dhcsq" event={"ID":"cb8eb50b-2bea-43d0-b0b6-698bc3709b1d","Type":"ContainerStarted","Data":"70bf669d8ea95932ef19dcabe717ccefb473acfe5325e432a941b2f775fad782"} Nov 25 15:13:30 crc kubenswrapper[4806]: I1125 15:13:30.599227 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2235e648-6ec4-4d98-a879-46f4f56b93e0","Type":"ContainerStarted","Data":"44a605d6540cb26690ce708ee164a412e3a3d24d335f6708fe79f61e45b23358"} Nov 25 15:13:30 crc kubenswrapper[4806]: I1125 15:13:30.600748 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-76cc998948-gbg2h" event={"ID":"1b3c25ba-4426-45b4-8f79-95fd0e07823b","Type":"ContainerStarted","Data":"2c69f785648cab553b7e1c7dde06a228e0f4bcfe696143ab2dce7e73c6d0e806"} Nov 25 15:13:30 crc kubenswrapper[4806]: I1125 
15:13:30.600973 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-76cc998948-gbg2h" Nov 25 15:13:30 crc kubenswrapper[4806]: I1125 15:13:30.602291 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-76cc998948-fxwbg" event={"ID":"a1a1861d-9755-4f0b-8644-37e0e35584e1","Type":"ContainerStarted","Data":"6d169de69b1a694f1c98642c642bb93b4e6cb6d57627b6f583826f187cadd3ab"} Nov 25 15:13:30 crc kubenswrapper[4806]: I1125 15:13:30.603873 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"31cd92ea-0a03-4883-9d96-532a9d5c3bd0","Type":"ContainerStarted","Data":"b1f7ae8f49d0d1ad362cfa7258e28f73f769614d8d546945aa9fd3e6c66b9357"} Nov 25 15:13:30 crc kubenswrapper[4806]: I1125 15:13:30.604062 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 25 15:13:30 crc kubenswrapper[4806]: I1125 15:13:30.605500 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0c667706-daaf-4283-9ebb-64bae95b4914","Type":"ContainerStarted","Data":"1f1a0490164edb3c89c5b51a9d31aaa696d25d15d48c04bc6a713978cf03fb3b"} Nov 25 15:13:30 crc kubenswrapper[4806]: I1125 15:13:30.608769 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-548665d79b-vt8jx" event={"ID":"39c749dc-99ca-45d4-b49a-3e8925e0230a","Type":"ContainerStarted","Data":"780675a00b70a772b187c5f948a57f006460f75cd8f426279dddd4b6dcd77cab"} Nov 25 15:13:30 crc kubenswrapper[4806]: I1125 15:13:30.608940 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-querier-548665d79b-vt8jx" Nov 25 15:13:30 crc kubenswrapper[4806]: I1125 15:13:30.611629 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"fc89f2fe-23ee-4e5a-ba8f-8693fff4da51","Type":"ContainerStarted","Data":"cf8f0241e705081fb0c99432c03e12e4ab25b9c9d5ee3d18a6dc6d839bf2b616"} Nov 25 15:13:30 crc kubenswrapper[4806]: I1125 15:13:30.612078 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 25 15:13:30 crc kubenswrapper[4806]: I1125 15:13:30.616036 4806 generic.go:334] "Generic (PLEG): container finished" podID="0ebac08b-471e-4b28-98fb-b9bab2e3f505" containerID="e6f57704c236607a1f5805303c8bee571b5a30dce36f115d24d21f0613ed50bf" exitCode=0 Nov 25 15:13:30 crc kubenswrapper[4806]: I1125 15:13:30.617737 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-svmbm" event={"ID":"0ebac08b-471e-4b28-98fb-b9bab2e3f505","Type":"ContainerDied","Data":"e6f57704c236607a1f5805303c8bee571b5a30dce36f115d24d21f0613ed50bf"} Nov 25 15:13:30 crc kubenswrapper[4806]: I1125 15:13:30.619891 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-76cc998948-gbg2h" Nov 25 15:13:30 crc kubenswrapper[4806]: I1125 15:13:30.621420 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=27.783191192 podStartE2EDuration="38.621401133s" podCreationTimestamp="2025-11-25 15:12:52 +0000 UTC" firstStartedPulling="2025-11-25 15:13:16.225800473 +0000 UTC m=+1228.877942884" lastFinishedPulling="2025-11-25 15:13:27.064010414 +0000 UTC m=+1239.716152825" observedRunningTime="2025-11-25 15:13:30.612611976 +0000 UTC m=+1243.264754397" 
watchObservedRunningTime="2025-11-25 15:13:30.621401133 +0000 UTC m=+1243.273543544"
Nov 25 15:13:30 crc kubenswrapper[4806]: I1125 15:13:30.640210 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=9.22611105 podStartE2EDuration="48.640188302s" podCreationTimestamp="2025-11-25 15:12:42 +0000 UTC" firstStartedPulling="2025-11-25 15:12:43.577179452 +0000 UTC m=+1196.229321863" lastFinishedPulling="2025-11-25 15:13:22.991256694 +0000 UTC m=+1235.643399115" observedRunningTime="2025-11-25 15:13:30.63406924 +0000 UTC m=+1243.286211661" watchObservedRunningTime="2025-11-25 15:13:30.640188302 +0000 UTC m=+1243.292330713"
Nov 25 15:13:30 crc kubenswrapper[4806]: I1125 15:13:30.650701 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=29.489172352 podStartE2EDuration="38.650679187s" podCreationTimestamp="2025-11-25 15:12:52 +0000 UTC" firstStartedPulling="2025-11-25 15:13:17.898914758 +0000 UTC m=+1230.551057169" lastFinishedPulling="2025-11-25 15:13:27.060421593 +0000 UTC m=+1239.712564004" observedRunningTime="2025-11-25 15:13:30.649463913 +0000 UTC m=+1243.301606334" watchObservedRunningTime="2025-11-25 15:13:30.650679187 +0000 UTC m=+1243.302821598"
Nov 25 15:13:30 crc kubenswrapper[4806]: I1125 15:13:30.679077 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-querier-548665d79b-vt8jx" podStartSLOduration=25.899044735 podStartE2EDuration="34.679054706s" podCreationTimestamp="2025-11-25 15:12:56 +0000 UTC" firstStartedPulling="2025-11-25 15:13:17.879569814 +0000 UTC m=+1230.531712225" lastFinishedPulling="2025-11-25 15:13:26.659579775 +0000 UTC m=+1239.311722196" observedRunningTime="2025-11-25 15:13:30.668771666 +0000 UTC m=+1243.320914087" watchObservedRunningTime="2025-11-25 15:13:30.679054706 +0000 UTC m=+1243.331197117"
Nov 25 15:13:30 crc kubenswrapper[4806]: I1125 15:13:30.703551 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-ingester-0" podStartSLOduration=25.957948415 podStartE2EDuration="35.703530384s" podCreationTimestamp="2025-11-25 15:12:55 +0000 UTC" firstStartedPulling="2025-11-25 15:13:17.315427181 +0000 UTC m=+1229.967569592" lastFinishedPulling="2025-11-25 15:13:27.06100915 +0000 UTC m=+1239.713151561" observedRunningTime="2025-11-25 15:13:30.695264802 +0000 UTC m=+1243.347407213" watchObservedRunningTime="2025-11-25 15:13:30.703530384 +0000 UTC m=+1243.355672795"
Nov 25 15:13:30 crc kubenswrapper[4806]: I1125 15:13:30.753766 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=10.62890534 podStartE2EDuration="46.753748067s" podCreationTimestamp="2025-11-25 15:12:44 +0000 UTC" firstStartedPulling="2025-11-25 15:12:51.008767655 +0000 UTC m=+1203.660910066" lastFinishedPulling="2025-11-25 15:13:27.133610382 +0000 UTC m=+1239.785752793" observedRunningTime="2025-11-25 15:13:30.743932021 +0000 UTC m=+1243.396074432" watchObservedRunningTime="2025-11-25 15:13:30.753748067 +0000 UTC m=+1243.405890478"
Nov 25 15:13:30 crc kubenswrapper[4806]: I1125 15:13:30.836547 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-76cc998948-gbg2h" podStartSLOduration=25.452737067 podStartE2EDuration="34.836520916s" podCreationTimestamp="2025-11-25 15:12:56 +0000 UTC" firstStartedPulling="2025-11-25 15:13:17.759591008 +0000 UTC m=+1230.411733419" lastFinishedPulling="2025-11-25 15:13:27.143374857 +0000 UTC m=+1239.795517268" observedRunningTime="2025-11-25 15:13:30.821653698 +0000 UTC m=+1243.473796149" watchObservedRunningTime="2025-11-25 15:13:30.836520916 +0000 UTC m=+1243.488663327"
Nov 25 15:13:30 crc kubenswrapper[4806]: I1125 15:13:30.838146 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-dhcsq" podStartSLOduration=27.547607667 podStartE2EDuration="36.838137802s" podCreationTimestamp="2025-11-25 15:12:54 +0000 UTC" firstStartedPulling="2025-11-25 15:13:17.89827133 +0000 UTC m=+1230.550413741" lastFinishedPulling="2025-11-25 15:13:27.188801455 +0000 UTC m=+1239.840943876" observedRunningTime="2025-11-25 15:13:30.797044415 +0000 UTC m=+1243.449186826" watchObservedRunningTime="2025-11-25 15:13:30.838137802 +0000 UTC m=+1243.490280213"
Nov 25 15:13:30 crc kubenswrapper[4806]: I1125 15:13:30.853089 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-76cc998948-fxwbg" podStartSLOduration=25.653054653 podStartE2EDuration="34.853067782s" podCreationTimestamp="2025-11-25 15:12:56 +0000 UTC" firstStartedPulling="2025-11-25 15:13:17.890499701 +0000 UTC m=+1230.542642112" lastFinishedPulling="2025-11-25 15:13:27.09051283 +0000 UTC m=+1239.742655241" observedRunningTime="2025-11-25 15:13:30.838920164 +0000 UTC m=+1243.491062595" watchObservedRunningTime="2025-11-25 15:13:30.853067782 +0000 UTC m=+1243.505210193"
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.042044 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-mn6ms"]
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.113259 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-q6dxp"]
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.115387 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-q6dxp"
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.115611 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-q6dxp"]
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.122552 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.178606 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a870706-cfbf-4cea-a993-238c06b56be3-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-q6dxp\" (UID: \"3a870706-cfbf-4cea-a993-238c06b56be3\") " pod="openstack/dnsmasq-dns-7fd796d7df-q6dxp"
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.178692 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a870706-cfbf-4cea-a993-238c06b56be3-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-q6dxp\" (UID: \"3a870706-cfbf-4cea-a993-238c06b56be3\") " pod="openstack/dnsmasq-dns-7fd796d7df-q6dxp"
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.178715 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9brw\" (UniqueName: \"kubernetes.io/projected/3a870706-cfbf-4cea-a993-238c06b56be3-kube-api-access-h9brw\") pod \"dnsmasq-dns-7fd796d7df-q6dxp\" (UID: \"3a870706-cfbf-4cea-a993-238c06b56be3\") " pod="openstack/dnsmasq-dns-7fd796d7df-q6dxp"
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.178870 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a870706-cfbf-4cea-a993-238c06b56be3-config\") pod \"dnsmasq-dns-7fd796d7df-q6dxp\" (UID: \"3a870706-cfbf-4cea-a993-238c06b56be3\") " pod="openstack/dnsmasq-dns-7fd796d7df-q6dxp"
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.285197 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9brw\" (UniqueName: \"kubernetes.io/projected/3a870706-cfbf-4cea-a993-238c06b56be3-kube-api-access-h9brw\") pod \"dnsmasq-dns-7fd796d7df-q6dxp\" (UID: \"3a870706-cfbf-4cea-a993-238c06b56be3\") " pod="openstack/dnsmasq-dns-7fd796d7df-q6dxp"
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.285813 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a870706-cfbf-4cea-a993-238c06b56be3-config\") pod \"dnsmasq-dns-7fd796d7df-q6dxp\" (UID: \"3a870706-cfbf-4cea-a993-238c06b56be3\") " pod="openstack/dnsmasq-dns-7fd796d7df-q6dxp"
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.285887 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a870706-cfbf-4cea-a993-238c06b56be3-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-q6dxp\" (UID: \"3a870706-cfbf-4cea-a993-238c06b56be3\") " pod="openstack/dnsmasq-dns-7fd796d7df-q6dxp"
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.285937 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a870706-cfbf-4cea-a993-238c06b56be3-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-q6dxp\" (UID: \"3a870706-cfbf-4cea-a993-238c06b56be3\") " pod="openstack/dnsmasq-dns-7fd796d7df-q6dxp"
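The pod_startup_latency_tracker entries above carry enough data to reconstruct their own arithmetic: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that E2E duration minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A minimal Python sketch, checked against the memcached-0 entry above; the timestamps are truncated to microseconds because datetime cannot hold the nanosecond digits the log prints, and the exclusion rule is an inference from these numbers, not a statement about the kubelet's internals:

from datetime import datetime

# Timestamps copied from the memcached-0 "Observed pod startup duration"
# entry above, truncated from nanoseconds to microseconds for datetime.
FMT = "%Y-%m-%d %H:%M:%S.%f %z"
created    = datetime.strptime("2025-11-25 15:12:42.000000 +0000", FMT)
first_pull = datetime.strptime("2025-11-25 15:12:43.577179 +0000", FMT)
last_pull  = datetime.strptime("2025-11-25 15:13:22.991256 +0000", FMT)
running    = datetime.strptime("2025-11-25 15:13:30.640188 +0000", FMT)

e2e = (running - created).total_seconds()             # ~48.640s = podStartE2EDuration
slo = e2e - (last_pull - first_pull).total_seconds()  # ~9.226s  = podStartSLOduration
print(f"E2E={e2e:.3f}s SLO={slo:.3f}s")

This reproduces 48.640s and 9.226s, matching the logged values to rounding; memcached-0 spent roughly 39 of its 48 seconds pulling images.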
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.287245 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a870706-cfbf-4cea-a993-238c06b56be3-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-q6dxp\" (UID: \"3a870706-cfbf-4cea-a993-238c06b56be3\") " pod="openstack/dnsmasq-dns-7fd796d7df-q6dxp"
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.287912 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a870706-cfbf-4cea-a993-238c06b56be3-config\") pod \"dnsmasq-dns-7fd796d7df-q6dxp\" (UID: \"3a870706-cfbf-4cea-a993-238c06b56be3\") " pod="openstack/dnsmasq-dns-7fd796d7df-q6dxp"
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.288461 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a870706-cfbf-4cea-a993-238c06b56be3-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-q6dxp\" (UID: \"3a870706-cfbf-4cea-a993-238c06b56be3\") " pod="openstack/dnsmasq-dns-7fd796d7df-q6dxp"
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.312010 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-njrj8"]
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.327870 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9brw\" (UniqueName: \"kubernetes.io/projected/3a870706-cfbf-4cea-a993-238c06b56be3-kube-api-access-h9brw\") pod \"dnsmasq-dns-7fd796d7df-q6dxp\" (UID: \"3a870706-cfbf-4cea-a993-238c06b56be3\") " pod="openstack/dnsmasq-dns-7fd796d7df-q6dxp"
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.365760 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-9wwsx"]
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.375741 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-9wwsx"
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.380894 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.387905 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-9wwsx"]
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.388380 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/78bdea31-bfb2-4f3f-b1ff-fb246b432b84-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-9wwsx\" (UID: \"78bdea31-bfb2-4f3f-b1ff-fb246b432b84\") " pod="openstack/dnsmasq-dns-86db49b7ff-9wwsx"
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.388437 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78bdea31-bfb2-4f3f-b1ff-fb246b432b84-config\") pod \"dnsmasq-dns-86db49b7ff-9wwsx\" (UID: \"78bdea31-bfb2-4f3f-b1ff-fb246b432b84\") " pod="openstack/dnsmasq-dns-86db49b7ff-9wwsx"
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.388512 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg85h\" (UniqueName: \"kubernetes.io/projected/78bdea31-bfb2-4f3f-b1ff-fb246b432b84-kube-api-access-cg85h\") pod \"dnsmasq-dns-86db49b7ff-9wwsx\" (UID: \"78bdea31-bfb2-4f3f-b1ff-fb246b432b84\") " pod="openstack/dnsmasq-dns-86db49b7ff-9wwsx"
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.388543 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78bdea31-bfb2-4f3f-b1ff-fb246b432b84-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-9wwsx\" (UID: \"78bdea31-bfb2-4f3f-b1ff-fb246b432b84\") " pod="openstack/dnsmasq-dns-86db49b7ff-9wwsx"
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.388567 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78bdea31-bfb2-4f3f-b1ff-fb246b432b84-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-9wwsx\" (UID: \"78bdea31-bfb2-4f3f-b1ff-fb246b432b84\") " pod="openstack/dnsmasq-dns-86db49b7ff-9wwsx"
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.456774 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-q6dxp"
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.491436 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/78bdea31-bfb2-4f3f-b1ff-fb246b432b84-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-9wwsx\" (UID: \"78bdea31-bfb2-4f3f-b1ff-fb246b432b84\") " pod="openstack/dnsmasq-dns-86db49b7ff-9wwsx"
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.491489 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78bdea31-bfb2-4f3f-b1ff-fb246b432b84-config\") pod \"dnsmasq-dns-86db49b7ff-9wwsx\" (UID: \"78bdea31-bfb2-4f3f-b1ff-fb246b432b84\") " pod="openstack/dnsmasq-dns-86db49b7ff-9wwsx"
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.491546 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cg85h\" (UniqueName: \"kubernetes.io/projected/78bdea31-bfb2-4f3f-b1ff-fb246b432b84-kube-api-access-cg85h\") pod \"dnsmasq-dns-86db49b7ff-9wwsx\" (UID: \"78bdea31-bfb2-4f3f-b1ff-fb246b432b84\") " pod="openstack/dnsmasq-dns-86db49b7ff-9wwsx"
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.491568 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78bdea31-bfb2-4f3f-b1ff-fb246b432b84-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-9wwsx\" (UID: \"78bdea31-bfb2-4f3f-b1ff-fb246b432b84\") " pod="openstack/dnsmasq-dns-86db49b7ff-9wwsx"
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.491596 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78bdea31-bfb2-4f3f-b1ff-fb246b432b84-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-9wwsx\" (UID: \"78bdea31-bfb2-4f3f-b1ff-fb246b432b84\") " pod="openstack/dnsmasq-dns-86db49b7ff-9wwsx"
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.492554 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78bdea31-bfb2-4f3f-b1ff-fb246b432b84-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-9wwsx\" (UID: \"78bdea31-bfb2-4f3f-b1ff-fb246b432b84\") " pod="openstack/dnsmasq-dns-86db49b7ff-9wwsx"
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.492593 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78bdea31-bfb2-4f3f-b1ff-fb246b432b84-config\") pod \"dnsmasq-dns-86db49b7ff-9wwsx\" (UID: \"78bdea31-bfb2-4f3f-b1ff-fb246b432b84\") " pod="openstack/dnsmasq-dns-86db49b7ff-9wwsx"
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.493223 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/78bdea31-bfb2-4f3f-b1ff-fb246b432b84-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-9wwsx\" (UID: \"78bdea31-bfb2-4f3f-b1ff-fb246b432b84\") " pod="openstack/dnsmasq-dns-86db49b7ff-9wwsx"
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.493381 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78bdea31-bfb2-4f3f-b1ff-fb246b432b84-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-9wwsx\" (UID: \"78bdea31-bfb2-4f3f-b1ff-fb246b432b84\") " pod="openstack/dnsmasq-dns-86db49b7ff-9wwsx"
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.512881 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg85h\" (UniqueName: \"kubernetes.io/projected/78bdea31-bfb2-4f3f-b1ff-fb246b432b84-kube-api-access-cg85h\") pod \"dnsmasq-dns-86db49b7ff-9wwsx\" (UID: \"78bdea31-bfb2-4f3f-b1ff-fb246b432b84\") " pod="openstack/dnsmasq-dns-86db49b7ff-9wwsx"
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.621422 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-mn6ms"
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.628459 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-mn6ms"
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.628520 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-mn6ms" event={"ID":"64d9b559-93b6-4a15-a497-a7caf051dabc","Type":"ContainerDied","Data":"bff71c6588bfcd1d8e23bbd147a3774625ba3d3e0bcc44626b11076857c8adfa"}
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.649726 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-svmbm" event={"ID":"0ebac08b-471e-4b28-98fb-b9bab2e3f505","Type":"ContainerStarted","Data":"443ef93907ac0ca31228785312d05839cff38589e6a7e52835b3606a838787b8"}
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.651461 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-76cc998948-fxwbg"
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.672143 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-76cc998948-fxwbg"
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.702767 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-9wwsx"
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.737304 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-njrj8"
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.800725 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64d9b559-93b6-4a15-a497-a7caf051dabc-config\") pod \"64d9b559-93b6-4a15-a497-a7caf051dabc\" (UID: \"64d9b559-93b6-4a15-a497-a7caf051dabc\") "
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.801036 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3b99dd44-ae01-4f09-975a-77eb055e4813-dns-svc\") pod \"3b99dd44-ae01-4f09-975a-77eb055e4813\" (UID: \"3b99dd44-ae01-4f09-975a-77eb055e4813\") "
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.801090 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64d9b559-93b6-4a15-a497-a7caf051dabc-dns-svc\") pod \"64d9b559-93b6-4a15-a497-a7caf051dabc\" (UID: \"64d9b559-93b6-4a15-a497-a7caf051dabc\") "
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.801195 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8jnt\" (UniqueName: \"kubernetes.io/projected/64d9b559-93b6-4a15-a497-a7caf051dabc-kube-api-access-w8jnt\") pod \"64d9b559-93b6-4a15-a497-a7caf051dabc\" (UID: \"64d9b559-93b6-4a15-a497-a7caf051dabc\") "
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.801231 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2vkn\" (UniqueName: \"kubernetes.io/projected/3b99dd44-ae01-4f09-975a-77eb055e4813-kube-api-access-w2vkn\") pod \"3b99dd44-ae01-4f09-975a-77eb055e4813\" (UID: \"3b99dd44-ae01-4f09-975a-77eb055e4813\") "
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.801309 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b99dd44-ae01-4f09-975a-77eb055e4813-config\") pod \"3b99dd44-ae01-4f09-975a-77eb055e4813\" (UID: \"3b99dd44-ae01-4f09-975a-77eb055e4813\") "
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.803535 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64d9b559-93b6-4a15-a497-a7caf051dabc-config" (OuterVolumeSpecName: "config") pod "64d9b559-93b6-4a15-a497-a7caf051dabc" (UID: "64d9b559-93b6-4a15-a497-a7caf051dabc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.804417 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b99dd44-ae01-4f09-975a-77eb055e4813-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3b99dd44-ae01-4f09-975a-77eb055e4813" (UID: "3b99dd44-ae01-4f09-975a-77eb055e4813"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.804765 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64d9b559-93b6-4a15-a497-a7caf051dabc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "64d9b559-93b6-4a15-a497-a7caf051dabc" (UID: "64d9b559-93b6-4a15-a497-a7caf051dabc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.810811 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b99dd44-ae01-4f09-975a-77eb055e4813-config" (OuterVolumeSpecName: "config") pod "3b99dd44-ae01-4f09-975a-77eb055e4813" (UID: "3b99dd44-ae01-4f09-975a-77eb055e4813"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.818794 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64d9b559-93b6-4a15-a497-a7caf051dabc-kube-api-access-w8jnt" (OuterVolumeSpecName: "kube-api-access-w8jnt") pod "64d9b559-93b6-4a15-a497-a7caf051dabc" (UID: "64d9b559-93b6-4a15-a497-a7caf051dabc"). InnerVolumeSpecName "kube-api-access-w8jnt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.818827 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b99dd44-ae01-4f09-975a-77eb055e4813-kube-api-access-w2vkn" (OuterVolumeSpecName: "kube-api-access-w2vkn") pod "3b99dd44-ae01-4f09-975a-77eb055e4813" (UID: "3b99dd44-ae01-4f09-975a-77eb055e4813"). InnerVolumeSpecName "kube-api-access-w2vkn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.903826 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64d9b559-93b6-4a15-a497-a7caf051dabc-config\") on node \"crc\" DevicePath \"\""
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.903861 4806 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3b99dd44-ae01-4f09-975a-77eb055e4813-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.903873 4806 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64d9b559-93b6-4a15-a497-a7caf051dabc-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.903882 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8jnt\" (UniqueName: \"kubernetes.io/projected/64d9b559-93b6-4a15-a497-a7caf051dabc-kube-api-access-w8jnt\") on node \"crc\" DevicePath \"\""
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.903909 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2vkn\" (UniqueName: \"kubernetes.io/projected/3b99dd44-ae01-4f09-975a-77eb055e4813-kube-api-access-w2vkn\") on node \"crc\" DevicePath \"\""
Nov 25 15:13:31 crc kubenswrapper[4806]: I1125 15:13:31.903919 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b99dd44-ae01-4f09-975a-77eb055e4813-config\") on node \"crc\" DevicePath \"\""
Nov 25 15:13:32 crc kubenswrapper[4806]: I1125 15:13:32.004781 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-mn6ms"]
Nov 25 15:13:32 crc kubenswrapper[4806]: I1125 15:13:32.014707 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-mn6ms"]
Nov 25 15:13:32 crc kubenswrapper[4806]: I1125 15:13:32.112233 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64d9b559-93b6-4a15-a497-a7caf051dabc" path="/var/lib/kubelet/pods/64d9b559-93b6-4a15-a497-a7caf051dabc/volumes"
Nov 25 15:13:32 crc kubenswrapper[4806]: I1125 15:13:32.113075 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-q6dxp"]
Nov 25 15:13:32 crc kubenswrapper[4806]: W1125 15:13:32.116952 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a870706_cfbf_4cea_a993_238c06b56be3.slice/crio-dcf0d664bc089eca110c1a9995c8b46faf28bebe73a8c36c2c3f20f1e056fa22 WatchSource:0}: Error finding container dcf0d664bc089eca110c1a9995c8b46faf28bebe73a8c36c2c3f20f1e056fa22: Status 404 returned error can't find the container with id dcf0d664bc089eca110c1a9995c8b46faf28bebe73a8c36c2c3f20f1e056fa22
Nov 25 15:13:32 crc kubenswrapper[4806]: I1125 15:13:32.277120 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-9wwsx"]
Nov 25 15:13:32 crc kubenswrapper[4806]: I1125 15:13:32.656542 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Nov 25 15:13:32 crc kubenswrapper[4806]: I1125 15:13:32.706029 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-9wwsx" event={"ID":"78bdea31-bfb2-4f3f-b1ff-fb246b432b84","Type":"ContainerStarted","Data":"9eca85cfab23c72fc26676d70317880db26c8211391c8d469c560b93fa1caaa8"}
Nov 25 15:13:32 crc kubenswrapper[4806]: I1125 15:13:32.728294 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-njrj8" event={"ID":"3b99dd44-ae01-4f09-975a-77eb055e4813","Type":"ContainerDied","Data":"e3522db4af2e9a22a3a3a6f3980c0becad94a3248e7df2fca5fa840691f1d92e"}
Nov 25 15:13:32 crc kubenswrapper[4806]: I1125 15:13:32.728332 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-njrj8"
Nov 25 15:13:32 crc kubenswrapper[4806]: I1125 15:13:32.742724 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-q6dxp" event={"ID":"3a870706-cfbf-4cea-a993-238c06b56be3","Type":"ContainerStarted","Data":"dcf0d664bc089eca110c1a9995c8b46faf28bebe73a8c36c2c3f20f1e056fa22"}
Nov 25 15:13:32 crc kubenswrapper[4806]: I1125 15:13:32.762071 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-svmbm" event={"ID":"0ebac08b-471e-4b28-98fb-b9bab2e3f505","Type":"ContainerStarted","Data":"c88e39c797e166a5cbf8873d7d68400eb650cd4eb441e92df9b4711cdef5c248"}
Nov 25 15:13:32 crc kubenswrapper[4806]: I1125 15:13:32.763408 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-svmbm"
Nov 25 15:13:32 crc kubenswrapper[4806]: I1125 15:13:32.792938 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-njrj8"]
Nov 25 15:13:32 crc kubenswrapper[4806]: I1125 15:13:32.792991 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-njrj8"]
Nov 25 15:13:32 crc kubenswrapper[4806]: I1125 15:13:32.852372 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Nov 25 15:13:32 crc kubenswrapper[4806]: I1125 15:13:32.880673 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-svmbm" podStartSLOduration=14.820770234 podStartE2EDuration="44.8806522s" podCreationTimestamp="2025-11-25 15:12:48 +0000 UTC" firstStartedPulling="2025-11-25 15:12:58.035908468 +0000 UTC m=+1210.688050889" lastFinishedPulling="2025-11-25 15:13:28.095790444 +0000 UTC m=+1240.747932855" observedRunningTime="2025-11-25 15:13:32.830007145 +0000 UTC m=+1245.482149556" watchObservedRunningTime="2025-11-25 15:13:32.8806522 +0000 UTC m=+1245.532794611"
Nov 25 15:13:32 crc kubenswrapper[4806]: I1125 15:13:32.924391 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Nov 25 15:13:32 crc kubenswrapper[4806]: I1125 15:13:32.964767 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Nov 25 15:13:33 crc kubenswrapper[4806]: I1125 15:13:33.654410 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Nov 25 15:13:33 crc kubenswrapper[4806]: I1125 15:13:33.768860 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-svmbm"
Nov 25 15:13:33 crc kubenswrapper[4806]: I1125 15:13:33.768904 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Nov 25 15:13:34 crc kubenswrapper[4806]: I1125 15:13:34.103013 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b99dd44-ae01-4f09-975a-77eb055e4813" path="/var/lib/kubelet/pods/3b99dd44-ae01-4f09-975a-77eb055e4813/volumes"
Nov 25 15:13:34 crc kubenswrapper[4806]: I1125 15:13:34.825565 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Nov 25 15:13:34 crc kubenswrapper[4806]: I1125 15:13:34.825792 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0"
Nov 25 15:13:35 crc kubenswrapper[4806]: I1125 15:13:35.178592 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Nov 25 15:13:35 crc kubenswrapper[4806]: I1125 15:13:35.180836 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Nov 25 15:13:35 crc kubenswrapper[4806]: I1125 15:13:35.191430 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Nov 25 15:13:35 crc kubenswrapper[4806]: I1125 15:13:35.191811 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Nov 25 15:13:35 crc kubenswrapper[4806]: I1125 15:13:35.192003 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-vtbm8"
Nov 25 15:13:35 crc kubenswrapper[4806]: I1125 15:13:35.192181 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Nov 25 15:13:35 crc kubenswrapper[4806]: I1125 15:13:35.207949 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Nov 25 15:13:35 crc kubenswrapper[4806]: I1125 15:13:35.280936 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb15262a-cd0a-45e1-b1c4-9d5221f2e707-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"fb15262a-cd0a-45e1-b1c4-9d5221f2e707\") " pod="openstack/ovn-northd-0"
Nov 25 15:13:35 crc kubenswrapper[4806]: I1125 15:13:35.280983 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb15262a-cd0a-45e1-b1c4-9d5221f2e707-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"fb15262a-cd0a-45e1-b1c4-9d5221f2e707\") " pod="openstack/ovn-northd-0"
Nov 25 15:13:35 crc kubenswrapper[4806]: I1125 15:13:35.281005 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fb15262a-cd0a-45e1-b1c4-9d5221f2e707-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"fb15262a-cd0a-45e1-b1c4-9d5221f2e707\") " pod="openstack/ovn-northd-0"
Nov 25 15:13:35 crc kubenswrapper[4806]: I1125 15:13:35.281045 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkhl7\" (UniqueName: \"kubernetes.io/projected/fb15262a-cd0a-45e1-b1c4-9d5221f2e707-kube-api-access-qkhl7\") pod \"ovn-northd-0\" (UID: \"fb15262a-cd0a-45e1-b1c4-9d5221f2e707\") " pod="openstack/ovn-northd-0"
Nov 25 15:13:35 crc kubenswrapper[4806]: I1125 15:13:35.281099 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb15262a-cd0a-45e1-b1c4-9d5221f2e707-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"fb15262a-cd0a-45e1-b1c4-9d5221f2e707\") " pod="openstack/ovn-northd-0"
Nov 25 15:13:35 crc kubenswrapper[4806]: I1125 15:13:35.281130 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb15262a-cd0a-45e1-b1c4-9d5221f2e707-config\") pod \"ovn-northd-0\" (UID: \"fb15262a-cd0a-45e1-b1c4-9d5221f2e707\") " pod="openstack/ovn-northd-0"
Nov 25 15:13:35 crc kubenswrapper[4806]: I1125 15:13:35.281154 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fb15262a-cd0a-45e1-b1c4-9d5221f2e707-scripts\") pod \"ovn-northd-0\" (UID: \"fb15262a-cd0a-45e1-b1c4-9d5221f2e707\") " pod="openstack/ovn-northd-0"
Nov 25 15:13:35 crc kubenswrapper[4806]: I1125 15:13:35.382994 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb15262a-cd0a-45e1-b1c4-9d5221f2e707-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"fb15262a-cd0a-45e1-b1c4-9d5221f2e707\") " pod="openstack/ovn-northd-0"
Nov 25 15:13:35 crc kubenswrapper[4806]: I1125 15:13:35.383353 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb15262a-cd0a-45e1-b1c4-9d5221f2e707-config\") pod \"ovn-northd-0\" (UID: \"fb15262a-cd0a-45e1-b1c4-9d5221f2e707\") " pod="openstack/ovn-northd-0"
Nov 25 15:13:35 crc kubenswrapper[4806]: I1125 15:13:35.383390 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fb15262a-cd0a-45e1-b1c4-9d5221f2e707-scripts\") pod \"ovn-northd-0\" (UID: \"fb15262a-cd0a-45e1-b1c4-9d5221f2e707\") " pod="openstack/ovn-northd-0"
Nov 25 15:13:35 crc kubenswrapper[4806]: I1125 15:13:35.383444 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb15262a-cd0a-45e1-b1c4-9d5221f2e707-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"fb15262a-cd0a-45e1-b1c4-9d5221f2e707\") " pod="openstack/ovn-northd-0"
Nov 25 15:13:35 crc kubenswrapper[4806]: I1125 15:13:35.383466 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb15262a-cd0a-45e1-b1c4-9d5221f2e707-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"fb15262a-cd0a-45e1-b1c4-9d5221f2e707\") " pod="openstack/ovn-northd-0"
Nov 25 15:13:35 crc kubenswrapper[4806]: I1125 15:13:35.383486 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fb15262a-cd0a-45e1-b1c4-9d5221f2e707-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"fb15262a-cd0a-45e1-b1c4-9d5221f2e707\") " pod="openstack/ovn-northd-0"
Nov 25 15:13:35 crc kubenswrapper[4806]: I1125 15:13:35.383518 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkhl7\" (UniqueName: \"kubernetes.io/projected/fb15262a-cd0a-45e1-b1c4-9d5221f2e707-kube-api-access-qkhl7\") pod \"ovn-northd-0\" (UID: \"fb15262a-cd0a-45e1-b1c4-9d5221f2e707\") " pod="openstack/ovn-northd-0"
Nov 25 15:13:35 crc kubenswrapper[4806]: I1125 15:13:35.384676 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fb15262a-cd0a-45e1-b1c4-9d5221f2e707-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"fb15262a-cd0a-45e1-b1c4-9d5221f2e707\") " pod="openstack/ovn-northd-0"
Nov 25 15:13:35 crc kubenswrapper[4806]: I1125 15:13:35.384800 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb15262a-cd0a-45e1-b1c4-9d5221f2e707-config\") pod \"ovn-northd-0\" (UID: \"fb15262a-cd0a-45e1-b1c4-9d5221f2e707\") " pod="openstack/ovn-northd-0"
Nov 25 15:13:35 crc kubenswrapper[4806]: I1125 15:13:35.385113 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fb15262a-cd0a-45e1-b1c4-9d5221f2e707-scripts\") pod \"ovn-northd-0\" (UID: \"fb15262a-cd0a-45e1-b1c4-9d5221f2e707\") " pod="openstack/ovn-northd-0"
Nov 25 15:13:35 crc kubenswrapper[4806]: I1125 15:13:35.388654 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb15262a-cd0a-45e1-b1c4-9d5221f2e707-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"fb15262a-cd0a-45e1-b1c4-9d5221f2e707\") " pod="openstack/ovn-northd-0"
Nov 25 15:13:35 crc kubenswrapper[4806]: I1125 15:13:35.388752 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb15262a-cd0a-45e1-b1c4-9d5221f2e707-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"fb15262a-cd0a-45e1-b1c4-9d5221f2e707\") " pod="openstack/ovn-northd-0"
Nov 25 15:13:35 crc kubenswrapper[4806]: I1125 15:13:35.391510 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb15262a-cd0a-45e1-b1c4-9d5221f2e707-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"fb15262a-cd0a-45e1-b1c4-9d5221f2e707\") " pod="openstack/ovn-northd-0"
Nov 25 15:13:35 crc kubenswrapper[4806]: I1125 15:13:35.391535 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Nov 25 15:13:35 crc kubenswrapper[4806]: I1125 15:13:35.404867 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkhl7\" (UniqueName: \"kubernetes.io/projected/fb15262a-cd0a-45e1-b1c4-9d5221f2e707-kube-api-access-qkhl7\") pod \"ovn-northd-0\" (UID: \"fb15262a-cd0a-45e1-b1c4-9d5221f2e707\") " pod="openstack/ovn-northd-0"
Nov 25 15:13:35 crc kubenswrapper[4806]: I1125 15:13:35.507668 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Nov 25 15:13:35 crc kubenswrapper[4806]: I1125 15:13:35.788279 4806 generic.go:334] "Generic (PLEG): container finished" podID="82ed644a-fbd9-4ccc-a348-37293a1795f5" containerID="ab5c614ccfe699069e3e9092ebeaff187b70b09d82a1e3c7e2fc62342b9f3838" exitCode=0
Nov 25 15:13:35 crc kubenswrapper[4806]: I1125 15:13:35.788454 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"82ed644a-fbd9-4ccc-a348-37293a1795f5","Type":"ContainerDied","Data":"ab5c614ccfe699069e3e9092ebeaff187b70b09d82a1e3c7e2fc62342b9f3838"}
Nov 25 15:13:35 crc kubenswrapper[4806]: I1125 15:13:35.798476 4806 generic.go:334] "Generic (PLEG): container finished" podID="0c667706-daaf-4283-9ebb-64bae95b4914" containerID="1f1a0490164edb3c89c5b51a9d31aaa696d25d15d48c04bc6a713978cf03fb3b" exitCode=0
Nov 25 15:13:35 crc kubenswrapper[4806]: I1125 15:13:35.798575 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0c667706-daaf-4283-9ebb-64bae95b4914","Type":"ContainerDied","Data":"1f1a0490164edb3c89c5b51a9d31aaa696d25d15d48c04bc6a713978cf03fb3b"}
Nov 25 15:13:35 crc kubenswrapper[4806]: I1125 15:13:35.803652 4806 generic.go:334] "Generic (PLEG): container finished" podID="fc946fac-46fb-45c0-8a69-2e481bf9d947" containerID="06c7add753d9656b03ddf3ef2ecefdf3fe27cff4663650a99bae5b9716daa2a4" exitCode=0
Nov 25 15:13:35 crc kubenswrapper[4806]: I1125 15:13:35.803835 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"fc946fac-46fb-45c0-8a69-2e481bf9d947","Type":"ContainerDied","Data":"06c7add753d9656b03ddf3ef2ecefdf3fe27cff4663650a99bae5b9716daa2a4"}
Nov 25 15:13:36 crc kubenswrapper[4806]: I1125 15:13:36.063681 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Nov 25 15:13:36 crc kubenswrapper[4806]: W1125 15:13:36.075437 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb15262a_cd0a_45e1_b1c4_9d5221f2e707.slice/crio-d08381b3b62675f076134f80f45647c66395f527bb10c2dc55079b7aa70a1112 WatchSource:0}: Error finding container d08381b3b62675f076134f80f45647c66395f527bb10c2dc55079b7aa70a1112: Status 404 returned error can't find the container with id d08381b3b62675f076134f80f45647c66395f527bb10c2dc55079b7aa70a1112
Nov 25 15:13:36 crc kubenswrapper[4806]: I1125 15:13:36.816145 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"fc946fac-46fb-45c0-8a69-2e481bf9d947","Type":"ContainerStarted","Data":"fc349dbd743c2a819cd4b80c5b2de40736cf05679ffb647e888c95644d3e39c2"}
Nov 25 15:13:36 crc kubenswrapper[4806]: I1125 15:13:36.819294 4806 generic.go:334] "Generic (PLEG): container finished" podID="01548134-90ee-4d44-ab5e-60a0933ee1ea" containerID="c474c7b47d58100702d7c63f63d32548b20df2d884ef8a139b51efe4f42cbe75" exitCode=0
Nov 25 15:13:36 crc kubenswrapper[4806]: I1125 15:13:36.819396 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"01548134-90ee-4d44-ab5e-60a0933ee1ea","Type":"ContainerDied","Data":"c474c7b47d58100702d7c63f63d32548b20df2d884ef8a139b51efe4f42cbe75"}
Nov 25 15:13:36 crc kubenswrapper[4806]: I1125 15:13:36.821567 4806 generic.go:334] "Generic (PLEG): container finished" podID="3a870706-cfbf-4cea-a993-238c06b56be3" containerID="afc6fff0a942fe1bb1cb0171f70cb77948f90e51b5a640d92359a2571e531419" exitCode=0
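Each "Generic (PLEG): container finished ... exitCode=0" line above is paired with a ContainerDied sync-loop event carrying the same container ID; for the galera, prometheus, and dnsmasq pods these are init or setup containers exiting cleanly before the main container's ContainerStarted arrives. The event payload is printed as JSON, so it can be pulled apart mechanically. A sketch under the assumption that the klog quoting is exactly as shown in these entries:

import json
import re

# Matches the SyncLoop PLEG lines above, e.g.
#   ... "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0"
#       event={"ID":"fc94...","Type":"ContainerDied","Data":"06c7..."}
EVENT_RE = re.compile(r'"SyncLoop \(PLEG\): event for pod" pod="([^"]+)" event=(\{[^}]*\})')

def pleg_events(lines):
    """Yield (pod, event type, container/sandbox ID) per PLEG sync-loop line."""
    for line in lines:
        m = EVENT_RE.search(line)
        if m:
            event = json.loads(m.group(2))
            yield m.group(1), event["Type"], event["Data"]

Fed this stretch of the journal, it would emit ContainerDied/ContainerStarted pairs per pod in arrival order, which is often enough to spot a crash-looping container without any cluster access.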
Nov 25 15:13:36 crc kubenswrapper[4806]: I1125 15:13:36.821629 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-q6dxp" event={"ID":"3a870706-cfbf-4cea-a993-238c06b56be3","Type":"ContainerDied","Data":"afc6fff0a942fe1bb1cb0171f70cb77948f90e51b5a640d92359a2571e531419"}
Nov 25 15:13:36 crc kubenswrapper[4806]: I1125 15:13:36.823786 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"fb15262a-cd0a-45e1-b1c4-9d5221f2e707","Type":"ContainerStarted","Data":"d08381b3b62675f076134f80f45647c66395f527bb10c2dc55079b7aa70a1112"}
Nov 25 15:13:36 crc kubenswrapper[4806]: I1125 15:13:36.826785 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-l6mv2" event={"ID":"c90d07c6-4f04-48d1-ae1f-bb15f60ba44b","Type":"ContainerStarted","Data":"1e1a811d35df962e02a676c519d1df769c6094552e7fd878dcb97dc533bd070f"}
Nov 25 15:13:36 crc kubenswrapper[4806]: I1125 15:13:36.827019 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-l6mv2"
Nov 25 15:13:36 crc kubenswrapper[4806]: I1125 15:13:36.828931 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0c667706-daaf-4283-9ebb-64bae95b4914","Type":"ContainerStarted","Data":"4702506c8433b461cdf915969da0a143bd9b02aa420aebd02bdd85f865464de8"}
Nov 25 15:13:36 crc kubenswrapper[4806]: I1125 15:13:36.830959 4806 generic.go:334] "Generic (PLEG): container finished" podID="78bdea31-bfb2-4f3f-b1ff-fb246b432b84" containerID="2fe81ae0acafe634e0495f81ec6b88e2923839c13bbf438f49e970e0ff30382c" exitCode=0
Nov 25 15:13:36 crc kubenswrapper[4806]: I1125 15:13:36.831006 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-9wwsx" event={"ID":"78bdea31-bfb2-4f3f-b1ff-fb246b432b84","Type":"ContainerDied","Data":"2fe81ae0acafe634e0495f81ec6b88e2923839c13bbf438f49e970e0ff30382c"}
Nov 25 15:13:36 crc kubenswrapper[4806]: I1125 15:13:36.841884 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=12.280647055 podStartE2EDuration="57.841868171s" podCreationTimestamp="2025-11-25 15:12:39 +0000 UTC" firstStartedPulling="2025-11-25 15:12:41.529274353 +0000 UTC m=+1194.181416764" lastFinishedPulling="2025-11-25 15:13:27.090495469 +0000 UTC m=+1239.742637880" observedRunningTime="2025-11-25 15:13:36.836466619 +0000 UTC m=+1249.488609040" watchObservedRunningTime="2025-11-25 15:13:36.841868171 +0000 UTC m=+1249.494010582"
Nov 25 15:13:36 crc kubenswrapper[4806]: I1125 15:13:36.853779 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-l6mv2" podStartSLOduration=5.508524788 podStartE2EDuration="48.853762866s" podCreationTimestamp="2025-11-25 15:12:48 +0000 UTC" firstStartedPulling="2025-11-25 15:12:52.427445481 +0000 UTC m=+1205.079587892" lastFinishedPulling="2025-11-25 15:13:35.772683559 +0000 UTC m=+1248.424825970" observedRunningTime="2025-11-25 15:13:36.853008035 +0000 UTC m=+1249.505150456" watchObservedRunningTime="2025-11-25 15:13:36.853762866 +0000 UTC m=+1249.505905277"
Nov 25 15:13:36 crc kubenswrapper[4806]: I1125 15:13:36.939072 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=13.370675789 podStartE2EDuration="56.939054466s" podCreationTimestamp="2025-11-25 15:12:40 +0000 UTC" firstStartedPulling="2025-11-25 15:12:43.492577401 +0000 UTC m=+1196.144719812" lastFinishedPulling="2025-11-25 15:13:27.060956078 +0000 UTC m=+1239.713098489" observedRunningTime="2025-11-25 15:13:36.937687847 +0000 UTC m=+1249.589830278" watchObservedRunningTime="2025-11-25 15:13:36.939054466 +0000 UTC m=+1249.591196877"
Nov 25 15:13:37 crc kubenswrapper[4806]: I1125 15:13:37.733948 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Nov 25 15:13:37 crc kubenswrapper[4806]: I1125 15:13:37.845931 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-q6dxp" event={"ID":"3a870706-cfbf-4cea-a993-238c06b56be3","Type":"ContainerStarted","Data":"7a74ceb9310e1d598ca1b90a7fb824ce2b93f142b1fe8a40b51b60e76a87f05a"}
Nov 25 15:13:37 crc kubenswrapper[4806]: I1125 15:13:37.846077 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-q6dxp"
Nov 25 15:13:37 crc kubenswrapper[4806]: I1125 15:13:37.848953 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"fb15262a-cd0a-45e1-b1c4-9d5221f2e707","Type":"ContainerStarted","Data":"2671102f0ae3ce627d1b12738b56eeb56c156619f8d881142c716b43423eb1d2"}
Nov 25 15:13:37 crc kubenswrapper[4806]: I1125 15:13:37.849029 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"fb15262a-cd0a-45e1-b1c4-9d5221f2e707","Type":"ContainerStarted","Data":"81c724b0b3aefa2b3bcdb4139d8d1587beea09b6328147b7ef7e75a8240d60c9"}
Nov 25 15:13:37 crc kubenswrapper[4806]: I1125 15:13:37.849668 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Nov 25 15:13:37 crc kubenswrapper[4806]: I1125 15:13:37.852835 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-9wwsx" event={"ID":"78bdea31-bfb2-4f3f-b1ff-fb246b432b84","Type":"ContainerStarted","Data":"4b6cce21d6f747655d917887ab2e5b003d1d6a4d4a9860af8ca1d4e0b544eab8"}
Nov 25 15:13:37 crc kubenswrapper[4806]: I1125 15:13:37.908423 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-q6dxp" podStartSLOduration=3.454821109 podStartE2EDuration="6.908403628s" podCreationTimestamp="2025-11-25 15:13:31 +0000 UTC" firstStartedPulling="2025-11-25 15:13:32.119767521 +0000 UTC m=+1244.771909932" lastFinishedPulling="2025-11-25 15:13:35.57335004 +0000 UTC m=+1248.225492451" observedRunningTime="2025-11-25 15:13:37.862429475 +0000 UTC m=+1250.514571886" watchObservedRunningTime="2025-11-25 15:13:37.908403628 +0000 UTC m=+1250.560546039"
Nov 25 15:13:37 crc kubenswrapper[4806]: I1125 15:13:37.910485 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-9wwsx" podStartSLOduration=3.558691262 podStartE2EDuration="6.910471677s" podCreationTimestamp="2025-11-25 15:13:31 +0000 UTC" firstStartedPulling="2025-11-25 15:13:32.282652504 +0000 UTC m=+1244.934794915" lastFinishedPulling="2025-11-25 15:13:35.634432899 +0000 UTC m=+1248.286575330" observedRunningTime="2025-11-25 15:13:37.891548144 +0000 UTC m=+1250.543690565" watchObservedRunningTime="2025-11-25 15:13:37.910471677 +0000 UTC m=+1250.562614088"
Nov 25 15:13:37 crc kubenswrapper[4806]: I1125 15:13:37.928589 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.89905825 podStartE2EDuration="2.928572146s" podCreationTimestamp="2025-11-25 15:13:35 +0000 UTC" firstStartedPulling="2025-11-25 15:13:36.081051265 +0000 UTC m=+1248.733193676" lastFinishedPulling="2025-11-25 15:13:37.110565161 +0000 UTC m=+1249.762707572" observedRunningTime="2025-11-25 15:13:37.914486329 +0000 UTC m=+1250.566628760" watchObservedRunningTime="2025-11-25 15:13:37.928572146 +0000 UTC m=+1250.580714557"
Nov 25 15:13:38 crc kubenswrapper[4806]: I1125 15:13:38.861618 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-9wwsx"
Nov 25 15:13:39 crc kubenswrapper[4806]: I1125 15:13:39.885391 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"82ed644a-fbd9-4ccc-a348-37293a1795f5","Type":"ContainerStarted","Data":"c71dfac36eb41d6f52a8fe644bb16edd9c9bd5f2b3c42519ee9d3d30335f8361"}
Nov 25 15:13:40 crc kubenswrapper[4806]: I1125 15:13:40.869751 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Nov 25 15:13:40 crc kubenswrapper[4806]: I1125 15:13:40.870092 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Nov 25 15:13:41 crc kubenswrapper[4806]: I1125 15:13:41.136231 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Nov 25 15:13:41 crc kubenswrapper[4806]: I1125 15:13:41.235266 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Nov 25 15:13:41 crc kubenswrapper[4806]: I1125 15:13:41.910476 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"82ed644a-fbd9-4ccc-a348-37293a1795f5","Type":"ContainerStarted","Data":"30c8841c3c1b68899ee474a38dcacf120d073c67d22b85132c58c162a652a95c"}
Nov 25 15:13:41 crc kubenswrapper[4806]: I1125 15:13:41.937338 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=10.031528478 podStartE2EDuration="56.937296394s" podCreationTimestamp="2025-11-25 15:12:45 +0000 UTC" firstStartedPulling="2025-11-25 15:12:52.109436343 +0000 UTC m=+1204.761578754" lastFinishedPulling="2025-11-25 15:13:39.015204249 +0000 UTC m=+1251.667346670" observedRunningTime="2025-11-25 15:13:41.933482617 +0000 UTC m=+1254.585625038" watchObservedRunningTime="2025-11-25 15:13:41.937296394 +0000 UTC m=+1254.589438805"
Nov 25 15:13:42 crc kubenswrapper[4806]: I1125 15:13:42.426038 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-18c6-account-create-cchzq"]
Nov 25 15:13:42 crc kubenswrapper[4806]: I1125 15:13:42.427773 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-18c6-account-create-cchzq"
Nov 25 15:13:42 crc kubenswrapper[4806]: I1125 15:13:42.430939 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Nov 25 15:13:42 crc kubenswrapper[4806]: I1125 15:13:42.439637 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-18c6-account-create-cchzq"]
Nov 25 15:13:42 crc kubenswrapper[4806]: I1125 15:13:42.480385 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-kqrd2"]
Nov 25 15:13:42 crc kubenswrapper[4806]: I1125 15:13:42.482021 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-kqrd2"
Nov 25 15:13:42 crc kubenswrapper[4806]: I1125 15:13:42.488017 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-kqrd2"]
Nov 25 15:13:42 crc kubenswrapper[4806]: I1125 15:13:42.520866 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5df1cd59-5e8a-49c9-af33-4547720713f0-operator-scripts\") pod \"keystone-18c6-account-create-cchzq\" (UID: \"5df1cd59-5e8a-49c9-af33-4547720713f0\") " pod="openstack/keystone-18c6-account-create-cchzq"
Nov 25 15:13:42 crc kubenswrapper[4806]: I1125 15:13:42.521130 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tscx\" (UniqueName: \"kubernetes.io/projected/5df1cd59-5e8a-49c9-af33-4547720713f0-kube-api-access-6tscx\") pod \"keystone-18c6-account-create-cchzq\" (UID: \"5df1cd59-5e8a-49c9-af33-4547720713f0\") " pod="openstack/keystone-18c6-account-create-cchzq"
Nov 25 15:13:42 crc kubenswrapper[4806]: I1125 15:13:42.607344 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Nov 25 15:13:42 crc kubenswrapper[4806]: I1125 15:13:42.607405 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Nov 25 15:13:42 crc kubenswrapper[4806]: I1125 15:13:42.611154 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-qgsv9"]
Nov 25 15:13:42 crc kubenswrapper[4806]: I1125 15:13:42.612431 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-qgsv9"
Nov 25 15:13:42 crc kubenswrapper[4806]: I1125 15:13:42.622859 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce1e02da-f4bb-4165-b4fc-cf65955994ae-operator-scripts\") pod \"keystone-db-create-kqrd2\" (UID: \"ce1e02da-f4bb-4165-b4fc-cf65955994ae\") " pod="openstack/keystone-db-create-kqrd2"
Nov 25 15:13:42 crc kubenswrapper[4806]: I1125 15:13:42.623001 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5df1cd59-5e8a-49c9-af33-4547720713f0-operator-scripts\") pod \"keystone-18c6-account-create-cchzq\" (UID: \"5df1cd59-5e8a-49c9-af33-4547720713f0\") " pod="openstack/keystone-18c6-account-create-cchzq"
Nov 25 15:13:42 crc kubenswrapper[4806]: I1125 15:13:42.623035 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx2cd\" (UniqueName: \"kubernetes.io/projected/ce1e02da-f4bb-4165-b4fc-cf65955994ae-kube-api-access-hx2cd\") pod \"keystone-db-create-kqrd2\" (UID: \"ce1e02da-f4bb-4165-b4fc-cf65955994ae\") " pod="openstack/keystone-db-create-kqrd2"
Nov 25 15:13:42 crc kubenswrapper[4806]: I1125 15:13:42.623141 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tscx\" (UniqueName: \"kubernetes.io/projected/5df1cd59-5e8a-49c9-af33-4547720713f0-kube-api-access-6tscx\") pod \"keystone-18c6-account-create-cchzq\" (UID: \"5df1cd59-5e8a-49c9-af33-4547720713f0\") " pod="openstack/keystone-18c6-account-create-cchzq"
Nov 25 15:13:42 crc kubenswrapper[4806]: I1125 15:13:42.624578 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5df1cd59-5e8a-49c9-af33-4547720713f0-operator-scripts\") pod \"keystone-18c6-account-create-cchzq\" (UID: \"5df1cd59-5e8a-49c9-af33-4547720713f0\") " pod="openstack/keystone-18c6-account-create-cchzq"
Nov 25 15:13:42 crc kubenswrapper[4806]: I1125 15:13:42.643203 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-qgsv9"]
Nov 25 15:13:42 crc kubenswrapper[4806]: I1125 15:13:42.656010 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tscx\" (UniqueName: \"kubernetes.io/projected/5df1cd59-5e8a-49c9-af33-4547720713f0-kube-api-access-6tscx\") pod \"keystone-18c6-account-create-cchzq\" (UID: \"5df1cd59-5e8a-49c9-af33-4547720713f0\") " pod="openstack/keystone-18c6-account-create-cchzq"
Nov 25 15:13:42 crc kubenswrapper[4806]: I1125 15:13:42.724493 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hx2cd\" (UniqueName: \"kubernetes.io/projected/ce1e02da-f4bb-4165-b4fc-cf65955994ae-kube-api-access-hx2cd\") pod \"keystone-db-create-kqrd2\" (UID: \"ce1e02da-f4bb-4165-b4fc-cf65955994ae\") " pod="openstack/keystone-db-create-kqrd2"
Nov 25 15:13:42 crc kubenswrapper[4806]: I1125 15:13:42.724581 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59f31c89-0010-494d-a1d5-2db4958b10d6-operator-scripts\") pod \"placement-db-create-qgsv9\" (UID: \"59f31c89-0010-494d-a1d5-2db4958b10d6\") " pod="openstack/placement-db-create-qgsv9"
Nov 25 15:13:42 crc kubenswrapper[4806]: I1125 15:13:42.724717 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwr75\" (UniqueName: \"kubernetes.io/projected/59f31c89-0010-494d-a1d5-2db4958b10d6-kube-api-access-mwr75\") pod \"placement-db-create-qgsv9\" (UID: \"59f31c89-0010-494d-a1d5-2db4958b10d6\") " pod="openstack/placement-db-create-qgsv9"
Nov 25 15:13:42 crc kubenswrapper[4806]: I1125 15:13:42.724771 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce1e02da-f4bb-4165-b4fc-cf65955994ae-operator-scripts\") pod \"keystone-db-create-kqrd2\" (UID: \"ce1e02da-f4bb-4165-b4fc-cf65955994ae\") " pod="openstack/keystone-db-create-kqrd2"
Nov 25 15:13:42 crc kubenswrapper[4806]: I1125 15:13:42.730166 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce1e02da-f4bb-4165-b4fc-cf65955994ae-operator-scripts\") pod \"keystone-db-create-kqrd2\" (UID: \"ce1e02da-f4bb-4165-b4fc-cf65955994ae\") " pod="openstack/keystone-db-create-kqrd2"
Nov 25 15:13:42 crc kubenswrapper[4806]: I1125 15:13:42.749230 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-18c6-account-create-cchzq"
Nov 25 15:13:42 crc kubenswrapper[4806]: I1125 15:13:42.750109 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx2cd\" (UniqueName: \"kubernetes.io/projected/ce1e02da-f4bb-4165-b4fc-cf65955994ae-kube-api-access-hx2cd\") pod \"keystone-db-create-kqrd2\" (UID: \"ce1e02da-f4bb-4165-b4fc-cf65955994ae\") " pod="openstack/keystone-db-create-kqrd2"
Nov 25 15:13:42 crc kubenswrapper[4806]: I1125 15:13:42.755461 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-d2f7-account-create-6rgcw"]
Nov 25 15:13:42 crc kubenswrapper[4806]: I1125 15:13:42.757048 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d2f7-account-create-6rgcw"
Nov 25 15:13:42 crc kubenswrapper[4806]: I1125 15:13:42.759679 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Nov 25 15:13:42 crc kubenswrapper[4806]: I1125 15:13:42.765869 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d2f7-account-create-6rgcw"]
Nov 25 15:13:42 crc kubenswrapper[4806]: I1125 15:13:42.804430 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-kqrd2"
Nov 25 15:13:42 crc kubenswrapper[4806]: I1125 15:13:42.826378 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94b13266-e80b-4462-b7fa-04b5043e53e1-operator-scripts\") pod \"placement-d2f7-account-create-6rgcw\" (UID: \"94b13266-e80b-4462-b7fa-04b5043e53e1\") " pod="openstack/placement-d2f7-account-create-6rgcw"
Nov 25 15:13:42 crc kubenswrapper[4806]: I1125 15:13:42.826450 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwr75\" (UniqueName: \"kubernetes.io/projected/59f31c89-0010-494d-a1d5-2db4958b10d6-kube-api-access-mwr75\") pod \"placement-db-create-qgsv9\" (UID: \"59f31c89-0010-494d-a1d5-2db4958b10d6\") " pod="openstack/placement-db-create-qgsv9"
Nov 25 15:13:42 crc kubenswrapper[4806]: I1125 15:13:42.826486 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h54gz\" (UniqueName: \"kubernetes.io/projected/94b13266-e80b-4462-b7fa-04b5043e53e1-kube-api-access-h54gz\") pod \"placement-d2f7-account-create-6rgcw\" (UID: \"94b13266-e80b-4462-b7fa-04b5043e53e1\") " pod="openstack/placement-d2f7-account-create-6rgcw"
Nov 25 15:13:42 crc kubenswrapper[4806]: I1125 15:13:42.826594 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59f31c89-0010-494d-a1d5-2db4958b10d6-operator-scripts\") pod \"placement-db-create-qgsv9\" (UID: \"59f31c89-0010-494d-a1d5-2db4958b10d6\") " pod="openstack/placement-db-create-qgsv9"
Nov 25 15:13:42 crc kubenswrapper[4806]: I1125 15:13:42.827898 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59f31c89-0010-494d-a1d5-2db4958b10d6-operator-scripts\") pod \"placement-db-create-qgsv9\" (UID: \"59f31c89-0010-494d-a1d5-2db4958b10d6\") " pod="openstack/placement-db-create-qgsv9"
Nov 25 15:13:42 crc kubenswrapper[4806]: I1125 15:13:42.845953 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwr75\" (UniqueName: \"kubernetes.io/projected/59f31c89-0010-494d-a1d5-2db4958b10d6-kube-api-access-mwr75\") pod \"placement-db-create-qgsv9\" (UID: \"59f31c89-0010-494d-a1d5-2db4958b10d6\") " pod="openstack/placement-db-create-qgsv9"
Nov 25 15:13:42 crc kubenswrapper[4806]: I1125 15:13:42.922475 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0"
Nov 25 15:13:42 crc kubenswrapper[4806]: I1125 15:13:42.928545 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h54gz\" (UniqueName: \"kubernetes.io/projected/94b13266-e80b-4462-b7fa-04b5043e53e1-kube-api-access-h54gz\") pod \"placement-d2f7-account-create-6rgcw\" (UID: \"94b13266-e80b-4462-b7fa-04b5043e53e1\") " pod="openstack/placement-d2f7-account-create-6rgcw"
Nov 25 15:13:42 crc kubenswrapper[4806]: I1125 15:13:42.928718 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94b13266-e80b-4462-b7fa-04b5043e53e1-operator-scripts\") pod \"placement-d2f7-account-create-6rgcw\" (UID: \"94b13266-e80b-4462-b7fa-04b5043e53e1\") " pod="openstack/placement-d2f7-account-create-6rgcw"
Nov 25 15:13:42 crc kubenswrapper[4806]: I1125 15:13:42.929125 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0"
Nov 25 15:13:42 crc kubenswrapper[4806]: I1125 15:13:42.929576 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94b13266-e80b-4462-b7fa-04b5043e53e1-operator-scripts\") pod \"placement-d2f7-account-create-6rgcw\" (UID: \"94b13266-e80b-4462-b7fa-04b5043e53e1\") " pod="openstack/placement-d2f7-account-create-6rgcw"
Nov 25 15:13:42 crc kubenswrapper[4806]: I1125 15:13:42.937346 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-qgsv9"
Nov 25 15:13:42 crc kubenswrapper[4806]: I1125 15:13:42.947657 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h54gz\" (UniqueName: \"kubernetes.io/projected/94b13266-e80b-4462-b7fa-04b5043e53e1-kube-api-access-h54gz\") pod \"placement-d2f7-account-create-6rgcw\" (UID: \"94b13266-e80b-4462-b7fa-04b5043e53e1\") " pod="openstack/placement-d2f7-account-create-6rgcw"
Nov 25 15:13:43 crc kubenswrapper[4806]: I1125 15:13:43.111078 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d2f7-account-create-6rgcw"
Nov 25 15:13:43 crc kubenswrapper[4806]: I1125 15:13:43.224111 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Nov 25 15:13:43 crc kubenswrapper[4806]: I1125 15:13:43.335799 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Nov 25 15:13:45 crc kubenswrapper[4806]: I1125 15:13:45.038625 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-q6dxp"]
Nov 25 15:13:45 crc kubenswrapper[4806]: I1125 15:13:45.039432 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-q6dxp" podUID="3a870706-cfbf-4cea-a993-238c06b56be3" containerName="dnsmasq-dns" containerID="cri-o://7a74ceb9310e1d598ca1b90a7fb824ce2b93f142b1fe8a40b51b60e76a87f05a" gracePeriod=10
Nov 25 15:13:45 crc kubenswrapper[4806]: I1125 15:13:45.047049 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-q6dxp"
Nov 25 15:13:45 crc kubenswrapper[4806]: I1125 15:13:45.070278 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-pxfdb"]
Nov 25 15:13:45 crc kubenswrapper[4806]: I1125 15:13:45.071967 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-pxfdb"
Nov 25 15:13:45 crc kubenswrapper[4806]: I1125 15:13:45.083000 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-pxfdb"]
Nov 25 15:13:45 crc kubenswrapper[4806]: I1125 15:13:45.161533 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-qgsv9"]
Nov 25 15:13:45 crc kubenswrapper[4806]: I1125 15:13:45.185851 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6js4\" (UniqueName: \"kubernetes.io/projected/291eadf5-e50c-453d-aaf5-5fe457dae267-kube-api-access-t6js4\") pod \"dnsmasq-dns-698758b865-pxfdb\" (UID: \"291eadf5-e50c-453d-aaf5-5fe457dae267\") " pod="openstack/dnsmasq-dns-698758b865-pxfdb"
Nov 25 15:13:45 crc kubenswrapper[4806]: I1125 15:13:45.185952 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/291eadf5-e50c-453d-aaf5-5fe457dae267-config\") pod \"dnsmasq-dns-698758b865-pxfdb\" (UID: \"291eadf5-e50c-453d-aaf5-5fe457dae267\") " pod="openstack/dnsmasq-dns-698758b865-pxfdb"
Nov 25 15:13:45 crc kubenswrapper[4806]: I1125 15:13:45.186029 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/291eadf5-e50c-453d-aaf5-5fe457dae267-dns-svc\") pod \"dnsmasq-dns-698758b865-pxfdb\" (UID: \"291eadf5-e50c-453d-aaf5-5fe457dae267\") " pod="openstack/dnsmasq-dns-698758b865-pxfdb"
Nov 25 15:13:45 crc kubenswrapper[4806]: I1125 15:13:45.186068 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/291eadf5-e50c-453d-aaf5-5fe457dae267-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-pxfdb\" (UID: \"291eadf5-e50c-453d-aaf5-5fe457dae267\") " pod="openstack/dnsmasq-dns-698758b865-pxfdb"
Nov 25 15:13:45 crc kubenswrapper[4806]: I1125 15:13:45.186401 4806 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/291eadf5-e50c-453d-aaf5-5fe457dae267-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-pxfdb\" (UID: \"291eadf5-e50c-453d-aaf5-5fe457dae267\") " pod="openstack/dnsmasq-dns-698758b865-pxfdb" Nov 25 15:13:45 crc kubenswrapper[4806]: I1125 15:13:45.292822 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/291eadf5-e50c-453d-aaf5-5fe457dae267-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-pxfdb\" (UID: \"291eadf5-e50c-453d-aaf5-5fe457dae267\") " pod="openstack/dnsmasq-dns-698758b865-pxfdb" Nov 25 15:13:45 crc kubenswrapper[4806]: I1125 15:13:45.293721 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6js4\" (UniqueName: \"kubernetes.io/projected/291eadf5-e50c-453d-aaf5-5fe457dae267-kube-api-access-t6js4\") pod \"dnsmasq-dns-698758b865-pxfdb\" (UID: \"291eadf5-e50c-453d-aaf5-5fe457dae267\") " pod="openstack/dnsmasq-dns-698758b865-pxfdb" Nov 25 15:13:45 crc kubenswrapper[4806]: I1125 15:13:45.293786 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/291eadf5-e50c-453d-aaf5-5fe457dae267-config\") pod \"dnsmasq-dns-698758b865-pxfdb\" (UID: \"291eadf5-e50c-453d-aaf5-5fe457dae267\") " pod="openstack/dnsmasq-dns-698758b865-pxfdb" Nov 25 15:13:45 crc kubenswrapper[4806]: I1125 15:13:45.293878 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/291eadf5-e50c-453d-aaf5-5fe457dae267-dns-svc\") pod \"dnsmasq-dns-698758b865-pxfdb\" (UID: \"291eadf5-e50c-453d-aaf5-5fe457dae267\") " pod="openstack/dnsmasq-dns-698758b865-pxfdb" Nov 25 15:13:45 crc kubenswrapper[4806]: I1125 15:13:45.293908 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/291eadf5-e50c-453d-aaf5-5fe457dae267-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-pxfdb\" (UID: \"291eadf5-e50c-453d-aaf5-5fe457dae267\") " pod="openstack/dnsmasq-dns-698758b865-pxfdb" Nov 25 15:13:45 crc kubenswrapper[4806]: I1125 15:13:45.295027 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/291eadf5-e50c-453d-aaf5-5fe457dae267-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-pxfdb\" (UID: \"291eadf5-e50c-453d-aaf5-5fe457dae267\") " pod="openstack/dnsmasq-dns-698758b865-pxfdb" Nov 25 15:13:45 crc kubenswrapper[4806]: I1125 15:13:45.295721 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/291eadf5-e50c-453d-aaf5-5fe457dae267-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-pxfdb\" (UID: \"291eadf5-e50c-453d-aaf5-5fe457dae267\") " pod="openstack/dnsmasq-dns-698758b865-pxfdb" Nov 25 15:13:45 crc kubenswrapper[4806]: I1125 15:13:45.296626 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/291eadf5-e50c-453d-aaf5-5fe457dae267-config\") pod \"dnsmasq-dns-698758b865-pxfdb\" (UID: \"291eadf5-e50c-453d-aaf5-5fe457dae267\") " pod="openstack/dnsmasq-dns-698758b865-pxfdb" Nov 25 15:13:45 crc kubenswrapper[4806]: I1125 15:13:45.297283 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/291eadf5-e50c-453d-aaf5-5fe457dae267-dns-svc\") pod \"dnsmasq-dns-698758b865-pxfdb\" (UID: \"291eadf5-e50c-453d-aaf5-5fe457dae267\") " pod="openstack/dnsmasq-dns-698758b865-pxfdb" Nov 25 15:13:45 crc kubenswrapper[4806]: I1125 15:13:45.311557 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-kqrd2"] Nov 25 15:13:45 crc kubenswrapper[4806]: I1125 15:13:45.321838 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6js4\" (UniqueName: \"kubernetes.io/projected/291eadf5-e50c-453d-aaf5-5fe457dae267-kube-api-access-t6js4\") pod \"dnsmasq-dns-698758b865-pxfdb\" (UID: \"291eadf5-e50c-453d-aaf5-5fe457dae267\") " pod="openstack/dnsmasq-dns-698758b865-pxfdb" Nov 25 15:13:45 crc kubenswrapper[4806]: I1125 15:13:45.412254 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-pxfdb" Nov 25 15:13:45 crc kubenswrapper[4806]: I1125 15:13:45.467440 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d2f7-account-create-6rgcw"] Nov 25 15:13:45 crc kubenswrapper[4806]: I1125 15:13:45.479062 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-18c6-account-create-cchzq"] Nov 25 15:13:45 crc kubenswrapper[4806]: I1125 15:13:45.952853 4806 generic.go:334] "Generic (PLEG): container finished" podID="3a870706-cfbf-4cea-a993-238c06b56be3" containerID="7a74ceb9310e1d598ca1b90a7fb824ce2b93f142b1fe8a40b51b60e76a87f05a" exitCode=0 Nov 25 15:13:45 crc kubenswrapper[4806]: I1125 15:13:45.952878 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-q6dxp" event={"ID":"3a870706-cfbf-4cea-a993-238c06b56be3","Type":"ContainerDied","Data":"7a74ceb9310e1d598ca1b90a7fb824ce2b93f142b1fe8a40b51b60e76a87f05a"} Nov 25 15:13:45 crc kubenswrapper[4806]: I1125 15:13:45.955770 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-pxfdb"] Nov 25 15:13:45 crc kubenswrapper[4806]: I1125 15:13:45.958977 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d2f7-account-create-6rgcw" event={"ID":"94b13266-e80b-4462-b7fa-04b5043e53e1","Type":"ContainerStarted","Data":"7dd6b5cd5f55ebd9a80ea781b536b148995d40ed2e05df5588478761a5554679"} Nov 25 15:13:45 crc kubenswrapper[4806]: I1125 15:13:45.959041 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d2f7-account-create-6rgcw" event={"ID":"94b13266-e80b-4462-b7fa-04b5043e53e1","Type":"ContainerStarted","Data":"cc122d0e669149c56dfbf4e1f2781f105416d3f3f0c14855fff036ab6a7bce78"} Nov 25 15:13:45 crc kubenswrapper[4806]: I1125 15:13:45.962255 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-kqrd2" event={"ID":"ce1e02da-f4bb-4165-b4fc-cf65955994ae","Type":"ContainerStarted","Data":"1e2153b2ab05f8e43c1b85e49aaebc818222fcd6e34dce130b6c25633846fc60"} Nov 25 15:13:45 crc kubenswrapper[4806]: I1125 15:13:45.962300 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-kqrd2" event={"ID":"ce1e02da-f4bb-4165-b4fc-cf65955994ae","Type":"ContainerStarted","Data":"6a48c13ebf5b77b7b6c28518b050519169bf64811848751baad7f3fcb4622477"} Nov 25 15:13:45 crc kubenswrapper[4806]: I1125 15:13:45.964811 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-18c6-account-create-cchzq" 
event={"ID":"5df1cd59-5e8a-49c9-af33-4547720713f0","Type":"ContainerStarted","Data":"947649f363aa13ff28038b734854fcca4bbe2dca64bfd8d62afca5a1df53eb31"} Nov 25 15:13:45 crc kubenswrapper[4806]: I1125 15:13:45.964843 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-18c6-account-create-cchzq" event={"ID":"5df1cd59-5e8a-49c9-af33-4547720713f0","Type":"ContainerStarted","Data":"9a1aa5ad21f2505075dc6fbea2ae99dcc5f41e5a3aa888aad805efcdbe1ce8d6"} Nov 25 15:13:45 crc kubenswrapper[4806]: I1125 15:13:45.967609 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-qgsv9" event={"ID":"59f31c89-0010-494d-a1d5-2db4958b10d6","Type":"ContainerStarted","Data":"f6cebcfc304fe6aec46892612797c6e415e5ff5ea49135e94c17e8ba009af731"} Nov 25 15:13:45 crc kubenswrapper[4806]: I1125 15:13:45.967638 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-qgsv9" event={"ID":"59f31c89-0010-494d-a1d5-2db4958b10d6","Type":"ContainerStarted","Data":"cacfef80f060ac40d205ae6eecbd0e2fe8a8a585d8a829c336bbe75a7eb9aec2"} Nov 25 15:13:45 crc kubenswrapper[4806]: I1125 15:13:45.983869 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-d2f7-account-create-6rgcw" podStartSLOduration=3.983850797 podStartE2EDuration="3.983850797s" podCreationTimestamp="2025-11-25 15:13:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:13:45.979240407 +0000 UTC m=+1258.631382828" watchObservedRunningTime="2025-11-25 15:13:45.983850797 +0000 UTC m=+1258.635993208" Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.005965 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-qgsv9" podStartSLOduration=4.005942548 podStartE2EDuration="4.005942548s" podCreationTimestamp="2025-11-25 15:13:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:13:46.005005962 +0000 UTC m=+1258.657148383" watchObservedRunningTime="2025-11-25 15:13:46.005942548 +0000 UTC m=+1258.658084959" Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.025256 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-kqrd2" podStartSLOduration=4.025235601 podStartE2EDuration="4.025235601s" podCreationTimestamp="2025-11-25 15:13:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:13:46.024541452 +0000 UTC m=+1258.676683883" watchObservedRunningTime="2025-11-25 15:13:46.025235601 +0000 UTC m=+1258.677378012" Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.038265 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-18c6-account-create-cchzq" podStartSLOduration=4.038243957 podStartE2EDuration="4.038243957s" podCreationTimestamp="2025-11-25 15:13:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:13:46.03657324 +0000 UTC m=+1258.688715661" watchObservedRunningTime="2025-11-25 15:13:46.038243957 +0000 UTC m=+1258.690386388" Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.138162 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Nov 25 15:13:46 crc 
Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.212239 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.213005 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.217688 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.217953 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.218067 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.219040 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-rz9fn"
Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.234649 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-distributor-56cd74f89f-bs2h7"
Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.341490 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e95db56a-e7c3-4a07-8056-2fba7647bdb4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e95db56a-e7c3-4a07-8056-2fba7647bdb4\") pod \"swift-storage-0\" (UID: \"837cf2fb-8640-4ac3-ad91-84ff1dba54e6\") " pod="openstack/swift-storage-0"
Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.341592 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/837cf2fb-8640-4ac3-ad91-84ff1dba54e6-cache\") pod \"swift-storage-0\" (UID: \"837cf2fb-8640-4ac3-ad91-84ff1dba54e6\") " pod="openstack/swift-storage-0"
Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.341654 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/837cf2fb-8640-4ac3-ad91-84ff1dba54e6-lock\") pod \"swift-storage-0\" (UID: \"837cf2fb-8640-4ac3-ad91-84ff1dba54e6\") " pod="openstack/swift-storage-0"
Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.341739 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/837cf2fb-8640-4ac3-ad91-84ff1dba54e6-etc-swift\") pod \"swift-storage-0\" (UID: \"837cf2fb-8640-4ac3-ad91-84ff1dba54e6\") " pod="openstack/swift-storage-0"
Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.341769 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjgrf\" (UniqueName: \"kubernetes.io/projected/837cf2fb-8640-4ac3-ad91-84ff1dba54e6-kube-api-access-tjgrf\") pod \"swift-storage-0\" (UID: \"837cf2fb-8640-4ac3-ad91-84ff1dba54e6\") " pod="openstack/swift-storage-0"
Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.421544 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-wpqhp"]
Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.422824 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-wpqhp"
Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.430814 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.442882 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e95db56a-e7c3-4a07-8056-2fba7647bdb4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e95db56a-e7c3-4a07-8056-2fba7647bdb4\") pod \"swift-storage-0\" (UID: \"837cf2fb-8640-4ac3-ad91-84ff1dba54e6\") " pod="openstack/swift-storage-0"
Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.444514 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/837cf2fb-8640-4ac3-ad91-84ff1dba54e6-cache\") pod \"swift-storage-0\" (UID: \"837cf2fb-8640-4ac3-ad91-84ff1dba54e6\") " pod="openstack/swift-storage-0"
Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.444561 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/837cf2fb-8640-4ac3-ad91-84ff1dba54e6-lock\") pod \"swift-storage-0\" (UID: \"837cf2fb-8640-4ac3-ad91-84ff1dba54e6\") " pod="openstack/swift-storage-0"
Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.444636 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/837cf2fb-8640-4ac3-ad91-84ff1dba54e6-etc-swift\") pod \"swift-storage-0\" (UID: \"837cf2fb-8640-4ac3-ad91-84ff1dba54e6\") " pod="openstack/swift-storage-0"
Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.444662 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjgrf\" (UniqueName: \"kubernetes.io/projected/837cf2fb-8640-4ac3-ad91-84ff1dba54e6-kube-api-access-tjgrf\") pod \"swift-storage-0\" (UID: \"837cf2fb-8640-4ac3-ad91-84ff1dba54e6\") " pod="openstack/swift-storage-0"
Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.443033 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.443259 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.445472 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/837cf2fb-8640-4ac3-ad91-84ff1dba54e6-cache\") pod \"swift-storage-0\" (UID: \"837cf2fb-8640-4ac3-ad91-84ff1dba54e6\") " pod="openstack/swift-storage-0"
Nov 25 15:13:46 crc kubenswrapper[4806]: E1125 15:13:46.445507 4806 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Nov 25 15:13:46 crc kubenswrapper[4806]: E1125 15:13:46.445528 4806 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.445546 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/837cf2fb-8640-4ac3-ad91-84ff1dba54e6-lock\") pod \"swift-storage-0\" (UID: \"837cf2fb-8640-4ac3-ad91-84ff1dba54e6\") " pod="openstack/swift-storage-0"
Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.455786 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-wpqhp"]
Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.456078 4806 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.456119 4806 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e95db56a-e7c3-4a07-8056-2fba7647bdb4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e95db56a-e7c3-4a07-8056-2fba7647bdb4\") pod \"swift-storage-0\" (UID: \"837cf2fb-8640-4ac3-ad91-84ff1dba54e6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6d0a6805036219408ca9b60f64c67ecb882cfc357a3f91341a74af1cb187d521/globalmount\"" pod="openstack/swift-storage-0"
Nov 25 15:13:46 crc kubenswrapper[4806]: E1125 15:13:46.456820 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/837cf2fb-8640-4ac3-ad91-84ff1dba54e6-etc-swift podName:837cf2fb-8640-4ac3-ad91-84ff1dba54e6 nodeName:}" failed. No retries permitted until 2025-11-25 15:13:46.956791523 +0000 UTC m=+1259.608933934 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/837cf2fb-8640-4ac3-ad91-84ff1dba54e6-etc-swift") pod "swift-storage-0" (UID: "837cf2fb-8640-4ac3-ad91-84ff1dba54e6") : configmap "swift-ring-files" not found
Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.458746 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7fd796d7df-q6dxp" podUID="3a870706-cfbf-4cea-a993-238c06b56be3" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: connect: connection refused"
Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.472795 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-querier-548665d79b-vt8jx"
Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.473918 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjgrf\" (UniqueName: \"kubernetes.io/projected/837cf2fb-8640-4ac3-ad91-84ff1dba54e6-kube-api-access-tjgrf\") pod \"swift-storage-0\" (UID: \"837cf2fb-8640-4ac3-ad91-84ff1dba54e6\") " pod="openstack/swift-storage-0"
Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.551104 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/998fc00a-139c-4c9a-9765-a445527be5aa-etc-swift\") pod \"swift-ring-rebalance-wpqhp\" (UID: \"998fc00a-139c-4c9a-9765-a445527be5aa\") " pod="openstack/swift-ring-rebalance-wpqhp"
Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.551193 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnwfk\" (UniqueName: \"kubernetes.io/projected/998fc00a-139c-4c9a-9765-a445527be5aa-kube-api-access-gnwfk\") pod \"swift-ring-rebalance-wpqhp\" (UID: \"998fc00a-139c-4c9a-9765-a445527be5aa\") " pod="openstack/swift-ring-rebalance-wpqhp"
pod="openstack/swift-ring-rebalance-wpqhp" Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.553599 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/998fc00a-139c-4c9a-9765-a445527be5aa-scripts\") pod \"swift-ring-rebalance-wpqhp\" (UID: \"998fc00a-139c-4c9a-9765-a445527be5aa\") " pod="openstack/swift-ring-rebalance-wpqhp" Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.553635 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/998fc00a-139c-4c9a-9765-a445527be5aa-swiftconf\") pod \"swift-ring-rebalance-wpqhp\" (UID: \"998fc00a-139c-4c9a-9765-a445527be5aa\") " pod="openstack/swift-ring-rebalance-wpqhp" Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.553817 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/998fc00a-139c-4c9a-9765-a445527be5aa-combined-ca-bundle\") pod \"swift-ring-rebalance-wpqhp\" (UID: \"998fc00a-139c-4c9a-9765-a445527be5aa\") " pod="openstack/swift-ring-rebalance-wpqhp" Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.554351 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/998fc00a-139c-4c9a-9765-a445527be5aa-ring-data-devices\") pod \"swift-ring-rebalance-wpqhp\" (UID: \"998fc00a-139c-4c9a-9765-a445527be5aa\") " pod="openstack/swift-ring-rebalance-wpqhp" Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.569639 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e95db56a-e7c3-4a07-8056-2fba7647bdb4\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e95db56a-e7c3-4a07-8056-2fba7647bdb4\") pod \"swift-storage-0\" (UID: \"837cf2fb-8640-4ac3-ad91-84ff1dba54e6\") " pod="openstack/swift-storage-0" Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.655983 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/998fc00a-139c-4c9a-9765-a445527be5aa-dispersionconf\") pod \"swift-ring-rebalance-wpqhp\" (UID: \"998fc00a-139c-4c9a-9765-a445527be5aa\") " pod="openstack/swift-ring-rebalance-wpqhp" Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.656245 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/998fc00a-139c-4c9a-9765-a445527be5aa-scripts\") pod \"swift-ring-rebalance-wpqhp\" (UID: \"998fc00a-139c-4c9a-9765-a445527be5aa\") " pod="openstack/swift-ring-rebalance-wpqhp" Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.656304 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/998fc00a-139c-4c9a-9765-a445527be5aa-swiftconf\") pod \"swift-ring-rebalance-wpqhp\" (UID: \"998fc00a-139c-4c9a-9765-a445527be5aa\") " pod="openstack/swift-ring-rebalance-wpqhp" Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.656528 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/998fc00a-139c-4c9a-9765-a445527be5aa-combined-ca-bundle\") pod \"swift-ring-rebalance-wpqhp\" (UID: \"998fc00a-139c-4c9a-9765-a445527be5aa\") " pod="openstack/swift-ring-rebalance-wpqhp" 
Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.656592 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/998fc00a-139c-4c9a-9765-a445527be5aa-ring-data-devices\") pod \"swift-ring-rebalance-wpqhp\" (UID: \"998fc00a-139c-4c9a-9765-a445527be5aa\") " pod="openstack/swift-ring-rebalance-wpqhp" Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.656641 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/998fc00a-139c-4c9a-9765-a445527be5aa-etc-swift\") pod \"swift-ring-rebalance-wpqhp\" (UID: \"998fc00a-139c-4c9a-9765-a445527be5aa\") " pod="openstack/swift-ring-rebalance-wpqhp" Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.656748 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnwfk\" (UniqueName: \"kubernetes.io/projected/998fc00a-139c-4c9a-9765-a445527be5aa-kube-api-access-gnwfk\") pod \"swift-ring-rebalance-wpqhp\" (UID: \"998fc00a-139c-4c9a-9765-a445527be5aa\") " pod="openstack/swift-ring-rebalance-wpqhp" Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.658448 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/998fc00a-139c-4c9a-9765-a445527be5aa-scripts\") pod \"swift-ring-rebalance-wpqhp\" (UID: \"998fc00a-139c-4c9a-9765-a445527be5aa\") " pod="openstack/swift-ring-rebalance-wpqhp" Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.659193 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/998fc00a-139c-4c9a-9765-a445527be5aa-ring-data-devices\") pod \"swift-ring-rebalance-wpqhp\" (UID: \"998fc00a-139c-4c9a-9765-a445527be5aa\") " pod="openstack/swift-ring-rebalance-wpqhp" Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.659571 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/998fc00a-139c-4c9a-9765-a445527be5aa-etc-swift\") pod \"swift-ring-rebalance-wpqhp\" (UID: \"998fc00a-139c-4c9a-9765-a445527be5aa\") " pod="openstack/swift-ring-rebalance-wpqhp" Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.662762 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/998fc00a-139c-4c9a-9765-a445527be5aa-swiftconf\") pod \"swift-ring-rebalance-wpqhp\" (UID: \"998fc00a-139c-4c9a-9765-a445527be5aa\") " pod="openstack/swift-ring-rebalance-wpqhp" Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.663041 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/998fc00a-139c-4c9a-9765-a445527be5aa-dispersionconf\") pod \"swift-ring-rebalance-wpqhp\" (UID: \"998fc00a-139c-4c9a-9765-a445527be5aa\") " pod="openstack/swift-ring-rebalance-wpqhp" Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.672140 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/998fc00a-139c-4c9a-9765-a445527be5aa-combined-ca-bundle\") pod \"swift-ring-rebalance-wpqhp\" (UID: \"998fc00a-139c-4c9a-9765-a445527be5aa\") " pod="openstack/swift-ring-rebalance-wpqhp" Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.676171 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-gnwfk\" (UniqueName: \"kubernetes.io/projected/998fc00a-139c-4c9a-9765-a445527be5aa-kube-api-access-gnwfk\") pod \"swift-ring-rebalance-wpqhp\" (UID: \"998fc00a-139c-4c9a-9765-a445527be5aa\") " pod="openstack/swift-ring-rebalance-wpqhp" Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.704503 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-9wwsx" Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.710149 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-query-frontend-779849886d-mzf6h" Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.761816 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-wpqhp" Nov 25 15:13:46 crc kubenswrapper[4806]: I1125 15:13:46.963926 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/837cf2fb-8640-4ac3-ad91-84ff1dba54e6-etc-swift\") pod \"swift-storage-0\" (UID: \"837cf2fb-8640-4ac3-ad91-84ff1dba54e6\") " pod="openstack/swift-storage-0" Nov 25 15:13:46 crc kubenswrapper[4806]: E1125 15:13:46.964462 4806 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 25 15:13:46 crc kubenswrapper[4806]: E1125 15:13:46.964476 4806 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 25 15:13:46 crc kubenswrapper[4806]: E1125 15:13:46.964516 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/837cf2fb-8640-4ac3-ad91-84ff1dba54e6-etc-swift podName:837cf2fb-8640-4ac3-ad91-84ff1dba54e6 nodeName:}" failed. No retries permitted until 2025-11-25 15:13:47.964502828 +0000 UTC m=+1260.616645239 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/837cf2fb-8640-4ac3-ad91-84ff1dba54e6-etc-swift") pod "swift-storage-0" (UID: "837cf2fb-8640-4ac3-ad91-84ff1dba54e6") : configmap "swift-ring-files" not found Nov 25 15:13:47 crc kubenswrapper[4806]: I1125 15:13:47.005633 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-q6dxp" Nov 25 15:13:47 crc kubenswrapper[4806]: I1125 15:13:47.019697 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-q6dxp" event={"ID":"3a870706-cfbf-4cea-a993-238c06b56be3","Type":"ContainerDied","Data":"dcf0d664bc089eca110c1a9995c8b46faf28bebe73a8c36c2c3f20f1e056fa22"} Nov 25 15:13:47 crc kubenswrapper[4806]: I1125 15:13:47.019753 4806 scope.go:117] "RemoveContainer" containerID="7a74ceb9310e1d598ca1b90a7fb824ce2b93f142b1fe8a40b51b60e76a87f05a" Nov 25 15:13:47 crc kubenswrapper[4806]: I1125 15:13:47.059598 4806 generic.go:334] "Generic (PLEG): container finished" podID="291eadf5-e50c-453d-aaf5-5fe457dae267" containerID="b7e4ea7871c6858ccfa35f358a16e2a49f824439a48893e21369dc071b798dc9" exitCode=0 Nov 25 15:13:47 crc kubenswrapper[4806]: I1125 15:13:47.059684 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-pxfdb" event={"ID":"291eadf5-e50c-453d-aaf5-5fe457dae267","Type":"ContainerDied","Data":"b7e4ea7871c6858ccfa35f358a16e2a49f824439a48893e21369dc071b798dc9"} Nov 25 15:13:47 crc kubenswrapper[4806]: I1125 15:13:47.059709 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-pxfdb" event={"ID":"291eadf5-e50c-453d-aaf5-5fe457dae267","Type":"ContainerStarted","Data":"adb469f98b7215665ee71b941e37cbb224442fea665edb7225dd98c1b0b4cb68"} Nov 25 15:13:47 crc kubenswrapper[4806]: I1125 15:13:47.069616 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a870706-cfbf-4cea-a993-238c06b56be3-config\") pod \"3a870706-cfbf-4cea-a993-238c06b56be3\" (UID: \"3a870706-cfbf-4cea-a993-238c06b56be3\") " Nov 25 15:13:47 crc kubenswrapper[4806]: I1125 15:13:47.069815 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a870706-cfbf-4cea-a993-238c06b56be3-ovsdbserver-nb\") pod \"3a870706-cfbf-4cea-a993-238c06b56be3\" (UID: \"3a870706-cfbf-4cea-a993-238c06b56be3\") " Nov 25 15:13:47 crc kubenswrapper[4806]: I1125 15:13:47.069849 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a870706-cfbf-4cea-a993-238c06b56be3-dns-svc\") pod \"3a870706-cfbf-4cea-a993-238c06b56be3\" (UID: \"3a870706-cfbf-4cea-a993-238c06b56be3\") " Nov 25 15:13:47 crc kubenswrapper[4806]: I1125 15:13:47.069946 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9brw\" (UniqueName: \"kubernetes.io/projected/3a870706-cfbf-4cea-a993-238c06b56be3-kube-api-access-h9brw\") pod \"3a870706-cfbf-4cea-a993-238c06b56be3\" (UID: \"3a870706-cfbf-4cea-a993-238c06b56be3\") " Nov 25 15:13:47 crc kubenswrapper[4806]: I1125 15:13:47.082286 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a870706-cfbf-4cea-a993-238c06b56be3-kube-api-access-h9brw" (OuterVolumeSpecName: "kube-api-access-h9brw") pod "3a870706-cfbf-4cea-a993-238c06b56be3" (UID: "3a870706-cfbf-4cea-a993-238c06b56be3"). InnerVolumeSpecName "kube-api-access-h9brw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:13:47 crc kubenswrapper[4806]: I1125 15:13:47.100591 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"01548134-90ee-4d44-ab5e-60a0933ee1ea","Type":"ContainerStarted","Data":"0d57b6d1d7f00d4efafcc844f9e47b3d1b13953c476ac6a3517aa59b27d2b037"} Nov 25 15:13:47 crc kubenswrapper[4806]: I1125 15:13:47.111218 4806 generic.go:334] "Generic (PLEG): container finished" podID="59f31c89-0010-494d-a1d5-2db4958b10d6" containerID="f6cebcfc304fe6aec46892612797c6e415e5ff5ea49135e94c17e8ba009af731" exitCode=0 Nov 25 15:13:47 crc kubenswrapper[4806]: I1125 15:13:47.115049 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-qgsv9" event={"ID":"59f31c89-0010-494d-a1d5-2db4958b10d6","Type":"ContainerDied","Data":"f6cebcfc304fe6aec46892612797c6e415e5ff5ea49135e94c17e8ba009af731"} Nov 25 15:13:47 crc kubenswrapper[4806]: I1125 15:13:47.168347 4806 scope.go:117] "RemoveContainer" containerID="afc6fff0a942fe1bb1cb0171f70cb77948f90e51b5a640d92359a2571e531419" Nov 25 15:13:47 crc kubenswrapper[4806]: I1125 15:13:47.187157 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9brw\" (UniqueName: \"kubernetes.io/projected/3a870706-cfbf-4cea-a993-238c06b56be3-kube-api-access-h9brw\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:47 crc kubenswrapper[4806]: I1125 15:13:47.225479 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a870706-cfbf-4cea-a993-238c06b56be3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3a870706-cfbf-4cea-a993-238c06b56be3" (UID: "3a870706-cfbf-4cea-a993-238c06b56be3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:13:47 crc kubenswrapper[4806]: I1125 15:13:47.247123 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a870706-cfbf-4cea-a993-238c06b56be3-config" (OuterVolumeSpecName: "config") pod "3a870706-cfbf-4cea-a993-238c06b56be3" (UID: "3a870706-cfbf-4cea-a993-238c06b56be3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:13:47 crc kubenswrapper[4806]: I1125 15:13:47.292634 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a870706-cfbf-4cea-a993-238c06b56be3-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:47 crc kubenswrapper[4806]: I1125 15:13:47.292673 4806 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a870706-cfbf-4cea-a993-238c06b56be3-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:47 crc kubenswrapper[4806]: I1125 15:13:47.325043 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a870706-cfbf-4cea-a993-238c06b56be3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3a870706-cfbf-4cea-a993-238c06b56be3" (UID: "3a870706-cfbf-4cea-a993-238c06b56be3"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:13:47 crc kubenswrapper[4806]: I1125 15:13:47.394721 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a870706-cfbf-4cea-a993-238c06b56be3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:47 crc kubenswrapper[4806]: I1125 15:13:47.452811 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="cdc49832-6f51-4954-ab25-3f84f6956d1f" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 25 15:13:47 crc kubenswrapper[4806]: E1125 15:13:47.501581 4806 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce1e02da_f4bb_4165_b4fc_cf65955994ae.slice/crio-conmon-1e2153b2ab05f8e43c1b85e49aaebc818222fcd6e34dce130b6c25633846fc60.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5df1cd59_5e8a_49c9_af33_4547720713f0.slice/crio-947649f363aa13ff28038b734854fcca4bbe2dca64bfd8d62afca5a1df53eb31.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5df1cd59_5e8a_49c9_af33_4547720713f0.slice/crio-conmon-947649f363aa13ff28038b734854fcca4bbe2dca64bfd8d62afca5a1df53eb31.scope\": RecentStats: unable to find data in memory cache]" Nov 25 15:13:47 crc kubenswrapper[4806]: I1125 15:13:47.510837 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-wpqhp"] Nov 25 15:13:47 crc kubenswrapper[4806]: I1125 15:13:47.523398 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-compactor-0" Nov 25 15:13:47 crc kubenswrapper[4806]: I1125 15:13:47.765581 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-index-gateway-0" Nov 25 15:13:47 crc kubenswrapper[4806]: I1125 15:13:47.880691 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-wcr7b"] Nov 25 15:13:47 crc kubenswrapper[4806]: E1125 15:13:47.881237 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a870706-cfbf-4cea-a993-238c06b56be3" containerName="dnsmasq-dns" Nov 25 15:13:47 crc kubenswrapper[4806]: I1125 15:13:47.881263 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a870706-cfbf-4cea-a993-238c06b56be3" containerName="dnsmasq-dns" Nov 25 15:13:47 crc kubenswrapper[4806]: E1125 15:13:47.881302 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a870706-cfbf-4cea-a993-238c06b56be3" containerName="init" Nov 25 15:13:47 crc kubenswrapper[4806]: I1125 15:13:47.881311 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a870706-cfbf-4cea-a993-238c06b56be3" containerName="init" Nov 25 15:13:47 crc kubenswrapper[4806]: I1125 15:13:47.881552 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a870706-cfbf-4cea-a993-238c06b56be3" containerName="dnsmasq-dns" Nov 25 15:13:47 crc kubenswrapper[4806]: I1125 15:13:47.882417 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-wcr7b" Nov 25 15:13:47 crc kubenswrapper[4806]: I1125 15:13:47.890791 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-wcr7b"] Nov 25 15:13:47 crc kubenswrapper[4806]: I1125 15:13:47.915052 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a1a10de-31c3-4413-b032-d10713c953dc-operator-scripts\") pod \"glance-db-create-wcr7b\" (UID: \"7a1a10de-31c3-4413-b032-d10713c953dc\") " pod="openstack/glance-db-create-wcr7b" Nov 25 15:13:47 crc kubenswrapper[4806]: I1125 15:13:47.915201 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmqgg\" (UniqueName: \"kubernetes.io/projected/7a1a10de-31c3-4413-b032-d10713c953dc-kube-api-access-hmqgg\") pod \"glance-db-create-wcr7b\" (UID: \"7a1a10de-31c3-4413-b032-d10713c953dc\") " pod="openstack/glance-db-create-wcr7b" Nov 25 15:13:47 crc kubenswrapper[4806]: I1125 15:13:47.975032 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-0f2c-account-create-8xlqc"] Nov 25 15:13:47 crc kubenswrapper[4806]: I1125 15:13:47.976412 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0f2c-account-create-8xlqc" Nov 25 15:13:47 crc kubenswrapper[4806]: I1125 15:13:47.981713 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 25 15:13:47 crc kubenswrapper[4806]: I1125 15:13:47.991832 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-0f2c-account-create-8xlqc"] Nov 25 15:13:48 crc kubenswrapper[4806]: I1125 15:13:48.016496 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flspn\" (UniqueName: \"kubernetes.io/projected/bf5bc050-6822-4de5-923b-3e02b79d8429-kube-api-access-flspn\") pod \"glance-0f2c-account-create-8xlqc\" (UID: \"bf5bc050-6822-4de5-923b-3e02b79d8429\") " pod="openstack/glance-0f2c-account-create-8xlqc" Nov 25 15:13:48 crc kubenswrapper[4806]: I1125 15:13:48.016597 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a1a10de-31c3-4413-b032-d10713c953dc-operator-scripts\") pod \"glance-db-create-wcr7b\" (UID: \"7a1a10de-31c3-4413-b032-d10713c953dc\") " pod="openstack/glance-db-create-wcr7b" Nov 25 15:13:48 crc kubenswrapper[4806]: I1125 15:13:48.016661 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf5bc050-6822-4de5-923b-3e02b79d8429-operator-scripts\") pod \"glance-0f2c-account-create-8xlqc\" (UID: \"bf5bc050-6822-4de5-923b-3e02b79d8429\") " pod="openstack/glance-0f2c-account-create-8xlqc" Nov 25 15:13:48 crc kubenswrapper[4806]: I1125 15:13:48.016692 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/837cf2fb-8640-4ac3-ad91-84ff1dba54e6-etc-swift\") pod \"swift-storage-0\" (UID: \"837cf2fb-8640-4ac3-ad91-84ff1dba54e6\") " pod="openstack/swift-storage-0" Nov 25 15:13:48 crc kubenswrapper[4806]: I1125 15:13:48.016754 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmqgg\" (UniqueName: 
\"kubernetes.io/projected/7a1a10de-31c3-4413-b032-d10713c953dc-kube-api-access-hmqgg\") pod \"glance-db-create-wcr7b\" (UID: \"7a1a10de-31c3-4413-b032-d10713c953dc\") " pod="openstack/glance-db-create-wcr7b" Nov 25 15:13:48 crc kubenswrapper[4806]: E1125 15:13:48.017481 4806 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 25 15:13:48 crc kubenswrapper[4806]: E1125 15:13:48.017527 4806 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 25 15:13:48 crc kubenswrapper[4806]: E1125 15:13:48.017589 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/837cf2fb-8640-4ac3-ad91-84ff1dba54e6-etc-swift podName:837cf2fb-8640-4ac3-ad91-84ff1dba54e6 nodeName:}" failed. No retries permitted until 2025-11-25 15:13:50.017567026 +0000 UTC m=+1262.669709497 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/837cf2fb-8640-4ac3-ad91-84ff1dba54e6-etc-swift") pod "swift-storage-0" (UID: "837cf2fb-8640-4ac3-ad91-84ff1dba54e6") : configmap "swift-ring-files" not found Nov 25 15:13:48 crc kubenswrapper[4806]: I1125 15:13:48.018023 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a1a10de-31c3-4413-b032-d10713c953dc-operator-scripts\") pod \"glance-db-create-wcr7b\" (UID: \"7a1a10de-31c3-4413-b032-d10713c953dc\") " pod="openstack/glance-db-create-wcr7b" Nov 25 15:13:48 crc kubenswrapper[4806]: I1125 15:13:48.044997 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmqgg\" (UniqueName: \"kubernetes.io/projected/7a1a10de-31c3-4413-b032-d10713c953dc-kube-api-access-hmqgg\") pod \"glance-db-create-wcr7b\" (UID: \"7a1a10de-31c3-4413-b032-d10713c953dc\") " pod="openstack/glance-db-create-wcr7b" Nov 25 15:13:48 crc kubenswrapper[4806]: I1125 15:13:48.118204 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flspn\" (UniqueName: \"kubernetes.io/projected/bf5bc050-6822-4de5-923b-3e02b79d8429-kube-api-access-flspn\") pod \"glance-0f2c-account-create-8xlqc\" (UID: \"bf5bc050-6822-4de5-923b-3e02b79d8429\") " pod="openstack/glance-0f2c-account-create-8xlqc" Nov 25 15:13:48 crc kubenswrapper[4806]: I1125 15:13:48.118305 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf5bc050-6822-4de5-923b-3e02b79d8429-operator-scripts\") pod \"glance-0f2c-account-create-8xlqc\" (UID: \"bf5bc050-6822-4de5-923b-3e02b79d8429\") " pod="openstack/glance-0f2c-account-create-8xlqc" Nov 25 15:13:48 crc kubenswrapper[4806]: I1125 15:13:48.119221 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf5bc050-6822-4de5-923b-3e02b79d8429-operator-scripts\") pod \"glance-0f2c-account-create-8xlqc\" (UID: \"bf5bc050-6822-4de5-923b-3e02b79d8429\") " pod="openstack/glance-0f2c-account-create-8xlqc" Nov 25 15:13:48 crc kubenswrapper[4806]: I1125 15:13:48.123225 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-pxfdb" event={"ID":"291eadf5-e50c-453d-aaf5-5fe457dae267","Type":"ContainerStarted","Data":"cdd6a05c85039d7ad6147b2ac34e0a0d1ac12892e80d251e30c81fe0e810056d"} Nov 25 15:13:48 crc kubenswrapper[4806]: I1125 
15:13:48.123419 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-pxfdb"
Nov 25 15:13:48 crc kubenswrapper[4806]: I1125 15:13:48.129575 4806 generic.go:334] "Generic (PLEG): container finished" podID="5df1cd59-5e8a-49c9-af33-4547720713f0" containerID="947649f363aa13ff28038b734854fcca4bbe2dca64bfd8d62afca5a1df53eb31" exitCode=0
Nov 25 15:13:48 crc kubenswrapper[4806]: I1125 15:13:48.129639 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-18c6-account-create-cchzq" event={"ID":"5df1cd59-5e8a-49c9-af33-4547720713f0","Type":"ContainerDied","Data":"947649f363aa13ff28038b734854fcca4bbe2dca64bfd8d62afca5a1df53eb31"}
Nov 25 15:13:48 crc kubenswrapper[4806]: I1125 15:13:48.130847 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-q6dxp"
Nov 25 15:13:48 crc kubenswrapper[4806]: I1125 15:13:48.132014 4806 generic.go:334] "Generic (PLEG): container finished" podID="94b13266-e80b-4462-b7fa-04b5043e53e1" containerID="7dd6b5cd5f55ebd9a80ea781b536b148995d40ed2e05df5588478761a5554679" exitCode=0
Nov 25 15:13:48 crc kubenswrapper[4806]: I1125 15:13:48.132066 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d2f7-account-create-6rgcw" event={"ID":"94b13266-e80b-4462-b7fa-04b5043e53e1","Type":"ContainerDied","Data":"7dd6b5cd5f55ebd9a80ea781b536b148995d40ed2e05df5588478761a5554679"}
Nov 25 15:13:48 crc kubenswrapper[4806]: I1125 15:13:48.133136 4806 generic.go:334] "Generic (PLEG): container finished" podID="ce1e02da-f4bb-4165-b4fc-cf65955994ae" containerID="1e2153b2ab05f8e43c1b85e49aaebc818222fcd6e34dce130b6c25633846fc60" exitCode=0
Nov 25 15:13:48 crc kubenswrapper[4806]: I1125 15:13:48.133179 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-kqrd2" event={"ID":"ce1e02da-f4bb-4165-b4fc-cf65955994ae","Type":"ContainerDied","Data":"1e2153b2ab05f8e43c1b85e49aaebc818222fcd6e34dce130b6c25633846fc60"}
Nov 25 15:13:48 crc kubenswrapper[4806]: I1125 15:13:48.133918 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wpqhp" event={"ID":"998fc00a-139c-4c9a-9765-a445527be5aa","Type":"ContainerStarted","Data":"fdd4a87db855f09c5c89a0d4c2dbf19d8a95b11109b819365befc78e0ca9bdf0"}
Nov 25 15:13:48 crc kubenswrapper[4806]: I1125 15:13:48.149266 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flspn\" (UniqueName: \"kubernetes.io/projected/bf5bc050-6822-4de5-923b-3e02b79d8429-kube-api-access-flspn\") pod \"glance-0f2c-account-create-8xlqc\" (UID: \"bf5bc050-6822-4de5-923b-3e02b79d8429\") " pod="openstack/glance-0f2c-account-create-8xlqc"
Nov 25 15:13:48 crc kubenswrapper[4806]: I1125 15:13:48.168010 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-pxfdb" podStartSLOduration=3.167989259 podStartE2EDuration="3.167989259s" podCreationTimestamp="2025-11-25 15:13:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:13:48.149730015 +0000 UTC m=+1260.801872426" watchObservedRunningTime="2025-11-25 15:13:48.167989259 +0000 UTC m=+1260.820131680"
Nov 25 15:13:48 crc kubenswrapper[4806]: I1125 15:13:48.211013 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-wcr7b"
Nov 25 15:13:48 crc kubenswrapper[4806]: I1125 15:13:48.248139 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-q6dxp"]
Nov 25 15:13:48 crc kubenswrapper[4806]: I1125 15:13:48.255921 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-q6dxp"]
Nov 25 15:13:48 crc kubenswrapper[4806]: I1125 15:13:48.301190 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0f2c-account-create-8xlqc"
Nov 25 15:13:48 crc kubenswrapper[4806]: I1125 15:13:48.532790 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-qgsv9"
Nov 25 15:13:48 crc kubenswrapper[4806]: I1125 15:13:48.636094 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwr75\" (UniqueName: \"kubernetes.io/projected/59f31c89-0010-494d-a1d5-2db4958b10d6-kube-api-access-mwr75\") pod \"59f31c89-0010-494d-a1d5-2db4958b10d6\" (UID: \"59f31c89-0010-494d-a1d5-2db4958b10d6\") "
Nov 25 15:13:48 crc kubenswrapper[4806]: I1125 15:13:48.636385 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59f31c89-0010-494d-a1d5-2db4958b10d6-operator-scripts\") pod \"59f31c89-0010-494d-a1d5-2db4958b10d6\" (UID: \"59f31c89-0010-494d-a1d5-2db4958b10d6\") "
Nov 25 15:13:48 crc kubenswrapper[4806]: I1125 15:13:48.637258 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59f31c89-0010-494d-a1d5-2db4958b10d6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "59f31c89-0010-494d-a1d5-2db4958b10d6" (UID: "59f31c89-0010-494d-a1d5-2db4958b10d6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:13:48 crc kubenswrapper[4806]: I1125 15:13:48.657911 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59f31c89-0010-494d-a1d5-2db4958b10d6-kube-api-access-mwr75" (OuterVolumeSpecName: "kube-api-access-mwr75") pod "59f31c89-0010-494d-a1d5-2db4958b10d6" (UID: "59f31c89-0010-494d-a1d5-2db4958b10d6"). InnerVolumeSpecName "kube-api-access-mwr75". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:13:48 crc kubenswrapper[4806]: I1125 15:13:48.738565 4806 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/59f31c89-0010-494d-a1d5-2db4958b10d6-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 15:13:48 crc kubenswrapper[4806]: I1125 15:13:48.738663 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwr75\" (UniqueName: \"kubernetes.io/projected/59f31c89-0010-494d-a1d5-2db4958b10d6-kube-api-access-mwr75\") on node \"crc\" DevicePath \"\""
Nov 25 15:13:48 crc kubenswrapper[4806]: I1125 15:13:48.796156 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-wcr7b"]
Nov 25 15:13:48 crc kubenswrapper[4806]: W1125 15:13:48.805256 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a1a10de_31c3_4413_b032_d10713c953dc.slice/crio-7d4a7f3dd07472f3ce3062ecf693fb8aec332e681058615fc555001717e26b6e WatchSource:0}: Error finding container 7d4a7f3dd07472f3ce3062ecf693fb8aec332e681058615fc555001717e26b6e: Status 404 returned error can't find the container with id 7d4a7f3dd07472f3ce3062ecf693fb8aec332e681058615fc555001717e26b6e
Nov 25 15:13:48 crc kubenswrapper[4806]: I1125 15:13:48.923148 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-0f2c-account-create-8xlqc"]
Nov 25 15:13:48 crc kubenswrapper[4806]: W1125 15:13:48.933433 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf5bc050_6822_4de5_923b_3e02b79d8429.slice/crio-eb6ecf7e723493eaf650d4e2d8ededa5eb44f7bfff3960ddaec4f09c721936c0 WatchSource:0}: Error finding container eb6ecf7e723493eaf650d4e2d8ededa5eb44f7bfff3960ddaec4f09c721936c0: Status 404 returned error can't find the container with id eb6ecf7e723493eaf650d4e2d8ededa5eb44f7bfff3960ddaec4f09c721936c0
Nov 25 15:13:48 crc kubenswrapper[4806]: I1125 15:13:48.934996 4806 patch_prober.go:28] interesting pod/machine-config-daemon-kclf8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 15:13:48 crc kubenswrapper[4806]: I1125 15:13:48.935117 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 15:13:48 crc kubenswrapper[4806]: I1125 15:13:48.940995 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Nov 25 15:13:49 crc kubenswrapper[4806]: I1125 15:13:49.143679 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0f2c-account-create-8xlqc" event={"ID":"bf5bc050-6822-4de5-923b-3e02b79d8429","Type":"ContainerStarted","Data":"eb6ecf7e723493eaf650d4e2d8ededa5eb44f7bfff3960ddaec4f09c721936c0"}
Nov 25 15:13:49 crc kubenswrapper[4806]: I1125 15:13:49.153175 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-wcr7b" event={"ID":"7a1a10de-31c3-4413-b032-d10713c953dc","Type":"ContainerStarted","Data":"7d4a7f3dd07472f3ce3062ecf693fb8aec332e681058615fc555001717e26b6e"}
Nov 25 15:13:49 crc kubenswrapper[4806]: I1125 15:13:49.155720 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-qgsv9" event={"ID":"59f31c89-0010-494d-a1d5-2db4958b10d6","Type":"ContainerDied","Data":"cacfef80f060ac40d205ae6eecbd0e2fe8a8a585d8a829c336bbe75a7eb9aec2"}
Nov 25 15:13:49 crc kubenswrapper[4806]: I1125 15:13:49.155786 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cacfef80f060ac40d205ae6eecbd0e2fe8a8a585d8a829c336bbe75a7eb9aec2"
Nov 25 15:13:49 crc kubenswrapper[4806]: I1125 15:13:49.155978 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-qgsv9"
Nov 25 15:13:50 crc kubenswrapper[4806]: E1125 15:13:50.094683 4806 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Nov 25 15:13:50 crc kubenswrapper[4806]: E1125 15:13:50.094898 4806 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Nov 25 15:13:50 crc kubenswrapper[4806]: E1125 15:13:50.094946 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/837cf2fb-8640-4ac3-ad91-84ff1dba54e6-etc-swift podName:837cf2fb-8640-4ac3-ad91-84ff1dba54e6 nodeName:}" failed. No retries permitted until 2025-11-25 15:13:54.094929825 +0000 UTC m=+1266.747072236 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/837cf2fb-8640-4ac3-ad91-84ff1dba54e6-etc-swift") pod "swift-storage-0" (UID: "837cf2fb-8640-4ac3-ad91-84ff1dba54e6") : configmap "swift-ring-files" not found
Nov 25 15:13:50 crc kubenswrapper[4806]: I1125 15:13:50.095646 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/837cf2fb-8640-4ac3-ad91-84ff1dba54e6-etc-swift\") pod \"swift-storage-0\" (UID: \"837cf2fb-8640-4ac3-ad91-84ff1dba54e6\") " pod="openstack/swift-storage-0"
Nov 25 15:13:50 crc kubenswrapper[4806]: I1125 15:13:50.104900 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a870706-cfbf-4cea-a993-238c06b56be3" path="/var/lib/kubelet/pods/3a870706-cfbf-4cea-a993-238c06b56be3/volumes"
Nov 25 15:13:50 crc kubenswrapper[4806]: I1125 15:13:50.167722 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"01548134-90ee-4d44-ab5e-60a0933ee1ea","Type":"ContainerStarted","Data":"33e3c73f4472b9ae679e6a13346a9d19821a680be812b89632631c6415783184"}
Nov 25 15:13:50 crc kubenswrapper[4806]: I1125 15:13:50.169536 4806 generic.go:334] "Generic (PLEG): container finished" podID="bf5bc050-6822-4de5-923b-3e02b79d8429" containerID="19a6fa7a843252997e2005e4df582751e52d23566f1ce16e60ea9b20b8465703" exitCode=0
Nov 25 15:13:50 crc kubenswrapper[4806]: I1125 15:13:50.169607 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0f2c-account-create-8xlqc" event={"ID":"bf5bc050-6822-4de5-923b-3e02b79d8429","Type":"ContainerDied","Data":"19a6fa7a843252997e2005e4df582751e52d23566f1ce16e60ea9b20b8465703"}
Nov 25 15:13:50 crc kubenswrapper[4806]: I1125 15:13:50.172417 4806 generic.go:334] "Generic (PLEG): container finished" podID="7a1a10de-31c3-4413-b032-d10713c953dc" containerID="e62c96193e09b1f729f09e6c5235cf6da512c6a9ee464384eae9e55a5fd5890a" exitCode=0
Nov 25 15:13:50 crc kubenswrapper[4806]: I1125 15:13:50.172460 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-wcr7b" event={"ID":"7a1a10de-31c3-4413-b032-d10713c953dc","Type":"ContainerDied","Data":"e62c96193e09b1f729f09e6c5235cf6da512c6a9ee464384eae9e55a5fd5890a"}
Nov 25 15:13:50 crc kubenswrapper[4806]: I1125 15:13:50.575536 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Nov 25 15:13:50 crc kubenswrapper[4806]: I1125 15:13:50.813857 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-18c6-account-create-cchzq"
Nov 25 15:13:50 crc kubenswrapper[4806]: I1125 15:13:50.829190 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-kqrd2"
Nov 25 15:13:50 crc kubenswrapper[4806]: I1125 15:13:50.909247 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5df1cd59-5e8a-49c9-af33-4547720713f0-operator-scripts\") pod \"5df1cd59-5e8a-49c9-af33-4547720713f0\" (UID: \"5df1cd59-5e8a-49c9-af33-4547720713f0\") "
Nov 25 15:13:50 crc kubenswrapper[4806]: I1125 15:13:50.909343 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hx2cd\" (UniqueName: \"kubernetes.io/projected/ce1e02da-f4bb-4165-b4fc-cf65955994ae-kube-api-access-hx2cd\") pod \"ce1e02da-f4bb-4165-b4fc-cf65955994ae\" (UID: \"ce1e02da-f4bb-4165-b4fc-cf65955994ae\") "
Nov 25 15:13:50 crc kubenswrapper[4806]: I1125 15:13:50.909533 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tscx\" (UniqueName: \"kubernetes.io/projected/5df1cd59-5e8a-49c9-af33-4547720713f0-kube-api-access-6tscx\") pod \"5df1cd59-5e8a-49c9-af33-4547720713f0\" (UID: \"5df1cd59-5e8a-49c9-af33-4547720713f0\") "
Nov 25 15:13:50 crc kubenswrapper[4806]: I1125 15:13:50.909557 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce1e02da-f4bb-4165-b4fc-cf65955994ae-operator-scripts\") pod \"ce1e02da-f4bb-4165-b4fc-cf65955994ae\" (UID: \"ce1e02da-f4bb-4165-b4fc-cf65955994ae\") "
Nov 25 15:13:50 crc kubenswrapper[4806]: I1125 15:13:50.909797 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5df1cd59-5e8a-49c9-af33-4547720713f0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5df1cd59-5e8a-49c9-af33-4547720713f0" (UID: "5df1cd59-5e8a-49c9-af33-4547720713f0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:13:50 crc kubenswrapper[4806]: I1125 15:13:50.909975 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce1e02da-f4bb-4165-b4fc-cf65955994ae-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ce1e02da-f4bb-4165-b4fc-cf65955994ae" (UID: "ce1e02da-f4bb-4165-b4fc-cf65955994ae"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:13:50 crc kubenswrapper[4806]: I1125 15:13:50.910341 4806 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce1e02da-f4bb-4165-b4fc-cf65955994ae-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 15:13:50 crc kubenswrapper[4806]: I1125 15:13:50.910363 4806 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5df1cd59-5e8a-49c9-af33-4547720713f0-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 15:13:50 crc kubenswrapper[4806]: I1125 15:13:50.915300 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5df1cd59-5e8a-49c9-af33-4547720713f0-kube-api-access-6tscx" (OuterVolumeSpecName: "kube-api-access-6tscx") pod "5df1cd59-5e8a-49c9-af33-4547720713f0" (UID: "5df1cd59-5e8a-49c9-af33-4547720713f0"). InnerVolumeSpecName "kube-api-access-6tscx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:13:50 crc kubenswrapper[4806]: I1125 15:13:50.920514 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce1e02da-f4bb-4165-b4fc-cf65955994ae-kube-api-access-hx2cd" (OuterVolumeSpecName: "kube-api-access-hx2cd") pod "ce1e02da-f4bb-4165-b4fc-cf65955994ae" (UID: "ce1e02da-f4bb-4165-b4fc-cf65955994ae"). InnerVolumeSpecName "kube-api-access-hx2cd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:13:51 crc kubenswrapper[4806]: I1125 15:13:51.011873 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tscx\" (UniqueName: \"kubernetes.io/projected/5df1cd59-5e8a-49c9-af33-4547720713f0-kube-api-access-6tscx\") on node \"crc\" DevicePath \"\""
Nov 25 15:13:51 crc kubenswrapper[4806]: I1125 15:13:51.011929 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hx2cd\" (UniqueName: \"kubernetes.io/projected/ce1e02da-f4bb-4165-b4fc-cf65955994ae-kube-api-access-hx2cd\") on node \"crc\" DevicePath \"\""
Nov 25 15:13:51 crc kubenswrapper[4806]: I1125 15:13:51.183872 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-18c6-account-create-cchzq" event={"ID":"5df1cd59-5e8a-49c9-af33-4547720713f0","Type":"ContainerDied","Data":"9a1aa5ad21f2505075dc6fbea2ae99dcc5f41e5a3aa888aad805efcdbe1ce8d6"}
Nov 25 15:13:51 crc kubenswrapper[4806]: I1125 15:13:51.183907 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-18c6-account-create-cchzq"
Nov 25 15:13:51 crc kubenswrapper[4806]: I1125 15:13:51.183914 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a1aa5ad21f2505075dc6fbea2ae99dcc5f41e5a3aa888aad805efcdbe1ce8d6"
Nov 25 15:13:51 crc kubenswrapper[4806]: I1125 15:13:51.190019 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-kqrd2" event={"ID":"ce1e02da-f4bb-4165-b4fc-cf65955994ae","Type":"ContainerDied","Data":"6a48c13ebf5b77b7b6c28518b050519169bf64811848751baad7f3fcb4622477"}
Nov 25 15:13:51 crc kubenswrapper[4806]: I1125 15:13:51.190356 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a48c13ebf5b77b7b6c28518b050519169bf64811848751baad7f3fcb4622477"
Nov 25 15:13:51 crc kubenswrapper[4806]: I1125 15:13:51.190180 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-kqrd2"
Nov 25 15:13:52 crc kubenswrapper[4806]: I1125 15:13:52.554002 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d2f7-account-create-6rgcw"
Nov 25 15:13:52 crc kubenswrapper[4806]: I1125 15:13:52.647999 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94b13266-e80b-4462-b7fa-04b5043e53e1-operator-scripts\") pod \"94b13266-e80b-4462-b7fa-04b5043e53e1\" (UID: \"94b13266-e80b-4462-b7fa-04b5043e53e1\") "
Nov 25 15:13:52 crc kubenswrapper[4806]: I1125 15:13:52.648213 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h54gz\" (UniqueName: \"kubernetes.io/projected/94b13266-e80b-4462-b7fa-04b5043e53e1-kube-api-access-h54gz\") pod \"94b13266-e80b-4462-b7fa-04b5043e53e1\" (UID: \"94b13266-e80b-4462-b7fa-04b5043e53e1\") "
Nov 25 15:13:52 crc kubenswrapper[4806]: I1125 15:13:52.648817 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94b13266-e80b-4462-b7fa-04b5043e53e1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "94b13266-e80b-4462-b7fa-04b5043e53e1" (UID: "94b13266-e80b-4462-b7fa-04b5043e53e1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:13:52 crc kubenswrapper[4806]: I1125 15:13:52.656942 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94b13266-e80b-4462-b7fa-04b5043e53e1-kube-api-access-h54gz" (OuterVolumeSpecName: "kube-api-access-h54gz") pod "94b13266-e80b-4462-b7fa-04b5043e53e1" (UID: "94b13266-e80b-4462-b7fa-04b5043e53e1"). InnerVolumeSpecName "kube-api-access-h54gz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:13:52 crc kubenswrapper[4806]: I1125 15:13:52.659037 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-wcr7b"
Nov 25 15:13:52 crc kubenswrapper[4806]: I1125 15:13:52.750611 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmqgg\" (UniqueName: \"kubernetes.io/projected/7a1a10de-31c3-4413-b032-d10713c953dc-kube-api-access-hmqgg\") pod \"7a1a10de-31c3-4413-b032-d10713c953dc\" (UID: \"7a1a10de-31c3-4413-b032-d10713c953dc\") "
Nov 25 15:13:52 crc kubenswrapper[4806]: I1125 15:13:52.751094 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a1a10de-31c3-4413-b032-d10713c953dc-operator-scripts\") pod \"7a1a10de-31c3-4413-b032-d10713c953dc\" (UID: \"7a1a10de-31c3-4413-b032-d10713c953dc\") "
Nov 25 15:13:52 crc kubenswrapper[4806]: I1125 15:13:52.751560 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a1a10de-31c3-4413-b032-d10713c953dc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7a1a10de-31c3-4413-b032-d10713c953dc" (UID: "7a1a10de-31c3-4413-b032-d10713c953dc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:13:52 crc kubenswrapper[4806]: I1125 15:13:52.751733 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h54gz\" (UniqueName: \"kubernetes.io/projected/94b13266-e80b-4462-b7fa-04b5043e53e1-kube-api-access-h54gz\") on node \"crc\" DevicePath \"\""
Nov 25 15:13:52 crc kubenswrapper[4806]: I1125 15:13:52.751838 4806 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94b13266-e80b-4462-b7fa-04b5043e53e1-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 15:13:52 crc kubenswrapper[4806]: I1125 15:13:52.755138 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a1a10de-31c3-4413-b032-d10713c953dc-kube-api-access-hmqgg" (OuterVolumeSpecName: "kube-api-access-hmqgg") pod "7a1a10de-31c3-4413-b032-d10713c953dc" (UID: "7a1a10de-31c3-4413-b032-d10713c953dc"). InnerVolumeSpecName "kube-api-access-hmqgg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:13:52 crc kubenswrapper[4806]: I1125 15:13:52.834432 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0f2c-account-create-8xlqc"
Nov 25 15:13:52 crc kubenswrapper[4806]: I1125 15:13:52.853430 4806 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a1a10de-31c3-4413-b032-d10713c953dc-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 15:13:52 crc kubenswrapper[4806]: I1125 15:13:52.853659 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmqgg\" (UniqueName: \"kubernetes.io/projected/7a1a10de-31c3-4413-b032-d10713c953dc-kube-api-access-hmqgg\") on node \"crc\" DevicePath \"\""
Nov 25 15:13:52 crc kubenswrapper[4806]: I1125 15:13:52.955073 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf5bc050-6822-4de5-923b-3e02b79d8429-operator-scripts\") pod \"bf5bc050-6822-4de5-923b-3e02b79d8429\" (UID: \"bf5bc050-6822-4de5-923b-3e02b79d8429\") "
Nov 25 15:13:52 crc kubenswrapper[4806]: I1125 15:13:52.955181 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flspn\" (UniqueName: \"kubernetes.io/projected/bf5bc050-6822-4de5-923b-3e02b79d8429-kube-api-access-flspn\") pod \"bf5bc050-6822-4de5-923b-3e02b79d8429\" (UID: \"bf5bc050-6822-4de5-923b-3e02b79d8429\") "
Nov 25 15:13:52 crc kubenswrapper[4806]: I1125 15:13:52.956069 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf5bc050-6822-4de5-923b-3e02b79d8429-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bf5bc050-6822-4de5-923b-3e02b79d8429" (UID: "bf5bc050-6822-4de5-923b-3e02b79d8429"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:13:52 crc kubenswrapper[4806]: I1125 15:13:52.981540 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf5bc050-6822-4de5-923b-3e02b79d8429-kube-api-access-flspn" (OuterVolumeSpecName: "kube-api-access-flspn") pod "bf5bc050-6822-4de5-923b-3e02b79d8429" (UID: "bf5bc050-6822-4de5-923b-3e02b79d8429"). InnerVolumeSpecName "kube-api-access-flspn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:13:53 crc kubenswrapper[4806]: I1125 15:13:53.057681 4806 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf5bc050-6822-4de5-923b-3e02b79d8429-operator-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 15:13:53 crc kubenswrapper[4806]: I1125 15:13:53.057720 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flspn\" (UniqueName: \"kubernetes.io/projected/bf5bc050-6822-4de5-923b-3e02b79d8429-kube-api-access-flspn\") on node \"crc\" DevicePath \"\""
Nov 25 15:13:53 crc kubenswrapper[4806]: I1125 15:13:53.213156 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-0f2c-account-create-8xlqc" event={"ID":"bf5bc050-6822-4de5-923b-3e02b79d8429","Type":"ContainerDied","Data":"eb6ecf7e723493eaf650d4e2d8ededa5eb44f7bfff3960ddaec4f09c721936c0"}
Nov 25 15:13:53 crc kubenswrapper[4806]: I1125 15:13:53.213184 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-0f2c-account-create-8xlqc"
Nov 25 15:13:53 crc kubenswrapper[4806]: I1125 15:13:53.213200 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb6ecf7e723493eaf650d4e2d8ededa5eb44f7bfff3960ddaec4f09c721936c0"
Nov 25 15:13:53 crc kubenswrapper[4806]: I1125 15:13:53.221140 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d2f7-account-create-6rgcw"
Nov 25 15:13:53 crc kubenswrapper[4806]: I1125 15:13:53.221159 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d2f7-account-create-6rgcw" event={"ID":"94b13266-e80b-4462-b7fa-04b5043e53e1","Type":"ContainerDied","Data":"cc122d0e669149c56dfbf4e1f2781f105416d3f3f0c14855fff036ab6a7bce78"}
Nov 25 15:13:53 crc kubenswrapper[4806]: I1125 15:13:53.221199 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc122d0e669149c56dfbf4e1f2781f105416d3f3f0c14855fff036ab6a7bce78"
Nov 25 15:13:53 crc kubenswrapper[4806]: I1125 15:13:53.224350 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-wcr7b" event={"ID":"7a1a10de-31c3-4413-b032-d10713c953dc","Type":"ContainerDied","Data":"7d4a7f3dd07472f3ce3062ecf693fb8aec332e681058615fc555001717e26b6e"}
Nov 25 15:13:53 crc kubenswrapper[4806]: I1125 15:13:53.224373 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d4a7f3dd07472f3ce3062ecf693fb8aec332e681058615fc555001717e26b6e"
Nov 25 15:13:53 crc kubenswrapper[4806]: I1125 15:13:53.224390 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-wcr7b"
Nov 25 15:13:53 crc kubenswrapper[4806]: I1125 15:13:53.226951 4806 generic.go:334] "Generic (PLEG): container finished" podID="973c8ad5-1b21-4972-94ea-d0f4323db012" containerID="007c3d7c4479c3e54daabc30a491b68f01e37829f6df5622da6a3a767e77053b" exitCode=0
Nov 25 15:13:53 crc kubenswrapper[4806]: I1125 15:13:53.227012 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"973c8ad5-1b21-4972-94ea-d0f4323db012","Type":"ContainerDied","Data":"007c3d7c4479c3e54daabc30a491b68f01e37829f6df5622da6a3a767e77053b"}
Nov 25 15:13:54 crc kubenswrapper[4806]: I1125 15:13:54.181124 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/837cf2fb-8640-4ac3-ad91-84ff1dba54e6-etc-swift\") pod \"swift-storage-0\" (UID: \"837cf2fb-8640-4ac3-ad91-84ff1dba54e6\") " pod="openstack/swift-storage-0"
Nov 25 15:13:54 crc kubenswrapper[4806]: E1125 15:13:54.181647 4806 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Nov 25 15:13:54 crc kubenswrapper[4806]: E1125 15:13:54.181688 4806 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Nov 25 15:13:54 crc kubenswrapper[4806]: E1125 15:13:54.181754 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/837cf2fb-8640-4ac3-ad91-84ff1dba54e6-etc-swift podName:837cf2fb-8640-4ac3-ad91-84ff1dba54e6 nodeName:}" failed. No retries permitted until 2025-11-25 15:14:02.18173707 +0000 UTC m=+1274.833879481 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/837cf2fb-8640-4ac3-ad91-84ff1dba54e6-etc-swift") pod "swift-storage-0" (UID: "837cf2fb-8640-4ac3-ad91-84ff1dba54e6") : configmap "swift-ring-files" not found
Nov 25 15:13:54 crc kubenswrapper[4806]: I1125 15:13:54.238781 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wpqhp" event={"ID":"998fc00a-139c-4c9a-9765-a445527be5aa","Type":"ContainerStarted","Data":"6d45b8c1e3d8c641a6d16091bec7ce2f47fff105c105bccec460e0337f4b4409"}
Nov 25 15:13:54 crc kubenswrapper[4806]: I1125 15:13:54.241893 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"973c8ad5-1b21-4972-94ea-d0f4323db012","Type":"ContainerStarted","Data":"695e4a23d49efd364be9c42bd1fb0bb33b0cf8672424b953cac4023374d96669"}
Nov 25 15:13:54 crc kubenswrapper[4806]: I1125 15:13:54.242307 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:13:54 crc kubenswrapper[4806]: I1125 15:13:54.258114 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-wpqhp" podStartSLOduration=2.555650237 podStartE2EDuration="8.258097969s" podCreationTimestamp="2025-11-25 15:13:46 +0000 UTC" firstStartedPulling="2025-11-25 15:13:47.511656113 +0000 UTC m=+1260.163798524" lastFinishedPulling="2025-11-25 15:13:53.214103845 +0000 UTC m=+1265.866246256" observedRunningTime="2025-11-25 15:13:54.254391935 +0000 UTC m=+1266.906534346" watchObservedRunningTime="2025-11-25 15:13:54.258097969 +0000 UTC m=+1266.910240380"
Nov 25 15:13:54 crc kubenswrapper[4806]: I1125 15:13:54.282808 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=46.059661355 podStartE2EDuration="1m16.282767163s" podCreationTimestamp="2025-11-25 15:12:38 +0000 UTC" firstStartedPulling="2025-11-25 15:12:40.680459071 +0000 UTC m=+1193.332601482" lastFinishedPulling="2025-11-25 15:13:10.903564879 +0000 UTC m=+1223.555707290" observedRunningTime="2025-11-25 15:13:54.281094266 +0000 UTC m=+1266.933236697" watchObservedRunningTime="2025-11-25 15:13:54.282767163 +0000 UTC m=+1266.934909574"
Nov 25 15:13:55 crc kubenswrapper[4806]: I1125 15:13:55.416815 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-pxfdb"
Nov 25 15:13:55 crc kubenswrapper[4806]: I1125 15:13:55.488595 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-9wwsx"]
Nov 25 15:13:55 crc kubenswrapper[4806]: I1125 15:13:55.488850 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-9wwsx" podUID="78bdea31-bfb2-4f3f-b1ff-fb246b432b84" containerName="dnsmasq-dns" containerID="cri-o://4b6cce21d6f747655d917887ab2e5b003d1d6a4d4a9860af8ca1d4e0b544eab8" gracePeriod=10
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:56.175678 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-9wwsx"
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:56.269586 4806 generic.go:334] "Generic (PLEG): container finished" podID="78bdea31-bfb2-4f3f-b1ff-fb246b432b84" containerID="4b6cce21d6f747655d917887ab2e5b003d1d6a4d4a9860af8ca1d4e0b544eab8" exitCode=0
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:56.269628 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-9wwsx" event={"ID":"78bdea31-bfb2-4f3f-b1ff-fb246b432b84","Type":"ContainerDied","Data":"4b6cce21d6f747655d917887ab2e5b003d1d6a4d4a9860af8ca1d4e0b544eab8"}
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:56.269663 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-9wwsx" event={"ID":"78bdea31-bfb2-4f3f-b1ff-fb246b432b84","Type":"ContainerDied","Data":"9eca85cfab23c72fc26676d70317880db26c8211391c8d469c560b93fa1caaa8"}
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:56.269680 4806 scope.go:117] "RemoveContainer" containerID="4b6cce21d6f747655d917887ab2e5b003d1d6a4d4a9860af8ca1d4e0b544eab8"
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:56.269795 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-9wwsx"
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:56.299554 4806 scope.go:117] "RemoveContainer" containerID="2fe81ae0acafe634e0495f81ec6b88e2923839c13bbf438f49e970e0ff30382c"
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:56.326215 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78bdea31-bfb2-4f3f-b1ff-fb246b432b84-ovsdbserver-nb\") pod \"78bdea31-bfb2-4f3f-b1ff-fb246b432b84\" (UID: \"78bdea31-bfb2-4f3f-b1ff-fb246b432b84\") "
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:56.326288 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78bdea31-bfb2-4f3f-b1ff-fb246b432b84-config\") pod \"78bdea31-bfb2-4f3f-b1ff-fb246b432b84\" (UID: \"78bdea31-bfb2-4f3f-b1ff-fb246b432b84\") "
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:56.326434 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/78bdea31-bfb2-4f3f-b1ff-fb246b432b84-ovsdbserver-sb\") pod \"78bdea31-bfb2-4f3f-b1ff-fb246b432b84\" (UID: \"78bdea31-bfb2-4f3f-b1ff-fb246b432b84\") "
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:56.326459 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78bdea31-bfb2-4f3f-b1ff-fb246b432b84-dns-svc\") pod \"78bdea31-bfb2-4f3f-b1ff-fb246b432b84\" (UID: \"78bdea31-bfb2-4f3f-b1ff-fb246b432b84\") "
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:56.326552 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cg85h\" (UniqueName: \"kubernetes.io/projected/78bdea31-bfb2-4f3f-b1ff-fb246b432b84-kube-api-access-cg85h\") pod \"78bdea31-bfb2-4f3f-b1ff-fb246b432b84\" (UID: \"78bdea31-bfb2-4f3f-b1ff-fb246b432b84\") "
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:56.331107 4806 scope.go:117] "RemoveContainer" containerID="4b6cce21d6f747655d917887ab2e5b003d1d6a4d4a9860af8ca1d4e0b544eab8"
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:56.334330 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78bdea31-bfb2-4f3f-b1ff-fb246b432b84-kube-api-access-cg85h" (OuterVolumeSpecName: "kube-api-access-cg85h") pod "78bdea31-bfb2-4f3f-b1ff-fb246b432b84" (UID: "78bdea31-bfb2-4f3f-b1ff-fb246b432b84"). InnerVolumeSpecName "kube-api-access-cg85h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:13:59 crc kubenswrapper[4806]: E1125 15:13:56.334777 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b6cce21d6f747655d917887ab2e5b003d1d6a4d4a9860af8ca1d4e0b544eab8\": container with ID starting with 4b6cce21d6f747655d917887ab2e5b003d1d6a4d4a9860af8ca1d4e0b544eab8 not found: ID does not exist" containerID="4b6cce21d6f747655d917887ab2e5b003d1d6a4d4a9860af8ca1d4e0b544eab8"
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:56.334810 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b6cce21d6f747655d917887ab2e5b003d1d6a4d4a9860af8ca1d4e0b544eab8"} err="failed to get container status \"4b6cce21d6f747655d917887ab2e5b003d1d6a4d4a9860af8ca1d4e0b544eab8\": rpc error: code = NotFound desc = could not find container \"4b6cce21d6f747655d917887ab2e5b003d1d6a4d4a9860af8ca1d4e0b544eab8\": container with ID starting with 4b6cce21d6f747655d917887ab2e5b003d1d6a4d4a9860af8ca1d4e0b544eab8 not found: ID does not exist"
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:56.334832 4806 scope.go:117] "RemoveContainer" containerID="2fe81ae0acafe634e0495f81ec6b88e2923839c13bbf438f49e970e0ff30382c"
Nov 25 15:13:59 crc kubenswrapper[4806]: E1125 15:13:56.335173 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fe81ae0acafe634e0495f81ec6b88e2923839c13bbf438f49e970e0ff30382c\": container with ID starting with 2fe81ae0acafe634e0495f81ec6b88e2923839c13bbf438f49e970e0ff30382c not found: ID does not exist" containerID="2fe81ae0acafe634e0495f81ec6b88e2923839c13bbf438f49e970e0ff30382c"
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:56.335205 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fe81ae0acafe634e0495f81ec6b88e2923839c13bbf438f49e970e0ff30382c"} err="failed to get container status \"2fe81ae0acafe634e0495f81ec6b88e2923839c13bbf438f49e970e0ff30382c\": rpc error: code = NotFound desc = could not find container \"2fe81ae0acafe634e0495f81ec6b88e2923839c13bbf438f49e970e0ff30382c\": container with ID starting with 2fe81ae0acafe634e0495f81ec6b88e2923839c13bbf438f49e970e0ff30382c not found: ID does not exist"
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:56.381931 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78bdea31-bfb2-4f3f-b1ff-fb246b432b84-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "78bdea31-bfb2-4f3f-b1ff-fb246b432b84" (UID: "78bdea31-bfb2-4f3f-b1ff-fb246b432b84"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:56.384290 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78bdea31-bfb2-4f3f-b1ff-fb246b432b84-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "78bdea31-bfb2-4f3f-b1ff-fb246b432b84" (UID: "78bdea31-bfb2-4f3f-b1ff-fb246b432b84"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:56.387778 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78bdea31-bfb2-4f3f-b1ff-fb246b432b84-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "78bdea31-bfb2-4f3f-b1ff-fb246b432b84" (UID: "78bdea31-bfb2-4f3f-b1ff-fb246b432b84"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:56.394618 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78bdea31-bfb2-4f3f-b1ff-fb246b432b84-config" (OuterVolumeSpecName: "config") pod "78bdea31-bfb2-4f3f-b1ff-fb246b432b84" (UID: "78bdea31-bfb2-4f3f-b1ff-fb246b432b84"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:56.429135 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/78bdea31-bfb2-4f3f-b1ff-fb246b432b84-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:56.429162 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78bdea31-bfb2-4f3f-b1ff-fb246b432b84-config\") on node \"crc\" DevicePath \"\""
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:56.429173 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/78bdea31-bfb2-4f3f-b1ff-fb246b432b84-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:56.429182 4806 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/78bdea31-bfb2-4f3f-b1ff-fb246b432b84-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:56.429191 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cg85h\" (UniqueName: \"kubernetes.io/projected/78bdea31-bfb2-4f3f-b1ff-fb246b432b84-kube-api-access-cg85h\") on node \"crc\" DevicePath \"\""
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:56.603144 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-9wwsx"]
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:56.611603 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-9wwsx"]
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:57.424935 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="cdc49832-6f51-4954-ab25-3f84f6956d1f" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Nov 25 15:13:59 crc kubenswrapper[4806]: E1125 15:13:57.755477 4806 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a870706_cfbf_4cea_a993_238c06b56be3.slice\": RecentStats: unable to find data in memory cache]"
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:58.101163 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78bdea31-bfb2-4f3f-b1ff-fb246b432b84" path="/var/lib/kubelet/pods/78bdea31-bfb2-4f3f-b1ff-fb246b432b84/volumes"
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:58.272429 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-n88tp"]
Nov 25 15:13:59 crc kubenswrapper[4806]: E1125 15:13:58.273176 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78bdea31-bfb2-4f3f-b1ff-fb246b432b84" containerName="dnsmasq-dns"
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:58.273194 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="78bdea31-bfb2-4f3f-b1ff-fb246b432b84" containerName="dnsmasq-dns"
Nov 25 15:13:59 crc kubenswrapper[4806]: E1125 15:13:58.273206 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78bdea31-bfb2-4f3f-b1ff-fb246b432b84" containerName="init"
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:58.273214 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="78bdea31-bfb2-4f3f-b1ff-fb246b432b84" containerName="init"
Nov 25 15:13:59 crc kubenswrapper[4806]: E1125 15:13:58.273229 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59f31c89-0010-494d-a1d5-2db4958b10d6" containerName="mariadb-database-create"
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:58.273237 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="59f31c89-0010-494d-a1d5-2db4958b10d6" containerName="mariadb-database-create"
Nov 25 15:13:59 crc kubenswrapper[4806]: E1125 15:13:58.273253 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94b13266-e80b-4462-b7fa-04b5043e53e1" containerName="mariadb-account-create"
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:58.273260 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="94b13266-e80b-4462-b7fa-04b5043e53e1" containerName="mariadb-account-create"
Nov 25 15:13:59 crc kubenswrapper[4806]: E1125 15:13:58.273278 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a1a10de-31c3-4413-b032-d10713c953dc" containerName="mariadb-database-create"
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:58.273285 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a1a10de-31c3-4413-b032-d10713c953dc" containerName="mariadb-database-create"
Nov 25 15:13:59 crc kubenswrapper[4806]: E1125 15:13:58.273297 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce1e02da-f4bb-4165-b4fc-cf65955994ae" containerName="mariadb-database-create"
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:58.273304 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce1e02da-f4bb-4165-b4fc-cf65955994ae" containerName="mariadb-database-create"
Nov 25 15:13:59 crc kubenswrapper[4806]: E1125 15:13:58.273337 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf5bc050-6822-4de5-923b-3e02b79d8429" containerName="mariadb-account-create"
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:58.273344 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf5bc050-6822-4de5-923b-3e02b79d8429" containerName="mariadb-account-create"
Nov 25 15:13:59 crc kubenswrapper[4806]: E1125 15:13:58.273371 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5df1cd59-5e8a-49c9-af33-4547720713f0" containerName="mariadb-account-create"
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:58.273378 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="5df1cd59-5e8a-49c9-af33-4547720713f0" containerName="mariadb-account-create"
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:58.281107 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="78bdea31-bfb2-4f3f-b1ff-fb246b432b84" containerName="dnsmasq-dns"
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:58.281182 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="94b13266-e80b-4462-b7fa-04b5043e53e1" containerName="mariadb-account-create"
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:58.281212 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="59f31c89-0010-494d-a1d5-2db4958b10d6" containerName="mariadb-database-create"
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:58.281231 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf5bc050-6822-4de5-923b-3e02b79d8429" containerName="mariadb-account-create"
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:58.281267 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a1a10de-31c3-4413-b032-d10713c953dc" containerName="mariadb-database-create"
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:58.281341 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce1e02da-f4bb-4165-b4fc-cf65955994ae" containerName="mariadb-database-create"
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:58.281360 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="5df1cd59-5e8a-49c9-af33-4547720713f0" containerName="mariadb-account-create"
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:58.282694 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-n88tp"
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:58.290155 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:58.290219 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-s7t8r"
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:58.324355 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-n88tp"]
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:58.468763 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gncc\" (UniqueName: \"kubernetes.io/projected/e7e521a6-108d-45db-ad10-42e394a9cd1a-kube-api-access-9gncc\") pod \"glance-db-sync-n88tp\" (UID: \"e7e521a6-108d-45db-ad10-42e394a9cd1a\") " pod="openstack/glance-db-sync-n88tp"
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:58.468848 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7e521a6-108d-45db-ad10-42e394a9cd1a-combined-ca-bundle\") pod \"glance-db-sync-n88tp\" (UID: \"e7e521a6-108d-45db-ad10-42e394a9cd1a\") " pod="openstack/glance-db-sync-n88tp"
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:58.468982 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7e521a6-108d-45db-ad10-42e394a9cd1a-config-data\") pod \"glance-db-sync-n88tp\" (UID: \"e7e521a6-108d-45db-ad10-42e394a9cd1a\") " pod="openstack/glance-db-sync-n88tp"
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:58.469020 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e7e521a6-108d-45db-ad10-42e394a9cd1a-db-sync-config-data\") pod \"glance-db-sync-n88tp\" (UID: \"e7e521a6-108d-45db-ad10-42e394a9cd1a\") " pod="openstack/glance-db-sync-n88tp"
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:58.570491 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7e521a6-108d-45db-ad10-42e394a9cd1a-config-data\") pod \"glance-db-sync-n88tp\" (UID: \"e7e521a6-108d-45db-ad10-42e394a9cd1a\") " pod="openstack/glance-db-sync-n88tp"
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:58.570551 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e7e521a6-108d-45db-ad10-42e394a9cd1a-db-sync-config-data\") pod \"glance-db-sync-n88tp\" (UID: \"e7e521a6-108d-45db-ad10-42e394a9cd1a\") " pod="openstack/glance-db-sync-n88tp"
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:58.570621 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gncc\" (UniqueName: \"kubernetes.io/projected/e7e521a6-108d-45db-ad10-42e394a9cd1a-kube-api-access-9gncc\") pod \"glance-db-sync-n88tp\" (UID: \"e7e521a6-108d-45db-ad10-42e394a9cd1a\") " pod="openstack/glance-db-sync-n88tp"
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:58.570666 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7e521a6-108d-45db-ad10-42e394a9cd1a-combined-ca-bundle\") pod \"glance-db-sync-n88tp\" (UID: \"e7e521a6-108d-45db-ad10-42e394a9cd1a\") " pod="openstack/glance-db-sync-n88tp"
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:58.574722 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7e521a6-108d-45db-ad10-42e394a9cd1a-config-data\") pod \"glance-db-sync-n88tp\" (UID: \"e7e521a6-108d-45db-ad10-42e394a9cd1a\") " pod="openstack/glance-db-sync-n88tp"
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:58.574802 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7e521a6-108d-45db-ad10-42e394a9cd1a-combined-ca-bundle\") pod \"glance-db-sync-n88tp\" (UID: \"e7e521a6-108d-45db-ad10-42e394a9cd1a\") " pod="openstack/glance-db-sync-n88tp"
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:58.575128 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e7e521a6-108d-45db-ad10-42e394a9cd1a-db-sync-config-data\") pod \"glance-db-sync-n88tp\" (UID: \"e7e521a6-108d-45db-ad10-42e394a9cd1a\") " pod="openstack/glance-db-sync-n88tp"
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:58.598062 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gncc\" (UniqueName: \"kubernetes.io/projected/e7e521a6-108d-45db-ad10-42e394a9cd1a-kube-api-access-9gncc\") pod \"glance-db-sync-n88tp\" (UID: \"e7e521a6-108d-45db-ad10-42e394a9cd1a\") " pod="openstack/glance-db-sync-n88tp"
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:58.614061 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-n88tp"
Nov 25 15:13:59 crc kubenswrapper[4806]: I1125 15:13:59.808261 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-n88tp"]
Nov 25 15:13:59 crc kubenswrapper[4806]: W1125 15:13:59.813096 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7e521a6_108d_45db_ad10_42e394a9cd1a.slice/crio-e05e684aca7a339946aefdafee726782d0134fc19edb029a8b1c5414d6970d54 WatchSource:0}: Error finding container e05e684aca7a339946aefdafee726782d0134fc19edb029a8b1c5414d6970d54: Status 404 returned error can't find the container with id e05e684aca7a339946aefdafee726782d0134fc19edb029a8b1c5414d6970d54
Nov 25 15:14:00 crc kubenswrapper[4806]: I1125 15:14:00.309126 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"01548134-90ee-4d44-ab5e-60a0933ee1ea","Type":"ContainerStarted","Data":"4d5bc304aa7e5be3307eb1f0963b066092393032eafa5eb17824f238ab5681e9"}
Nov 25 15:14:00 crc kubenswrapper[4806]: I1125 15:14:00.310382 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-n88tp" event={"ID":"e7e521a6-108d-45db-ad10-42e394a9cd1a","Type":"ContainerStarted","Data":"e05e684aca7a339946aefdafee726782d0134fc19edb029a8b1c5414d6970d54"}
Nov 25 15:14:00 crc kubenswrapper[4806]: I1125 15:14:00.341008 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=8.328791111 podStartE2EDuration="1m15.340988475s" podCreationTimestamp="2025-11-25 15:12:45 +0000 UTC" firstStartedPulling="2025-11-25 15:12:52.112845799 +0000 UTC m=+1204.764988210" lastFinishedPulling="2025-11-25 15:13:59.125043163 +0000 UTC m=+1271.777185574" observedRunningTime="2025-11-25 15:14:00.338602808 +0000 UTC m=+1272.990745219" watchObservedRunningTime="2025-11-25 15:14:00.340988475 +0000 UTC m=+1272.993130876"
Nov 25 15:14:01 crc kubenswrapper[4806]: I1125 15:14:01.320821 4806 generic.go:334] "Generic (PLEG): container finished" podID="05ade21d-01af-4a3c-a82a-83b3861244ec" containerID="75b09608f37c2be3772760339ed3e063996e9a92d36e7fb7ee974e5892679540" exitCode=0
Nov 25 15:14:01 crc kubenswrapper[4806]: I1125 15:14:01.321007 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"05ade21d-01af-4a3c-a82a-83b3861244ec","Type":"ContainerDied","Data":"75b09608f37c2be3772760339ed3e063996e9a92d36e7fb7ee974e5892679540"}
Nov 25 15:14:02 crc kubenswrapper[4806]: I1125 15:14:02.214574 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Nov 25 15:14:02 crc kubenswrapper[4806]: I1125 15:14:02.214937 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Nov 25 15:14:02 crc kubenswrapper[4806]: I1125 15:14:02.217587 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Nov 25 15:14:02 crc kubenswrapper[4806]: I1125 15:14:02.238070 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/837cf2fb-8640-4ac3-ad91-84ff1dba54e6-etc-swift\") pod \"swift-storage-0\" (UID: \"837cf2fb-8640-4ac3-ad91-84ff1dba54e6\") " pod="openstack/swift-storage-0"
Nov 25 15:14:02 crc kubenswrapper[4806]: E1125 15:14:02.238191 4806 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Nov 25 15:14:02 crc kubenswrapper[4806]: E1125 15:14:02.238218 4806 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Nov 25 15:14:02 crc kubenswrapper[4806]: E1125 15:14:02.238277 4806 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/837cf2fb-8640-4ac3-ad91-84ff1dba54e6-etc-swift podName:837cf2fb-8640-4ac3-ad91-84ff1dba54e6 nodeName:}" failed. No retries permitted until 2025-11-25 15:14:18.238258855 +0000 UTC m=+1290.890401266 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/837cf2fb-8640-4ac3-ad91-84ff1dba54e6-etc-swift") pod "swift-storage-0" (UID: "837cf2fb-8640-4ac3-ad91-84ff1dba54e6") : configmap "swift-ring-files" not found
Nov 25 15:14:02 crc kubenswrapper[4806]: I1125 15:14:02.341343 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"05ade21d-01af-4a3c-a82a-83b3861244ec","Type":"ContainerStarted","Data":"608fef6ec2b49a6ff023781e28b23752c86e3af0b3fcc1ce92cc9bc1b9b06049"}
Nov 25 15:14:02 crc kubenswrapper[4806]: I1125 15:14:02.341941 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Nov 25 15:14:02 crc kubenswrapper[4806]: I1125 15:14:02.343126 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Nov 25 15:14:02 crc kubenswrapper[4806]: I1125 15:14:02.370954 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=48.191582792 podStartE2EDuration="1m25.370937759s" podCreationTimestamp="2025-11-25 15:12:37 +0000 UTC" firstStartedPulling="2025-11-25 15:12:40.113729735 +0000 UTC m=+1192.765872136" lastFinishedPulling="2025-11-25 15:13:17.293084692 +0000 UTC m=+1229.945227103" observedRunningTime="2025-11-25 15:14:02.362545982 +0000 UTC m=+1275.014688413" watchObservedRunningTime="2025-11-25 15:14:02.370937759 +0000 UTC m=+1275.023080170"
Nov 25 15:14:03 crc kubenswrapper[4806]: I1125 15:14:03.353203 4806 generic.go:334] "Generic (PLEG): container finished" podID="998fc00a-139c-4c9a-9765-a445527be5aa" containerID="6d45b8c1e3d8c641a6d16091bec7ce2f47fff105c105bccec460e0337f4b4409" exitCode=0
Nov 25 15:14:03 crc kubenswrapper[4806]: I1125 15:14:03.353373 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wpqhp" event={"ID":"998fc00a-139c-4c9a-9765-a445527be5aa","Type":"ContainerDied","Data":"6d45b8c1e3d8c641a6d16091bec7ce2f47fff105c105bccec460e0337f4b4409"}
Nov 25 15:14:04 crc kubenswrapper[4806]: I1125 15:14:04.017878 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-svmbm"
Nov 25 15:14:04 crc kubenswrapper[4806]: I1125 15:14:04.020407 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-svmbm"
Nov 25 15:14:04 crc kubenswrapper[4806]: I1125 15:14:04.282551 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-l6mv2-config-pns6x"]
Nov 25 15:14:04 crc kubenswrapper[4806]: I1125 15:14:04.290666 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-l6mv2-config-pns6x"
Nov 25 15:14:04 crc kubenswrapper[4806]: I1125 15:14:04.293043 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Nov 25 15:14:04 crc kubenswrapper[4806]: I1125 15:14:04.294901 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-l6mv2-config-pns6x"]
Nov 25 15:14:04 crc kubenswrapper[4806]: I1125 15:14:04.377716 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b86a1a43-24d2-4ee1-b666-f43b062cc0d0-var-run-ovn\") pod \"ovn-controller-l6mv2-config-pns6x\" (UID: \"b86a1a43-24d2-4ee1-b666-f43b062cc0d0\") " pod="openstack/ovn-controller-l6mv2-config-pns6x"
Nov 25 15:14:04 crc kubenswrapper[4806]: I1125 15:14:04.377828 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b86a1a43-24d2-4ee1-b666-f43b062cc0d0-scripts\") pod \"ovn-controller-l6mv2-config-pns6x\" (UID: \"b86a1a43-24d2-4ee1-b666-f43b062cc0d0\") " pod="openstack/ovn-controller-l6mv2-config-pns6x"
Nov 25 15:14:04 crc kubenswrapper[4806]: I1125 15:14:04.377859 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b86a1a43-24d2-4ee1-b666-f43b062cc0d0-var-log-ovn\") pod \"ovn-controller-l6mv2-config-pns6x\" (UID: \"b86a1a43-24d2-4ee1-b666-f43b062cc0d0\") " pod="openstack/ovn-controller-l6mv2-config-pns6x"
Nov 25 15:14:04 crc kubenswrapper[4806]: I1125 15:14:04.377896 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b86a1a43-24d2-4ee1-b666-f43b062cc0d0-additional-scripts\") pod \"ovn-controller-l6mv2-config-pns6x\" (UID: \"b86a1a43-24d2-4ee1-b666-f43b062cc0d0\") " pod="openstack/ovn-controller-l6mv2-config-pns6x"
Nov 25 15:14:04 crc kubenswrapper[4806]: I1125 15:14:04.377979 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9qvd\" (UniqueName: \"kubernetes.io/projected/b86a1a43-24d2-4ee1-b666-f43b062cc0d0-kube-api-access-h9qvd\") pod \"ovn-controller-l6mv2-config-pns6x\" (UID: \"b86a1a43-24d2-4ee1-b666-f43b062cc0d0\") " pod="openstack/ovn-controller-l6mv2-config-pns6x"
Nov 25 15:14:04 crc kubenswrapper[4806]: I1125 15:14:04.378059 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b86a1a43-24d2-4ee1-b666-f43b062cc0d0-var-run\") pod \"ovn-controller-l6mv2-config-pns6x\" (UID: \"b86a1a43-24d2-4ee1-b666-f43b062cc0d0\") " pod="openstack/ovn-controller-l6mv2-config-pns6x"
Nov 25 15:14:04 crc kubenswrapper[4806]: I1125 15:14:04.481381 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b86a1a43-24d2-4ee1-b666-f43b062cc0d0-var-run\") pod \"ovn-controller-l6mv2-config-pns6x\" (UID: \"b86a1a43-24d2-4ee1-b666-f43b062cc0d0\") " pod="openstack/ovn-controller-l6mv2-config-pns6x"
Nov 25 15:14:04 crc kubenswrapper[4806]: I1125 15:14:04.481467 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b86a1a43-24d2-4ee1-b666-f43b062cc0d0-var-run-ovn\") pod \"ovn-controller-l6mv2-config-pns6x\" (UID: \"b86a1a43-24d2-4ee1-b666-f43b062cc0d0\") " pod="openstack/ovn-controller-l6mv2-config-pns6x"
Nov 25 15:14:04 crc kubenswrapper[4806]: I1125 15:14:04.481502 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b86a1a43-24d2-4ee1-b666-f43b062cc0d0-scripts\") pod \"ovn-controller-l6mv2-config-pns6x\" (UID: \"b86a1a43-24d2-4ee1-b666-f43b062cc0d0\") " pod="openstack/ovn-controller-l6mv2-config-pns6x"
Nov 25 15:14:04 crc kubenswrapper[4806]: I1125 15:14:04.481518 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b86a1a43-24d2-4ee1-b666-f43b062cc0d0-var-log-ovn\") pod \"ovn-controller-l6mv2-config-pns6x\" (UID: \"b86a1a43-24d2-4ee1-b666-f43b062cc0d0\") " pod="openstack/ovn-controller-l6mv2-config-pns6x"
Nov 25 15:14:04 crc kubenswrapper[4806]: I1125 15:14:04.481543 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b86a1a43-24d2-4ee1-b666-f43b062cc0d0-additional-scripts\") pod \"ovn-controller-l6mv2-config-pns6x\" (UID: \"b86a1a43-24d2-4ee1-b666-f43b062cc0d0\") " pod="openstack/ovn-controller-l6mv2-config-pns6x"
Nov 25 15:14:04 crc kubenswrapper[4806]: I1125 15:14:04.481602 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9qvd\" (UniqueName: \"kubernetes.io/projected/b86a1a43-24d2-4ee1-b666-f43b062cc0d0-kube-api-access-h9qvd\") pod \"ovn-controller-l6mv2-config-pns6x\" (UID: \"b86a1a43-24d2-4ee1-b666-f43b062cc0d0\") " pod="openstack/ovn-controller-l6mv2-config-pns6x"
Nov 25 15:14:04 crc kubenswrapper[4806]: I1125 15:14:04.481745 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b86a1a43-24d2-4ee1-b666-f43b062cc0d0-var-run-ovn\") pod \"ovn-controller-l6mv2-config-pns6x\" (UID: \"b86a1a43-24d2-4ee1-b666-f43b062cc0d0\") " pod="openstack/ovn-controller-l6mv2-config-pns6x"
Nov 25 15:14:04 crc kubenswrapper[4806]: I1125 15:14:04.481788 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b86a1a43-24d2-4ee1-b666-f43b062cc0d0-var-log-ovn\") pod \"ovn-controller-l6mv2-config-pns6x\" (UID: \"b86a1a43-24d2-4ee1-b666-f43b062cc0d0\") " pod="openstack/ovn-controller-l6mv2-config-pns6x"
Nov 25 15:14:04 crc kubenswrapper[4806]: I1125 15:14:04.482751 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b86a1a43-24d2-4ee1-b666-f43b062cc0d0-additional-scripts\") pod \"ovn-controller-l6mv2-config-pns6x\" (UID: \"b86a1a43-24d2-4ee1-b666-f43b062cc0d0\") " pod="openstack/ovn-controller-l6mv2-config-pns6x"
Nov 25 15:14:04 crc kubenswrapper[4806]: I1125 15:14:04.482823 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b86a1a43-24d2-4ee1-b666-f43b062cc0d0-var-run\") pod \"ovn-controller-l6mv2-config-pns6x\" (UID: \"b86a1a43-24d2-4ee1-b666-f43b062cc0d0\") " pod="openstack/ovn-controller-l6mv2-config-pns6x"
Nov 25 15:14:04 crc kubenswrapper[4806]: I1125 15:14:04.483870 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b86a1a43-24d2-4ee1-b666-f43b062cc0d0-scripts\") pod \"ovn-controller-l6mv2-config-pns6x\" (UID: \"b86a1a43-24d2-4ee1-b666-f43b062cc0d0\") " pod="openstack/ovn-controller-l6mv2-config-pns6x"
Nov 25 15:14:04 crc kubenswrapper[4806]: I1125 15:14:04.516222 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9qvd\" (UniqueName: \"kubernetes.io/projected/b86a1a43-24d2-4ee1-b666-f43b062cc0d0-kube-api-access-h9qvd\") pod \"ovn-controller-l6mv2-config-pns6x\" (UID: \"b86a1a43-24d2-4ee1-b666-f43b062cc0d0\") " pod="openstack/ovn-controller-l6mv2-config-pns6x"
Nov 25 15:14:04 crc kubenswrapper[4806]: I1125 15:14:04.620818 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-l6mv2-config-pns6x"
Nov 25 15:14:04 crc kubenswrapper[4806]: I1125 15:14:04.840240 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-wpqhp"
Nov 25 15:14:04 crc kubenswrapper[4806]: I1125 15:14:04.995373 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnwfk\" (UniqueName: \"kubernetes.io/projected/998fc00a-139c-4c9a-9765-a445527be5aa-kube-api-access-gnwfk\") pod \"998fc00a-139c-4c9a-9765-a445527be5aa\" (UID: \"998fc00a-139c-4c9a-9765-a445527be5aa\") "
Nov 25 15:14:04 crc kubenswrapper[4806]: I1125 15:14:04.995681 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/998fc00a-139c-4c9a-9765-a445527be5aa-swiftconf\") pod \"998fc00a-139c-4c9a-9765-a445527be5aa\" (UID: \"998fc00a-139c-4c9a-9765-a445527be5aa\") "
Nov 25 15:14:04 crc kubenswrapper[4806]: I1125 15:14:04.995745 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/998fc00a-139c-4c9a-9765-a445527be5aa-scripts\") pod \"998fc00a-139c-4c9a-9765-a445527be5aa\" (UID: \"998fc00a-139c-4c9a-9765-a445527be5aa\") "
Nov 25 15:14:04 crc kubenswrapper[4806]: I1125 15:14:04.995832 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/998fc00a-139c-4c9a-9765-a445527be5aa-ring-data-devices\") pod \"998fc00a-139c-4c9a-9765-a445527be5aa\" (UID: \"998fc00a-139c-4c9a-9765-a445527be5aa\") "
Nov 25 15:14:04 crc kubenswrapper[4806]: I1125 15:14:04.995862 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/998fc00a-139c-4c9a-9765-a445527be5aa-combined-ca-bundle\") pod \"998fc00a-139c-4c9a-9765-a445527be5aa\" (UID: \"998fc00a-139c-4c9a-9765-a445527be5aa\") "
Nov 25 15:14:04 crc kubenswrapper[4806]: I1125 15:14:04.995909 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/998fc00a-139c-4c9a-9765-a445527be5aa-dispersionconf\") pod \"998fc00a-139c-4c9a-9765-a445527be5aa\" (UID: \"998fc00a-139c-4c9a-9765-a445527be5aa\") "
Nov 25 15:14:04 crc kubenswrapper[4806]: I1125 15:14:04.995938 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/998fc00a-139c-4c9a-9765-a445527be5aa-etc-swift\") pod \"998fc00a-139c-4c9a-9765-a445527be5aa\" (UID: \"998fc00a-139c-4c9a-9765-a445527be5aa\") "
Nov 25 15:14:04 crc kubenswrapper[4806]: I1125 15:14:04.996958 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume
"kubernetes.io/configmap/998fc00a-139c-4c9a-9765-a445527be5aa-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "998fc00a-139c-4c9a-9765-a445527be5aa" (UID: "998fc00a-139c-4c9a-9765-a445527be5aa"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:14:04 crc kubenswrapper[4806]: I1125 15:14:04.997452 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/998fc00a-139c-4c9a-9765-a445527be5aa-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "998fc00a-139c-4c9a-9765-a445527be5aa" (UID: "998fc00a-139c-4c9a-9765-a445527be5aa"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:14:05 crc kubenswrapper[4806]: I1125 15:14:05.003544 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/998fc00a-139c-4c9a-9765-a445527be5aa-kube-api-access-gnwfk" (OuterVolumeSpecName: "kube-api-access-gnwfk") pod "998fc00a-139c-4c9a-9765-a445527be5aa" (UID: "998fc00a-139c-4c9a-9765-a445527be5aa"). InnerVolumeSpecName "kube-api-access-gnwfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:14:05 crc kubenswrapper[4806]: I1125 15:14:05.031375 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/998fc00a-139c-4c9a-9765-a445527be5aa-scripts" (OuterVolumeSpecName: "scripts") pod "998fc00a-139c-4c9a-9765-a445527be5aa" (UID: "998fc00a-139c-4c9a-9765-a445527be5aa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:14:05 crc kubenswrapper[4806]: I1125 15:14:05.033842 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/998fc00a-139c-4c9a-9765-a445527be5aa-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "998fc00a-139c-4c9a-9765-a445527be5aa" (UID: "998fc00a-139c-4c9a-9765-a445527be5aa"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:14:05 crc kubenswrapper[4806]: I1125 15:14:05.052612 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/998fc00a-139c-4c9a-9765-a445527be5aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "998fc00a-139c-4c9a-9765-a445527be5aa" (UID: "998fc00a-139c-4c9a-9765-a445527be5aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:14:05 crc kubenswrapper[4806]: I1125 15:14:05.070366 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/998fc00a-139c-4c9a-9765-a445527be5aa-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "998fc00a-139c-4c9a-9765-a445527be5aa" (UID: "998fc00a-139c-4c9a-9765-a445527be5aa"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:14:05 crc kubenswrapper[4806]: I1125 15:14:05.098212 4806 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/998fc00a-139c-4c9a-9765-a445527be5aa-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:05 crc kubenswrapper[4806]: I1125 15:14:05.098260 4806 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/998fc00a-139c-4c9a-9765-a445527be5aa-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:05 crc kubenswrapper[4806]: I1125 15:14:05.098274 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnwfk\" (UniqueName: \"kubernetes.io/projected/998fc00a-139c-4c9a-9765-a445527be5aa-kube-api-access-gnwfk\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:05 crc kubenswrapper[4806]: I1125 15:14:05.098286 4806 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/998fc00a-139c-4c9a-9765-a445527be5aa-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:05 crc kubenswrapper[4806]: I1125 15:14:05.098299 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/998fc00a-139c-4c9a-9765-a445527be5aa-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:05 crc kubenswrapper[4806]: I1125 15:14:05.098310 4806 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/998fc00a-139c-4c9a-9765-a445527be5aa-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:05 crc kubenswrapper[4806]: I1125 15:14:05.098336 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/998fc00a-139c-4c9a-9765-a445527be5aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:05 crc kubenswrapper[4806]: I1125 15:14:05.216007 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-l6mv2-config-pns6x"] Nov 25 15:14:05 crc kubenswrapper[4806]: I1125 15:14:05.294438 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 25 15:14:05 crc kubenswrapper[4806]: I1125 15:14:05.389012 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-wpqhp" Nov 25 15:14:05 crc kubenswrapper[4806]: I1125 15:14:05.389007 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wpqhp" event={"ID":"998fc00a-139c-4c9a-9765-a445527be5aa","Type":"ContainerDied","Data":"fdd4a87db855f09c5c89a0d4c2dbf19d8a95b11109b819365befc78e0ca9bdf0"} Nov 25 15:14:05 crc kubenswrapper[4806]: I1125 15:14:05.389159 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdd4a87db855f09c5c89a0d4c2dbf19d8a95b11109b819365befc78e0ca9bdf0" Nov 25 15:14:05 crc kubenswrapper[4806]: I1125 15:14:05.394211 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="01548134-90ee-4d44-ab5e-60a0933ee1ea" containerName="prometheus" containerID="cri-o://0d57b6d1d7f00d4efafcc844f9e47b3d1b13953c476ac6a3517aa59b27d2b037" gracePeriod=600 Nov 25 15:14:05 crc kubenswrapper[4806]: I1125 15:14:05.394583 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-l6mv2-config-pns6x" event={"ID":"b86a1a43-24d2-4ee1-b666-f43b062cc0d0","Type":"ContainerStarted","Data":"554c85720be1eaefc6dd5ea6e1ef8fd9bf4309bfbf0be26c8837ef4fd882bd13"} Nov 25 15:14:05 crc kubenswrapper[4806]: I1125 15:14:05.394943 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="01548134-90ee-4d44-ab5e-60a0933ee1ea" containerName="thanos-sidecar" containerID="cri-o://4d5bc304aa7e5be3307eb1f0963b066092393032eafa5eb17824f238ab5681e9" gracePeriod=600 Nov 25 15:14:05 crc kubenswrapper[4806]: I1125 15:14:05.395013 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="01548134-90ee-4d44-ab5e-60a0933ee1ea" containerName="config-reloader" containerID="cri-o://33e3c73f4472b9ae679e6a13346a9d19821a680be812b89632631c6415783184" gracePeriod=600 Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.113815 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.218893 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/01548134-90ee-4d44-ab5e-60a0933ee1ea-config-out\") pod \"01548134-90ee-4d44-ab5e-60a0933ee1ea\" (UID: \"01548134-90ee-4d44-ab5e-60a0933ee1ea\") " Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.218977 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/01548134-90ee-4d44-ab5e-60a0933ee1ea-prometheus-metric-storage-rulefiles-0\") pod \"01548134-90ee-4d44-ab5e-60a0933ee1ea\" (UID: \"01548134-90ee-4d44-ab5e-60a0933ee1ea\") " Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.219018 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/01548134-90ee-4d44-ab5e-60a0933ee1ea-tls-assets\") pod \"01548134-90ee-4d44-ab5e-60a0933ee1ea\" (UID: \"01548134-90ee-4d44-ab5e-60a0933ee1ea\") " Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.219047 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/01548134-90ee-4d44-ab5e-60a0933ee1ea-thanos-prometheus-http-client-file\") pod \"01548134-90ee-4d44-ab5e-60a0933ee1ea\" (UID: \"01548134-90ee-4d44-ab5e-60a0933ee1ea\") " Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.219141 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/01548134-90ee-4d44-ab5e-60a0933ee1ea-config\") pod \"01548134-90ee-4d44-ab5e-60a0933ee1ea\" (UID: \"01548134-90ee-4d44-ab5e-60a0933ee1ea\") " Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.219273 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5055b2b2-b3b6-41c9-9ffd-93c9ef2d6287\") pod \"01548134-90ee-4d44-ab5e-60a0933ee1ea\" (UID: \"01548134-90ee-4d44-ab5e-60a0933ee1ea\") " Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.219355 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/01548134-90ee-4d44-ab5e-60a0933ee1ea-web-config\") pod \"01548134-90ee-4d44-ab5e-60a0933ee1ea\" (UID: \"01548134-90ee-4d44-ab5e-60a0933ee1ea\") " Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.219429 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4x5mx\" (UniqueName: \"kubernetes.io/projected/01548134-90ee-4d44-ab5e-60a0933ee1ea-kube-api-access-4x5mx\") pod \"01548134-90ee-4d44-ab5e-60a0933ee1ea\" (UID: \"01548134-90ee-4d44-ab5e-60a0933ee1ea\") " Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.220436 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01548134-90ee-4d44-ab5e-60a0933ee1ea-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "01548134-90ee-4d44-ab5e-60a0933ee1ea" (UID: "01548134-90ee-4d44-ab5e-60a0933ee1ea"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.230953 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01548134-90ee-4d44-ab5e-60a0933ee1ea-config" (OuterVolumeSpecName: "config") pod "01548134-90ee-4d44-ab5e-60a0933ee1ea" (UID: "01548134-90ee-4d44-ab5e-60a0933ee1ea"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.231677 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01548134-90ee-4d44-ab5e-60a0933ee1ea-kube-api-access-4x5mx" (OuterVolumeSpecName: "kube-api-access-4x5mx") pod "01548134-90ee-4d44-ab5e-60a0933ee1ea" (UID: "01548134-90ee-4d44-ab5e-60a0933ee1ea"). InnerVolumeSpecName "kube-api-access-4x5mx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.233448 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01548134-90ee-4d44-ab5e-60a0933ee1ea-config-out" (OuterVolumeSpecName: "config-out") pod "01548134-90ee-4d44-ab5e-60a0933ee1ea" (UID: "01548134-90ee-4d44-ab5e-60a0933ee1ea"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.235077 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01548134-90ee-4d44-ab5e-60a0933ee1ea-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "01548134-90ee-4d44-ab5e-60a0933ee1ea" (UID: "01548134-90ee-4d44-ab5e-60a0933ee1ea"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.237138 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01548134-90ee-4d44-ab5e-60a0933ee1ea-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "01548134-90ee-4d44-ab5e-60a0933ee1ea" (UID: "01548134-90ee-4d44-ab5e-60a0933ee1ea"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.260195 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5055b2b2-b3b6-41c9-9ffd-93c9ef2d6287" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "01548134-90ee-4d44-ab5e-60a0933ee1ea" (UID: "01548134-90ee-4d44-ab5e-60a0933ee1ea"). InnerVolumeSpecName "pvc-5055b2b2-b3b6-41c9-9ffd-93c9ef2d6287". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.298173 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01548134-90ee-4d44-ab5e-60a0933ee1ea-web-config" (OuterVolumeSpecName: "web-config") pod "01548134-90ee-4d44-ab5e-60a0933ee1ea" (UID: "01548134-90ee-4d44-ab5e-60a0933ee1ea"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.321495 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/01548134-90ee-4d44-ab5e-60a0933ee1ea-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.321552 4806 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-5055b2b2-b3b6-41c9-9ffd-93c9ef2d6287\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5055b2b2-b3b6-41c9-9ffd-93c9ef2d6287\") on node \"crc\" " Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.321565 4806 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/01548134-90ee-4d44-ab5e-60a0933ee1ea-web-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.321578 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4x5mx\" (UniqueName: \"kubernetes.io/projected/01548134-90ee-4d44-ab5e-60a0933ee1ea-kube-api-access-4x5mx\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.321588 4806 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/01548134-90ee-4d44-ab5e-60a0933ee1ea-config-out\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.321598 4806 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/01548134-90ee-4d44-ab5e-60a0933ee1ea-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.321609 4806 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/01548134-90ee-4d44-ab5e-60a0933ee1ea-tls-assets\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.321619 4806 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/01548134-90ee-4d44-ab5e-60a0933ee1ea-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.345525 4806 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.345686 4806 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-5055b2b2-b3b6-41c9-9ffd-93c9ef2d6287" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5055b2b2-b3b6-41c9-9ffd-93c9ef2d6287") on node "crc" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.408786 4806 generic.go:334] "Generic (PLEG): container finished" podID="b86a1a43-24d2-4ee1-b666-f43b062cc0d0" containerID="b71e9474472d6f2e5186906b1e3ed18ae3942a9cf4b1f91e59d25c9a9cc86e36" exitCode=0 Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.408905 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-l6mv2-config-pns6x" event={"ID":"b86a1a43-24d2-4ee1-b666-f43b062cc0d0","Type":"ContainerDied","Data":"b71e9474472d6f2e5186906b1e3ed18ae3942a9cf4b1f91e59d25c9a9cc86e36"} Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.412162 4806 generic.go:334] "Generic (PLEG): container finished" podID="01548134-90ee-4d44-ab5e-60a0933ee1ea" containerID="4d5bc304aa7e5be3307eb1f0963b066092393032eafa5eb17824f238ab5681e9" exitCode=0 Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.412188 4806 generic.go:334] "Generic (PLEG): container finished" podID="01548134-90ee-4d44-ab5e-60a0933ee1ea" containerID="33e3c73f4472b9ae679e6a13346a9d19821a680be812b89632631c6415783184" exitCode=0 Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.412196 4806 generic.go:334] "Generic (PLEG): container finished" podID="01548134-90ee-4d44-ab5e-60a0933ee1ea" containerID="0d57b6d1d7f00d4efafcc844f9e47b3d1b13953c476ac6a3517aa59b27d2b037" exitCode=0 Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.412216 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"01548134-90ee-4d44-ab5e-60a0933ee1ea","Type":"ContainerDied","Data":"4d5bc304aa7e5be3307eb1f0963b066092393032eafa5eb17824f238ab5681e9"} Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.412240 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"01548134-90ee-4d44-ab5e-60a0933ee1ea","Type":"ContainerDied","Data":"33e3c73f4472b9ae679e6a13346a9d19821a680be812b89632631c6415783184"} Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.412249 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"01548134-90ee-4d44-ab5e-60a0933ee1ea","Type":"ContainerDied","Data":"0d57b6d1d7f00d4efafcc844f9e47b3d1b13953c476ac6a3517aa59b27d2b037"} Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.412258 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"01548134-90ee-4d44-ab5e-60a0933ee1ea","Type":"ContainerDied","Data":"2138b165c5f647f03214a9ef259bdfa1b649fd7b209066f559c823dd4a0c371c"} Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.412246 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.412274 4806 scope.go:117] "RemoveContainer" containerID="4d5bc304aa7e5be3307eb1f0963b066092393032eafa5eb17824f238ab5681e9" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.423097 4806 reconciler_common.go:293] "Volume detached for volume \"pvc-5055b2b2-b3b6-41c9-9ffd-93c9ef2d6287\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5055b2b2-b3b6-41c9-9ffd-93c9ef2d6287\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.438368 4806 scope.go:117] "RemoveContainer" containerID="33e3c73f4472b9ae679e6a13346a9d19821a680be812b89632631c6415783184" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.466973 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.473544 4806 scope.go:117] "RemoveContainer" containerID="0d57b6d1d7f00d4efafcc844f9e47b3d1b13953c476ac6a3517aa59b27d2b037" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.477829 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.492353 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 25 15:14:06 crc kubenswrapper[4806]: E1125 15:14:06.492767 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01548134-90ee-4d44-ab5e-60a0933ee1ea" containerName="prometheus" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.492785 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="01548134-90ee-4d44-ab5e-60a0933ee1ea" containerName="prometheus" Nov 25 15:14:06 crc kubenswrapper[4806]: E1125 15:14:06.492802 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01548134-90ee-4d44-ab5e-60a0933ee1ea" containerName="thanos-sidecar" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.492809 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="01548134-90ee-4d44-ab5e-60a0933ee1ea" containerName="thanos-sidecar" Nov 25 15:14:06 crc kubenswrapper[4806]: E1125 15:14:06.492821 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01548134-90ee-4d44-ab5e-60a0933ee1ea" containerName="config-reloader" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.492828 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="01548134-90ee-4d44-ab5e-60a0933ee1ea" containerName="config-reloader" Nov 25 15:14:06 crc kubenswrapper[4806]: E1125 15:14:06.492842 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01548134-90ee-4d44-ab5e-60a0933ee1ea" containerName="init-config-reloader" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.492848 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="01548134-90ee-4d44-ab5e-60a0933ee1ea" containerName="init-config-reloader" Nov 25 15:14:06 crc kubenswrapper[4806]: E1125 15:14:06.492863 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="998fc00a-139c-4c9a-9765-a445527be5aa" containerName="swift-ring-rebalance" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.492868 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="998fc00a-139c-4c9a-9765-a445527be5aa" containerName="swift-ring-rebalance" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.493048 4806 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="01548134-90ee-4d44-ab5e-60a0933ee1ea" containerName="prometheus" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.493065 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="01548134-90ee-4d44-ab5e-60a0933ee1ea" containerName="thanos-sidecar" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.493074 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="998fc00a-139c-4c9a-9765-a445527be5aa" containerName="swift-ring-rebalance" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.493084 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="01548134-90ee-4d44-ab5e-60a0933ee1ea" containerName="config-reloader" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.495155 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.497194 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.499021 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.499511 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.499709 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-8x9zw" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.501346 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.502391 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.505546 4806 scope.go:117] "RemoveContainer" containerID="c474c7b47d58100702d7c63f63d32548b20df2d884ef8a139b51efe4f42cbe75" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.510040 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.518569 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.607439 4806 scope.go:117] "RemoveContainer" containerID="4d5bc304aa7e5be3307eb1f0963b066092393032eafa5eb17824f238ab5681e9" Nov 25 15:14:06 crc kubenswrapper[4806]: E1125 15:14:06.609592 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d5bc304aa7e5be3307eb1f0963b066092393032eafa5eb17824f238ab5681e9\": container with ID starting with 4d5bc304aa7e5be3307eb1f0963b066092393032eafa5eb17824f238ab5681e9 not found: ID does not exist" containerID="4d5bc304aa7e5be3307eb1f0963b066092393032eafa5eb17824f238ab5681e9" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.609742 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d5bc304aa7e5be3307eb1f0963b066092393032eafa5eb17824f238ab5681e9"} err="failed to get container status \"4d5bc304aa7e5be3307eb1f0963b066092393032eafa5eb17824f238ab5681e9\": rpc error: code = NotFound desc = could not find container 
\"4d5bc304aa7e5be3307eb1f0963b066092393032eafa5eb17824f238ab5681e9\": container with ID starting with 4d5bc304aa7e5be3307eb1f0963b066092393032eafa5eb17824f238ab5681e9 not found: ID does not exist" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.609845 4806 scope.go:117] "RemoveContainer" containerID="33e3c73f4472b9ae679e6a13346a9d19821a680be812b89632631c6415783184" Nov 25 15:14:06 crc kubenswrapper[4806]: E1125 15:14:06.610691 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33e3c73f4472b9ae679e6a13346a9d19821a680be812b89632631c6415783184\": container with ID starting with 33e3c73f4472b9ae679e6a13346a9d19821a680be812b89632631c6415783184 not found: ID does not exist" containerID="33e3c73f4472b9ae679e6a13346a9d19821a680be812b89632631c6415783184" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.610734 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33e3c73f4472b9ae679e6a13346a9d19821a680be812b89632631c6415783184"} err="failed to get container status \"33e3c73f4472b9ae679e6a13346a9d19821a680be812b89632631c6415783184\": rpc error: code = NotFound desc = could not find container \"33e3c73f4472b9ae679e6a13346a9d19821a680be812b89632631c6415783184\": container with ID starting with 33e3c73f4472b9ae679e6a13346a9d19821a680be812b89632631c6415783184 not found: ID does not exist" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.610763 4806 scope.go:117] "RemoveContainer" containerID="0d57b6d1d7f00d4efafcc844f9e47b3d1b13953c476ac6a3517aa59b27d2b037" Nov 25 15:14:06 crc kubenswrapper[4806]: E1125 15:14:06.614989 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d57b6d1d7f00d4efafcc844f9e47b3d1b13953c476ac6a3517aa59b27d2b037\": container with ID starting with 0d57b6d1d7f00d4efafcc844f9e47b3d1b13953c476ac6a3517aa59b27d2b037 not found: ID does not exist" containerID="0d57b6d1d7f00d4efafcc844f9e47b3d1b13953c476ac6a3517aa59b27d2b037" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.615018 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d57b6d1d7f00d4efafcc844f9e47b3d1b13953c476ac6a3517aa59b27d2b037"} err="failed to get container status \"0d57b6d1d7f00d4efafcc844f9e47b3d1b13953c476ac6a3517aa59b27d2b037\": rpc error: code = NotFound desc = could not find container \"0d57b6d1d7f00d4efafcc844f9e47b3d1b13953c476ac6a3517aa59b27d2b037\": container with ID starting with 0d57b6d1d7f00d4efafcc844f9e47b3d1b13953c476ac6a3517aa59b27d2b037 not found: ID does not exist" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.615037 4806 scope.go:117] "RemoveContainer" containerID="c474c7b47d58100702d7c63f63d32548b20df2d884ef8a139b51efe4f42cbe75" Nov 25 15:14:06 crc kubenswrapper[4806]: E1125 15:14:06.615598 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c474c7b47d58100702d7c63f63d32548b20df2d884ef8a139b51efe4f42cbe75\": container with ID starting with c474c7b47d58100702d7c63f63d32548b20df2d884ef8a139b51efe4f42cbe75 not found: ID does not exist" containerID="c474c7b47d58100702d7c63f63d32548b20df2d884ef8a139b51efe4f42cbe75" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.615625 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c474c7b47d58100702d7c63f63d32548b20df2d884ef8a139b51efe4f42cbe75"} 
err="failed to get container status \"c474c7b47d58100702d7c63f63d32548b20df2d884ef8a139b51efe4f42cbe75\": rpc error: code = NotFound desc = could not find container \"c474c7b47d58100702d7c63f63d32548b20df2d884ef8a139b51efe4f42cbe75\": container with ID starting with c474c7b47d58100702d7c63f63d32548b20df2d884ef8a139b51efe4f42cbe75 not found: ID does not exist" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.615643 4806 scope.go:117] "RemoveContainer" containerID="4d5bc304aa7e5be3307eb1f0963b066092393032eafa5eb17824f238ab5681e9" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.615955 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d5bc304aa7e5be3307eb1f0963b066092393032eafa5eb17824f238ab5681e9"} err="failed to get container status \"4d5bc304aa7e5be3307eb1f0963b066092393032eafa5eb17824f238ab5681e9\": rpc error: code = NotFound desc = could not find container \"4d5bc304aa7e5be3307eb1f0963b066092393032eafa5eb17824f238ab5681e9\": container with ID starting with 4d5bc304aa7e5be3307eb1f0963b066092393032eafa5eb17824f238ab5681e9 not found: ID does not exist" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.615973 4806 scope.go:117] "RemoveContainer" containerID="33e3c73f4472b9ae679e6a13346a9d19821a680be812b89632631c6415783184" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.616281 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33e3c73f4472b9ae679e6a13346a9d19821a680be812b89632631c6415783184"} err="failed to get container status \"33e3c73f4472b9ae679e6a13346a9d19821a680be812b89632631c6415783184\": rpc error: code = NotFound desc = could not find container \"33e3c73f4472b9ae679e6a13346a9d19821a680be812b89632631c6415783184\": container with ID starting with 33e3c73f4472b9ae679e6a13346a9d19821a680be812b89632631c6415783184 not found: ID does not exist" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.616300 4806 scope.go:117] "RemoveContainer" containerID="0d57b6d1d7f00d4efafcc844f9e47b3d1b13953c476ac6a3517aa59b27d2b037" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.616712 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d57b6d1d7f00d4efafcc844f9e47b3d1b13953c476ac6a3517aa59b27d2b037"} err="failed to get container status \"0d57b6d1d7f00d4efafcc844f9e47b3d1b13953c476ac6a3517aa59b27d2b037\": rpc error: code = NotFound desc = could not find container \"0d57b6d1d7f00d4efafcc844f9e47b3d1b13953c476ac6a3517aa59b27d2b037\": container with ID starting with 0d57b6d1d7f00d4efafcc844f9e47b3d1b13953c476ac6a3517aa59b27d2b037 not found: ID does not exist" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.616730 4806 scope.go:117] "RemoveContainer" containerID="c474c7b47d58100702d7c63f63d32548b20df2d884ef8a139b51efe4f42cbe75" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.617034 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c474c7b47d58100702d7c63f63d32548b20df2d884ef8a139b51efe4f42cbe75"} err="failed to get container status \"c474c7b47d58100702d7c63f63d32548b20df2d884ef8a139b51efe4f42cbe75\": rpc error: code = NotFound desc = could not find container \"c474c7b47d58100702d7c63f63d32548b20df2d884ef8a139b51efe4f42cbe75\": container with ID starting with c474c7b47d58100702d7c63f63d32548b20df2d884ef8a139b51efe4f42cbe75 not found: ID does not exist" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.617051 4806 scope.go:117] "RemoveContainer" 
containerID="4d5bc304aa7e5be3307eb1f0963b066092393032eafa5eb17824f238ab5681e9" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.617259 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d5bc304aa7e5be3307eb1f0963b066092393032eafa5eb17824f238ab5681e9"} err="failed to get container status \"4d5bc304aa7e5be3307eb1f0963b066092393032eafa5eb17824f238ab5681e9\": rpc error: code = NotFound desc = could not find container \"4d5bc304aa7e5be3307eb1f0963b066092393032eafa5eb17824f238ab5681e9\": container with ID starting with 4d5bc304aa7e5be3307eb1f0963b066092393032eafa5eb17824f238ab5681e9 not found: ID does not exist" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.617276 4806 scope.go:117] "RemoveContainer" containerID="33e3c73f4472b9ae679e6a13346a9d19821a680be812b89632631c6415783184" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.617711 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33e3c73f4472b9ae679e6a13346a9d19821a680be812b89632631c6415783184"} err="failed to get container status \"33e3c73f4472b9ae679e6a13346a9d19821a680be812b89632631c6415783184\": rpc error: code = NotFound desc = could not find container \"33e3c73f4472b9ae679e6a13346a9d19821a680be812b89632631c6415783184\": container with ID starting with 33e3c73f4472b9ae679e6a13346a9d19821a680be812b89632631c6415783184 not found: ID does not exist" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.617754 4806 scope.go:117] "RemoveContainer" containerID="0d57b6d1d7f00d4efafcc844f9e47b3d1b13953c476ac6a3517aa59b27d2b037" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.618066 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d57b6d1d7f00d4efafcc844f9e47b3d1b13953c476ac6a3517aa59b27d2b037"} err="failed to get container status \"0d57b6d1d7f00d4efafcc844f9e47b3d1b13953c476ac6a3517aa59b27d2b037\": rpc error: code = NotFound desc = could not find container \"0d57b6d1d7f00d4efafcc844f9e47b3d1b13953c476ac6a3517aa59b27d2b037\": container with ID starting with 0d57b6d1d7f00d4efafcc844f9e47b3d1b13953c476ac6a3517aa59b27d2b037 not found: ID does not exist" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.618086 4806 scope.go:117] "RemoveContainer" containerID="c474c7b47d58100702d7c63f63d32548b20df2d884ef8a139b51efe4f42cbe75" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.618284 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c474c7b47d58100702d7c63f63d32548b20df2d884ef8a139b51efe4f42cbe75"} err="failed to get container status \"c474c7b47d58100702d7c63f63d32548b20df2d884ef8a139b51efe4f42cbe75\": rpc error: code = NotFound desc = could not find container \"c474c7b47d58100702d7c63f63d32548b20df2d884ef8a139b51efe4f42cbe75\": container with ID starting with c474c7b47d58100702d7c63f63d32548b20df2d884ef8a139b51efe4f42cbe75 not found: ID does not exist" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.630113 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/aafcef1f-4988-49d1-88f0-47a44d8f18fc-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"aafcef1f-4988-49d1-88f0-47a44d8f18fc\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.630260 
4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5055b2b2-b3b6-41c9-9ffd-93c9ef2d6287\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5055b2b2-b3b6-41c9-9ffd-93c9ef2d6287\") pod \"prometheus-metric-storage-0\" (UID: \"aafcef1f-4988-49d1-88f0-47a44d8f18fc\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.630492 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9slxt\" (UniqueName: \"kubernetes.io/projected/aafcef1f-4988-49d1-88f0-47a44d8f18fc-kube-api-access-9slxt\") pod \"prometheus-metric-storage-0\" (UID: \"aafcef1f-4988-49d1-88f0-47a44d8f18fc\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.630573 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/aafcef1f-4988-49d1-88f0-47a44d8f18fc-config\") pod \"prometheus-metric-storage-0\" (UID: \"aafcef1f-4988-49d1-88f0-47a44d8f18fc\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.630797 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/aafcef1f-4988-49d1-88f0-47a44d8f18fc-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"aafcef1f-4988-49d1-88f0-47a44d8f18fc\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.631035 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aafcef1f-4988-49d1-88f0-47a44d8f18fc-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"aafcef1f-4988-49d1-88f0-47a44d8f18fc\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.631173 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/aafcef1f-4988-49d1-88f0-47a44d8f18fc-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"aafcef1f-4988-49d1-88f0-47a44d8f18fc\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.631345 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/aafcef1f-4988-49d1-88f0-47a44d8f18fc-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"aafcef1f-4988-49d1-88f0-47a44d8f18fc\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.631510 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/aafcef1f-4988-49d1-88f0-47a44d8f18fc-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"aafcef1f-4988-49d1-88f0-47a44d8f18fc\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.631633 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/aafcef1f-4988-49d1-88f0-47a44d8f18fc-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"aafcef1f-4988-49d1-88f0-47a44d8f18fc\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.631767 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/aafcef1f-4988-49d1-88f0-47a44d8f18fc-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"aafcef1f-4988-49d1-88f0-47a44d8f18fc\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.742019 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/aafcef1f-4988-49d1-88f0-47a44d8f18fc-config\") pod \"prometheus-metric-storage-0\" (UID: \"aafcef1f-4988-49d1-88f0-47a44d8f18fc\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.742124 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/aafcef1f-4988-49d1-88f0-47a44d8f18fc-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"aafcef1f-4988-49d1-88f0-47a44d8f18fc\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.742853 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aafcef1f-4988-49d1-88f0-47a44d8f18fc-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"aafcef1f-4988-49d1-88f0-47a44d8f18fc\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.742902 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/aafcef1f-4988-49d1-88f0-47a44d8f18fc-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"aafcef1f-4988-49d1-88f0-47a44d8f18fc\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.742936 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/aafcef1f-4988-49d1-88f0-47a44d8f18fc-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"aafcef1f-4988-49d1-88f0-47a44d8f18fc\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.743034 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/aafcef1f-4988-49d1-88f0-47a44d8f18fc-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"aafcef1f-4988-49d1-88f0-47a44d8f18fc\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.743149 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/aafcef1f-4988-49d1-88f0-47a44d8f18fc-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"aafcef1f-4988-49d1-88f0-47a44d8f18fc\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 
15:14:06.743200 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/aafcef1f-4988-49d1-88f0-47a44d8f18fc-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"aafcef1f-4988-49d1-88f0-47a44d8f18fc\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.743273 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/aafcef1f-4988-49d1-88f0-47a44d8f18fc-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"aafcef1f-4988-49d1-88f0-47a44d8f18fc\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.743303 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5055b2b2-b3b6-41c9-9ffd-93c9ef2d6287\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5055b2b2-b3b6-41c9-9ffd-93c9ef2d6287\") pod \"prometheus-metric-storage-0\" (UID: \"aafcef1f-4988-49d1-88f0-47a44d8f18fc\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.743412 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9slxt\" (UniqueName: \"kubernetes.io/projected/aafcef1f-4988-49d1-88f0-47a44d8f18fc-kube-api-access-9slxt\") pod \"prometheus-metric-storage-0\" (UID: \"aafcef1f-4988-49d1-88f0-47a44d8f18fc\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.744959 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/aafcef1f-4988-49d1-88f0-47a44d8f18fc-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"aafcef1f-4988-49d1-88f0-47a44d8f18fc\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.746236 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/aafcef1f-4988-49d1-88f0-47a44d8f18fc-config\") pod \"prometheus-metric-storage-0\" (UID: \"aafcef1f-4988-49d1-88f0-47a44d8f18fc\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.747581 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/aafcef1f-4988-49d1-88f0-47a44d8f18fc-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"aafcef1f-4988-49d1-88f0-47a44d8f18fc\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.747953 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/aafcef1f-4988-49d1-88f0-47a44d8f18fc-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"aafcef1f-4988-49d1-88f0-47a44d8f18fc\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.748686 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/aafcef1f-4988-49d1-88f0-47a44d8f18fc-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"aafcef1f-4988-49d1-88f0-47a44d8f18fc\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.750535 4806 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.750570 4806 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5055b2b2-b3b6-41c9-9ffd-93c9ef2d6287\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5055b2b2-b3b6-41c9-9ffd-93c9ef2d6287\") pod \"prometheus-metric-storage-0\" (UID: \"aafcef1f-4988-49d1-88f0-47a44d8f18fc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b3a8672825276a13a5527ac11d1dc07a9dde209d1a0c9593ce9ca59149f844e0/globalmount\"" pod="openstack/prometheus-metric-storage-0" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.750984 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/aafcef1f-4988-49d1-88f0-47a44d8f18fc-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"aafcef1f-4988-49d1-88f0-47a44d8f18fc\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.753602 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/aafcef1f-4988-49d1-88f0-47a44d8f18fc-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"aafcef1f-4988-49d1-88f0-47a44d8f18fc\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.753731 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/aafcef1f-4988-49d1-88f0-47a44d8f18fc-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"aafcef1f-4988-49d1-88f0-47a44d8f18fc\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.766415 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/aafcef1f-4988-49d1-88f0-47a44d8f18fc-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"aafcef1f-4988-49d1-88f0-47a44d8f18fc\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.770393 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9slxt\" (UniqueName: \"kubernetes.io/projected/aafcef1f-4988-49d1-88f0-47a44d8f18fc-kube-api-access-9slxt\") pod \"prometheus-metric-storage-0\" (UID: \"aafcef1f-4988-49d1-88f0-47a44d8f18fc\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.781115 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5055b2b2-b3b6-41c9-9ffd-93c9ef2d6287\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5055b2b2-b3b6-41c9-9ffd-93c9ef2d6287\") pod \"prometheus-metric-storage-0\" (UID: \"aafcef1f-4988-49d1-88f0-47a44d8f18fc\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:14:06 crc kubenswrapper[4806]: I1125 15:14:06.814342 4806 util.go:30] "No sandbox 
Nov 25 15:14:07 crc kubenswrapper[4806]: I1125 15:14:07.271623 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Nov 25 15:14:07 crc kubenswrapper[4806]: I1125 15:14:07.428408 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="cdc49832-6f51-4954-ab25-3f84f6956d1f" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Nov 25 15:14:07 crc kubenswrapper[4806]: E1125 15:14:07.963663 4806 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a870706_cfbf_4cea_a993_238c06b56be3.slice\": RecentStats: unable to find data in memory cache]"
Nov 25 15:14:08 crc kubenswrapper[4806]: I1125 15:14:08.103054 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01548134-90ee-4d44-ab5e-60a0933ee1ea" path="/var/lib/kubelet/pods/01548134-90ee-4d44-ab5e-60a0933ee1ea/volumes"
Nov 25 15:14:08 crc kubenswrapper[4806]: I1125 15:14:08.952941 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-l6mv2"
Nov 25 15:14:09 crc kubenswrapper[4806]: I1125 15:14:09.904636 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:14:17 crc kubenswrapper[4806]: I1125 15:14:17.426775 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="cdc49832-6f51-4954-ab25-3f84f6956d1f" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Nov 25 15:14:18 crc kubenswrapper[4806]: E1125 15:14:18.187990 4806 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a870706_cfbf_4cea_a993_238c06b56be3.slice\": RecentStats: unable to find data in memory cache]"
Nov 25 15:14:18 crc kubenswrapper[4806]: I1125 15:14:18.323299 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/837cf2fb-8640-4ac3-ad91-84ff1dba54e6-etc-swift\") pod \"swift-storage-0\" (UID: \"837cf2fb-8640-4ac3-ad91-84ff1dba54e6\") " pod="openstack/swift-storage-0"
Nov 25 15:14:18 crc kubenswrapper[4806]: I1125 15:14:18.330236 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/837cf2fb-8640-4ac3-ad91-84ff1dba54e6-etc-swift\") pod \"swift-storage-0\" (UID: \"837cf2fb-8640-4ac3-ad91-84ff1dba54e6\") " pod="openstack/swift-storage-0"
Nov 25 15:14:18 crc kubenswrapper[4806]: I1125 15:14:18.376987 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Nov 25 15:14:18 crc kubenswrapper[4806]: I1125 15:14:18.934388 4806 patch_prober.go:28] interesting pod/machine-config-daemon-kclf8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 15:14:18 crc kubenswrapper[4806]: I1125 15:14:18.934449 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 15:14:18 crc kubenswrapper[4806]: I1125 15:14:18.934495 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kclf8"
Nov 25 15:14:18 crc kubenswrapper[4806]: I1125 15:14:18.935248 4806 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"75eea6826a6ffacea752085907b10e49f430f92ba1940f02d0b4f30e4a305fc4"} pod="openshift-machine-config-operator/machine-config-daemon-kclf8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 25 15:14:18 crc kubenswrapper[4806]: I1125 15:14:18.935324 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" containerID="cri-o://75eea6826a6ffacea752085907b10e49f430f92ba1940f02d0b4f30e4a305fc4" gracePeriod=600
Nov 25 15:14:19 crc kubenswrapper[4806]: I1125 15:14:19.496587 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="05ade21d-01af-4a3c-a82a-83b3861244ec" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused"
Nov 25 15:14:20 crc kubenswrapper[4806]: I1125 15:14:20.555273 4806 generic.go:334] "Generic (PLEG): container finished" podID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerID="75eea6826a6ffacea752085907b10e49f430f92ba1940f02d0b4f30e4a305fc4" exitCode=0
Nov 25 15:14:20 crc kubenswrapper[4806]: I1125 15:14:20.555355 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" event={"ID":"39baff20-1e9a-48b1-8872-155c5ad5931d","Type":"ContainerDied","Data":"75eea6826a6ffacea752085907b10e49f430f92ba1940f02d0b4f30e4a305fc4"}
Nov 25 15:14:20 crc kubenswrapper[4806]: I1125 15:14:20.555428 4806 scope.go:117] "RemoveContainer" containerID="83d1d99b89679065a33ab9c018ccbf4f6cc67e15cf7be7b0e62af90abdf246e5"
Nov 25 15:14:20 crc kubenswrapper[4806]: W1125 15:14:20.760617 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaafcef1f_4988_49d1_88f0_47a44d8f18fc.slice/crio-b1e9c6c5755a71a10f2a813ef1c3b672a17f95bd60fa1ac4bcb3f68aeb237cae WatchSource:0}: Error finding container b1e9c6c5755a71a10f2a813ef1c3b672a17f95bd60fa1ac4bcb3f68aeb237cae: Status 404 returned error can't find the container with id b1e9c6c5755a71a10f2a813ef1c3b672a17f95bd60fa1ac4bcb3f68aeb237cae
Nov 25 15:14:20 crc kubenswrapper[4806]: I1125 15:14:20.908911 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-l6mv2-config-pns6x"
Nov 25 15:14:21 crc kubenswrapper[4806]: I1125 15:14:21.012602 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b86a1a43-24d2-4ee1-b666-f43b062cc0d0-scripts\") pod \"b86a1a43-24d2-4ee1-b666-f43b062cc0d0\" (UID: \"b86a1a43-24d2-4ee1-b666-f43b062cc0d0\") "
Nov 25 15:14:21 crc kubenswrapper[4806]: I1125 15:14:21.012638 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b86a1a43-24d2-4ee1-b666-f43b062cc0d0-var-run\") pod \"b86a1a43-24d2-4ee1-b666-f43b062cc0d0\" (UID: \"b86a1a43-24d2-4ee1-b666-f43b062cc0d0\") "
Nov 25 15:14:21 crc kubenswrapper[4806]: I1125 15:14:21.012695 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b86a1a43-24d2-4ee1-b666-f43b062cc0d0-var-run-ovn\") pod \"b86a1a43-24d2-4ee1-b666-f43b062cc0d0\" (UID: \"b86a1a43-24d2-4ee1-b666-f43b062cc0d0\") "
Nov 25 15:14:21 crc kubenswrapper[4806]: I1125 15:14:21.012778 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9qvd\" (UniqueName: \"kubernetes.io/projected/b86a1a43-24d2-4ee1-b666-f43b062cc0d0-kube-api-access-h9qvd\") pod \"b86a1a43-24d2-4ee1-b666-f43b062cc0d0\" (UID: \"b86a1a43-24d2-4ee1-b666-f43b062cc0d0\") "
Nov 25 15:14:21 crc kubenswrapper[4806]: I1125 15:14:21.012810 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b86a1a43-24d2-4ee1-b666-f43b062cc0d0-var-log-ovn\") pod \"b86a1a43-24d2-4ee1-b666-f43b062cc0d0\" (UID: \"b86a1a43-24d2-4ee1-b666-f43b062cc0d0\") "
Nov 25 15:14:21 crc kubenswrapper[4806]: I1125 15:14:21.012841 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b86a1a43-24d2-4ee1-b666-f43b062cc0d0-additional-scripts\") pod \"b86a1a43-24d2-4ee1-b666-f43b062cc0d0\" (UID: \"b86a1a43-24d2-4ee1-b666-f43b062cc0d0\") "
Nov 25 15:14:21 crc kubenswrapper[4806]: I1125 15:14:21.013393 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b86a1a43-24d2-4ee1-b666-f43b062cc0d0-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "b86a1a43-24d2-4ee1-b666-f43b062cc0d0" (UID: "b86a1a43-24d2-4ee1-b666-f43b062cc0d0"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 25 15:14:21 crc kubenswrapper[4806]: I1125 15:14:21.013737 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b86a1a43-24d2-4ee1-b666-f43b062cc0d0-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "b86a1a43-24d2-4ee1-b666-f43b062cc0d0" (UID: "b86a1a43-24d2-4ee1-b666-f43b062cc0d0"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 25 15:14:21 crc kubenswrapper[4806]: I1125 15:14:21.013789 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b86a1a43-24d2-4ee1-b666-f43b062cc0d0-var-run" (OuterVolumeSpecName: "var-run") pod "b86a1a43-24d2-4ee1-b666-f43b062cc0d0" (UID: "b86a1a43-24d2-4ee1-b666-f43b062cc0d0"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 25 15:14:21 crc kubenswrapper[4806]: I1125 15:14:21.014165 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b86a1a43-24d2-4ee1-b666-f43b062cc0d0-scripts" (OuterVolumeSpecName: "scripts") pod "b86a1a43-24d2-4ee1-b666-f43b062cc0d0" (UID: "b86a1a43-24d2-4ee1-b666-f43b062cc0d0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:14:21 crc kubenswrapper[4806]: I1125 15:14:21.014810 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b86a1a43-24d2-4ee1-b666-f43b062cc0d0-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "b86a1a43-24d2-4ee1-b666-f43b062cc0d0" (UID: "b86a1a43-24d2-4ee1-b666-f43b062cc0d0"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:14:21 crc kubenswrapper[4806]: I1125 15:14:21.032206 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b86a1a43-24d2-4ee1-b666-f43b062cc0d0-kube-api-access-h9qvd" (OuterVolumeSpecName: "kube-api-access-h9qvd") pod "b86a1a43-24d2-4ee1-b666-f43b062cc0d0" (UID: "b86a1a43-24d2-4ee1-b666-f43b062cc0d0"). InnerVolumeSpecName "kube-api-access-h9qvd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:14:21 crc kubenswrapper[4806]: I1125 15:14:21.115292 4806 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b86a1a43-24d2-4ee1-b666-f43b062cc0d0-var-log-ovn\") on node \"crc\" DevicePath \"\""
Nov 25 15:14:21 crc kubenswrapper[4806]: I1125 15:14:21.115344 4806 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b86a1a43-24d2-4ee1-b666-f43b062cc0d0-additional-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 15:14:21 crc kubenswrapper[4806]: I1125 15:14:21.115356 4806 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b86a1a43-24d2-4ee1-b666-f43b062cc0d0-var-run\") on node \"crc\" DevicePath \"\""
Nov 25 15:14:21 crc kubenswrapper[4806]: I1125 15:14:21.115365 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b86a1a43-24d2-4ee1-b666-f43b062cc0d0-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 15:14:21 crc kubenswrapper[4806]: I1125 15:14:21.115374 4806 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b86a1a43-24d2-4ee1-b666-f43b062cc0d0-var-run-ovn\") on node \"crc\" DevicePath \"\""
Nov 25 15:14:21 crc kubenswrapper[4806]: I1125 15:14:21.115384 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9qvd\" (UniqueName: \"kubernetes.io/projected/b86a1a43-24d2-4ee1-b666-f43b062cc0d0-kube-api-access-h9qvd\") on node \"crc\" DevicePath \"\""
Nov 25 15:14:21 crc kubenswrapper[4806]: I1125 15:14:21.380576 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Nov 25 15:14:21 crc kubenswrapper[4806]: W1125 15:14:21.383460 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod837cf2fb_8640_4ac3_ad91_84ff1dba54e6.slice/crio-7d213b8947d7b37e49c0163b45a846242d18b025d2574895cf9b3ba78e32ac3f WatchSource:0}: Error finding container 7d213b8947d7b37e49c0163b45a846242d18b025d2574895cf9b3ba78e32ac3f: Status 404 returned error can't find the container with id 7d213b8947d7b37e49c0163b45a846242d18b025d2574895cf9b3ba78e32ac3f
Nov 25 15:14:21 crc kubenswrapper[4806]: I1125 15:14:21.569306 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"aafcef1f-4988-49d1-88f0-47a44d8f18fc","Type":"ContainerStarted","Data":"b1e9c6c5755a71a10f2a813ef1c3b672a17f95bd60fa1ac4bcb3f68aeb237cae"}
Nov 25 15:14:21 crc kubenswrapper[4806]: I1125 15:14:21.572827 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-l6mv2-config-pns6x" event={"ID":"b86a1a43-24d2-4ee1-b666-f43b062cc0d0","Type":"ContainerDied","Data":"554c85720be1eaefc6dd5ea6e1ef8fd9bf4309bfbf0be26c8837ef4fd882bd13"}
Nov 25 15:14:21 crc kubenswrapper[4806]: I1125 15:14:21.572876 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="554c85720be1eaefc6dd5ea6e1ef8fd9bf4309bfbf0be26c8837ef4fd882bd13"
Nov 25 15:14:21 crc kubenswrapper[4806]: I1125 15:14:21.572952 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-l6mv2-config-pns6x"
Nov 25 15:14:21 crc kubenswrapper[4806]: I1125 15:14:21.575493 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"837cf2fb-8640-4ac3-ad91-84ff1dba54e6","Type":"ContainerStarted","Data":"7d213b8947d7b37e49c0163b45a846242d18b025d2574895cf9b3ba78e32ac3f"}
Nov 25 15:14:22 crc kubenswrapper[4806]: I1125 15:14:22.006896 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-l6mv2-config-pns6x"]
Nov 25 15:14:22 crc kubenswrapper[4806]: I1125 15:14:22.013991 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-l6mv2-config-pns6x"]
Nov 25 15:14:22 crc kubenswrapper[4806]: I1125 15:14:22.099922 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b86a1a43-24d2-4ee1-b666-f43b062cc0d0" path="/var/lib/kubelet/pods/b86a1a43-24d2-4ee1-b666-f43b062cc0d0/volumes"
Nov 25 15:14:25 crc kubenswrapper[4806]: I1125 15:14:25.614134 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" event={"ID":"39baff20-1e9a-48b1-8872-155c5ad5931d","Type":"ContainerStarted","Data":"e869f8a9a3bee9d5f6a66c81937d296e815282493a93356c044af918f3b7bdf1"}
Nov 25 15:14:26 crc kubenswrapper[4806]: E1125 15:14:26.412292 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified"
Nov 25 15:14:26 crc kubenswrapper[4806]: E1125 15:14:26.412516 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9gncc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-n88tp_openstack(e7e521a6-108d-45db-ad10-42e394a9cd1a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 25 15:14:26 crc kubenswrapper[4806]: E1125 15:14:26.413740 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-n88tp" podUID="e7e521a6-108d-45db-ad10-42e394a9cd1a"
Nov 25 15:14:26 crc kubenswrapper[4806]: E1125 15:14:26.622827 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-n88tp" podUID="e7e521a6-108d-45db-ad10-42e394a9cd1a"
Nov 25 15:14:27 crc kubenswrapper[4806]: I1125 15:14:27.425695 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-ingester-0"
Nov 25 15:14:27 crc kubenswrapper[4806]: I1125 15:14:27.632756 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"aafcef1f-4988-49d1-88f0-47a44d8f18fc","Type":"ContainerStarted","Data":"1d9ece32eb4ed3b1825ec1c23aa4f81acee080be47946111247f8946670a3393"}
Nov 25 15:14:28 crc kubenswrapper[4806]: E1125 15:14:28.446386 4806 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a870706_cfbf_4cea_a993_238c06b56be3.slice\": RecentStats: unable to find data in memory cache]"
Nov 25 15:14:28 crc kubenswrapper[4806]: I1125 15:14:28.645105 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"837cf2fb-8640-4ac3-ad91-84ff1dba54e6","Type":"ContainerStarted","Data":"5ca2934feef579d1193d9d13a06fff7f5b743c76b2a3187471c37da8818a4888"}
Nov 25 15:14:28 crc kubenswrapper[4806]: I1125 15:14:28.646305 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"837cf2fb-8640-4ac3-ad91-84ff1dba54e6","Type":"ContainerStarted","Data":"35d189edaecd15cf32dbd9667562e197078602d6dcf0fc4e33dfba28a31f896e"}
Nov 25 15:14:28 crc kubenswrapper[4806]: I1125 15:14:28.646397 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"837cf2fb-8640-4ac3-ad91-84ff1dba54e6","Type":"ContainerStarted","Data":"b8db6453162cdbbceda5308a338c6e7503041561145ec6cec02b57690d7e72b8"}
Nov 25 15:14:29 crc kubenswrapper[4806]: I1125 15:14:29.497917 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Nov 25 15:14:29 crc kubenswrapper[4806]: I1125 15:14:29.683182 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"837cf2fb-8640-4ac3-ad91-84ff1dba54e6","Type":"ContainerStarted","Data":"94ad3f9904f25b5f0eac0477e333b0b1768347ab76790ef6f0494ff2f70b91b3"}
Nov 25 15:14:29 crc kubenswrapper[4806]: I1125 15:14:29.962212 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-create-4sh7f"]
Nov 25 15:14:29 crc kubenswrapper[4806]: E1125 15:14:29.962674 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b86a1a43-24d2-4ee1-b666-f43b062cc0d0" containerName="ovn-config"
Nov 25 15:14:29 crc kubenswrapper[4806]: I1125 15:14:29.962696 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="b86a1a43-24d2-4ee1-b666-f43b062cc0d0" containerName="ovn-config"
Nov 25 15:14:29 crc kubenswrapper[4806]: I1125 15:14:29.962912 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="b86a1a43-24d2-4ee1-b666-f43b062cc0d0" containerName="ovn-config"
Nov 25 15:14:29 crc kubenswrapper[4806]: I1125 15:14:29.963731 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-4sh7f"
Nov 25 15:14:29 crc kubenswrapper[4806]: I1125 15:14:29.977287 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-d5b6-account-create-8g9dc"]
Nov 25 15:14:29 crc kubenswrapper[4806]: I1125 15:14:29.978816 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d5b6-account-create-8g9dc"
Nov 25 15:14:29 crc kubenswrapper[4806]: I1125 15:14:29.983724 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Nov 25 15:14:29 crc kubenswrapper[4806]: I1125 15:14:29.991029 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-create-4sh7f"]
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.046605 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-d5b6-account-create-8g9dc"]
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.059718 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-rknkz"]
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.106600 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-rknkz"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.120950 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmd48\" (UniqueName: \"kubernetes.io/projected/2d7a2080-b9b4-4a5d-8c23-905ee26d6afa-kube-api-access-gmd48\") pod \"cloudkitty-db-create-4sh7f\" (UID: \"2d7a2080-b9b4-4a5d-8c23-905ee26d6afa\") " pod="openstack/cloudkitty-db-create-4sh7f"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.121064 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d7a2080-b9b4-4a5d-8c23-905ee26d6afa-operator-scripts\") pod \"cloudkitty-db-create-4sh7f\" (UID: \"2d7a2080-b9b4-4a5d-8c23-905ee26d6afa\") " pod="openstack/cloudkitty-db-create-4sh7f"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.121301 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94278b3c-2207-463b-9700-e8ab16c72b5b-operator-scripts\") pod \"cinder-d5b6-account-create-8g9dc\" (UID: \"94278b3c-2207-463b-9700-e8ab16c72b5b\") " pod="openstack/cinder-d5b6-account-create-8g9dc"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.123078 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k848r\" (UniqueName: \"kubernetes.io/projected/94278b3c-2207-463b-9700-e8ab16c72b5b-kube-api-access-k848r\") pod \"cinder-d5b6-account-create-8g9dc\" (UID: \"94278b3c-2207-463b-9700-e8ab16c72b5b\") " pod="openstack/cinder-d5b6-account-create-8g9dc"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.164127 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-5265-account-create-vr75r"]
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.165935 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5265-account-create-vr75r"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.178049 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-rknkz"]
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.186950 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.196413 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-5265-account-create-vr75r"]
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.212049 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-k5fg9"]
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.213673 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-k5fg9"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.225974 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvgx6\" (UniqueName: \"kubernetes.io/projected/a9115000-6aab-492e-925f-f44a574b5009-kube-api-access-kvgx6\") pod \"barbican-db-create-rknkz\" (UID: \"a9115000-6aab-492e-925f-f44a574b5009\") " pod="openstack/barbican-db-create-rknkz"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.226053 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94278b3c-2207-463b-9700-e8ab16c72b5b-operator-scripts\") pod \"cinder-d5b6-account-create-8g9dc\" (UID: \"94278b3c-2207-463b-9700-e8ab16c72b5b\") " pod="openstack/cinder-d5b6-account-create-8g9dc"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.226151 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k848r\" (UniqueName: \"kubernetes.io/projected/94278b3c-2207-463b-9700-e8ab16c72b5b-kube-api-access-k848r\") pod \"cinder-d5b6-account-create-8g9dc\" (UID: \"94278b3c-2207-463b-9700-e8ab16c72b5b\") " pod="openstack/cinder-d5b6-account-create-8g9dc"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.226287 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9115000-6aab-492e-925f-f44a574b5009-operator-scripts\") pod \"barbican-db-create-rknkz\" (UID: \"a9115000-6aab-492e-925f-f44a574b5009\") " pod="openstack/barbican-db-create-rknkz"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.226332 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmd48\" (UniqueName: \"kubernetes.io/projected/2d7a2080-b9b4-4a5d-8c23-905ee26d6afa-kube-api-access-gmd48\") pod \"cloudkitty-db-create-4sh7f\" (UID: \"2d7a2080-b9b4-4a5d-8c23-905ee26d6afa\") " pod="openstack/cloudkitty-db-create-4sh7f"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.226375 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d7a2080-b9b4-4a5d-8c23-905ee26d6afa-operator-scripts\") pod \"cloudkitty-db-create-4sh7f\" (UID: \"2d7a2080-b9b4-4a5d-8c23-905ee26d6afa\") " pod="openstack/cloudkitty-db-create-4sh7f"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.228524 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94278b3c-2207-463b-9700-e8ab16c72b5b-operator-scripts\") pod \"cinder-d5b6-account-create-8g9dc\" (UID: \"94278b3c-2207-463b-9700-e8ab16c72b5b\") " pod="openstack/cinder-d5b6-account-create-8g9dc"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.231983 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d7a2080-b9b4-4a5d-8c23-905ee26d6afa-operator-scripts\") pod \"cloudkitty-db-create-4sh7f\" (UID: \"2d7a2080-b9b4-4a5d-8c23-905ee26d6afa\") " pod="openstack/cloudkitty-db-create-4sh7f"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.238690 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-k5fg9"]
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.275387 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k848r\" (UniqueName: \"kubernetes.io/projected/94278b3c-2207-463b-9700-e8ab16c72b5b-kube-api-access-k848r\") pod \"cinder-d5b6-account-create-8g9dc\" (UID: \"94278b3c-2207-463b-9700-e8ab16c72b5b\") " pod="openstack/cinder-d5b6-account-create-8g9dc"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.276644 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmd48\" (UniqueName: \"kubernetes.io/projected/2d7a2080-b9b4-4a5d-8c23-905ee26d6afa-kube-api-access-gmd48\") pod \"cloudkitty-db-create-4sh7f\" (UID: \"2d7a2080-b9b4-4a5d-8c23-905ee26d6afa\") " pod="openstack/cloudkitty-db-create-4sh7f"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.290568 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-4sh7f"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.327804 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bccpz\" (UniqueName: \"kubernetes.io/projected/62cc8598-cf68-4bb3-b272-ab87683edf6b-kube-api-access-bccpz\") pod \"barbican-5265-account-create-vr75r\" (UID: \"62cc8598-cf68-4bb3-b272-ab87683edf6b\") " pod="openstack/barbican-5265-account-create-vr75r"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.327864 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9115000-6aab-492e-925f-f44a574b5009-operator-scripts\") pod \"barbican-db-create-rknkz\" (UID: \"a9115000-6aab-492e-925f-f44a574b5009\") " pod="openstack/barbican-db-create-rknkz"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.327889 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62cc8598-cf68-4bb3-b272-ab87683edf6b-operator-scripts\") pod \"barbican-5265-account-create-vr75r\" (UID: \"62cc8598-cf68-4bb3-b272-ab87683edf6b\") " pod="openstack/barbican-5265-account-create-vr75r"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.327950 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79e37330-4341-48fc-b9d5-bd0403e6237a-operator-scripts\") pod \"cinder-db-create-k5fg9\" (UID: \"79e37330-4341-48fc-b9d5-bd0403e6237a\") " pod="openstack/cinder-db-create-k5fg9"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.327983 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvgx6\" (UniqueName: \"kubernetes.io/projected/a9115000-6aab-492e-925f-f44a574b5009-kube-api-access-kvgx6\") pod \"barbican-db-create-rknkz\" (UID: \"a9115000-6aab-492e-925f-f44a574b5009\") " pod="openstack/barbican-db-create-rknkz"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.328039 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjl5m\" (UniqueName: \"kubernetes.io/projected/79e37330-4341-48fc-b9d5-bd0403e6237a-kube-api-access-vjl5m\") pod \"cinder-db-create-k5fg9\" (UID: \"79e37330-4341-48fc-b9d5-bd0403e6237a\") " pod="openstack/cinder-db-create-k5fg9"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.328499 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d5b6-account-create-8g9dc"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.328958 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9115000-6aab-492e-925f-f44a574b5009-operator-scripts\") pod \"barbican-db-create-rknkz\" (UID: \"a9115000-6aab-492e-925f-f44a574b5009\") " pod="openstack/barbican-db-create-rknkz"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.346929 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-xnqxm"]
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.349146 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-xnqxm"]
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.349435 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-xnqxm"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.354954 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.355163 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.355555 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.355715 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nmg8l"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.362692 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvgx6\" (UniqueName: \"kubernetes.io/projected/a9115000-6aab-492e-925f-f44a574b5009-kube-api-access-kvgx6\") pod \"barbican-db-create-rknkz\" (UID: \"a9115000-6aab-492e-925f-f44a574b5009\") " pod="openstack/barbican-db-create-rknkz"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.429348 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bccpz\" (UniqueName: \"kubernetes.io/projected/62cc8598-cf68-4bb3-b272-ab87683edf6b-kube-api-access-bccpz\") pod \"barbican-5265-account-create-vr75r\" (UID: \"62cc8598-cf68-4bb3-b272-ab87683edf6b\") " pod="openstack/barbican-5265-account-create-vr75r"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.429404 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62cc8598-cf68-4bb3-b272-ab87683edf6b-operator-scripts\") pod \"barbican-5265-account-create-vr75r\" (UID: \"62cc8598-cf68-4bb3-b272-ab87683edf6b\") " pod="openstack/barbican-5265-account-create-vr75r"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.429461 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79e37330-4341-48fc-b9d5-bd0403e6237a-operator-scripts\") pod \"cinder-db-create-k5fg9\" (UID: \"79e37330-4341-48fc-b9d5-bd0403e6237a\") " pod="openstack/cinder-db-create-k5fg9"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.430534 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79e37330-4341-48fc-b9d5-bd0403e6237a-operator-scripts\") pod \"cinder-db-create-k5fg9\" (UID: \"79e37330-4341-48fc-b9d5-bd0403e6237a\") " pod="openstack/cinder-db-create-k5fg9"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.430567 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjl5m\" (UniqueName: \"kubernetes.io/projected/79e37330-4341-48fc-b9d5-bd0403e6237a-kube-api-access-vjl5m\") pod \"cinder-db-create-k5fg9\" (UID: \"79e37330-4341-48fc-b9d5-bd0403e6237a\") " pod="openstack/cinder-db-create-k5fg9"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.431122 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62cc8598-cf68-4bb3-b272-ab87683edf6b-operator-scripts\") pod \"barbican-5265-account-create-vr75r\" (UID: \"62cc8598-cf68-4bb3-b272-ab87683edf6b\") " pod="openstack/barbican-5265-account-create-vr75r"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.457940 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bccpz\" (UniqueName: \"kubernetes.io/projected/62cc8598-cf68-4bb3-b272-ab87683edf6b-kube-api-access-bccpz\") pod \"barbican-5265-account-create-vr75r\" (UID: \"62cc8598-cf68-4bb3-b272-ab87683edf6b\") " pod="openstack/barbican-5265-account-create-vr75r"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.460656 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjl5m\" (UniqueName: \"kubernetes.io/projected/79e37330-4341-48fc-b9d5-bd0403e6237a-kube-api-access-vjl5m\") pod \"cinder-db-create-k5fg9\" (UID: \"79e37330-4341-48fc-b9d5-bd0403e6237a\") " pod="openstack/cinder-db-create-k5fg9"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.472709 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-8c9d-account-create-rx52p"]
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.473917 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-8c9d-account-create-rx52p"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.476895 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-db-secret"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.478701 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-rknkz"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.494218 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-8c9d-account-create-rx52p"]
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.505221 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5265-account-create-vr75r"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.532452 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/634468c1-6446-422a-9816-b19afdf8858d-combined-ca-bundle\") pod \"keystone-db-sync-xnqxm\" (UID: \"634468c1-6446-422a-9816-b19afdf8858d\") " pod="openstack/keystone-db-sync-xnqxm"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.532522 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxxhk\" (UniqueName: \"kubernetes.io/projected/634468c1-6446-422a-9816-b19afdf8858d-kube-api-access-rxxhk\") pod \"keystone-db-sync-xnqxm\" (UID: \"634468c1-6446-422a-9816-b19afdf8858d\") " pod="openstack/keystone-db-sync-xnqxm"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.532630 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/634468c1-6446-422a-9816-b19afdf8858d-config-data\") pod \"keystone-db-sync-xnqxm\" (UID: \"634468c1-6446-422a-9816-b19afdf8858d\") " pod="openstack/keystone-db-sync-xnqxm"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.551506 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-vrlpg"]
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.552964 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-vrlpg"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.562642 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-vrlpg"]
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.633741 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/634468c1-6446-422a-9816-b19afdf8858d-config-data\") pod \"keystone-db-sync-xnqxm\" (UID: \"634468c1-6446-422a-9816-b19afdf8858d\") " pod="openstack/keystone-db-sync-xnqxm"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.633792 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67e9a65f-5f3c-47fa-964a-f188158f77bc-operator-scripts\") pod \"cloudkitty-8c9d-account-create-rx52p\" (UID: \"67e9a65f-5f3c-47fa-964a-f188158f77bc\") " pod="openstack/cloudkitty-8c9d-account-create-rx52p"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.633907 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/634468c1-6446-422a-9816-b19afdf8858d-combined-ca-bundle\") pod \"keystone-db-sync-xnqxm\" (UID: \"634468c1-6446-422a-9816-b19afdf8858d\") " pod="openstack/keystone-db-sync-xnqxm"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.633933 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxxhk\" (UniqueName: \"kubernetes.io/projected/634468c1-6446-422a-9816-b19afdf8858d-kube-api-access-rxxhk\") pod \"keystone-db-sync-xnqxm\" (UID: \"634468c1-6446-422a-9816-b19afdf8858d\") " pod="openstack/keystone-db-sync-xnqxm"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.633950 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2h5f\" (UniqueName: \"kubernetes.io/projected/67e9a65f-5f3c-47fa-964a-f188158f77bc-kube-api-access-c2h5f\") pod \"cloudkitty-8c9d-account-create-rx52p\" (UID: \"67e9a65f-5f3c-47fa-964a-f188158f77bc\") " pod="openstack/cloudkitty-8c9d-account-create-rx52p"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.638280 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/634468c1-6446-422a-9816-b19afdf8858d-config-data\") pod \"keystone-db-sync-xnqxm\" (UID: \"634468c1-6446-422a-9816-b19afdf8858d\") " pod="openstack/keystone-db-sync-xnqxm"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.639002 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/634468c1-6446-422a-9816-b19afdf8858d-combined-ca-bundle\") pod \"keystone-db-sync-xnqxm\" (UID: \"634468c1-6446-422a-9816-b19afdf8858d\") " pod="openstack/keystone-db-sync-xnqxm"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.647030 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-k5fg9"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.654733 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-56ac-account-create-vvjww"]
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.656138 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-56ac-account-create-vvjww"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.660967 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxxhk\" (UniqueName: \"kubernetes.io/projected/634468c1-6446-422a-9816-b19afdf8858d-kube-api-access-rxxhk\") pod \"keystone-db-sync-xnqxm\" (UID: \"634468c1-6446-422a-9816-b19afdf8858d\") " pod="openstack/keystone-db-sync-xnqxm"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.665009 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.672819 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-56ac-account-create-vvjww"]
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.731355 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-xnqxm"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.735414 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v5ln\" (UniqueName: \"kubernetes.io/projected/5eccd330-3d33-48e3-929b-2a67bb643af7-kube-api-access-2v5ln\") pod \"neutron-db-create-vrlpg\" (UID: \"5eccd330-3d33-48e3-929b-2a67bb643af7\") " pod="openstack/neutron-db-create-vrlpg"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.735482 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2h5f\" (UniqueName: \"kubernetes.io/projected/67e9a65f-5f3c-47fa-964a-f188158f77bc-kube-api-access-c2h5f\") pod \"cloudkitty-8c9d-account-create-rx52p\" (UID: \"67e9a65f-5f3c-47fa-964a-f188158f77bc\") " pod="openstack/cloudkitty-8c9d-account-create-rx52p"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.735564 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5eccd330-3d33-48e3-929b-2a67bb643af7-operator-scripts\") pod \"neutron-db-create-vrlpg\" (UID: \"5eccd330-3d33-48e3-929b-2a67bb643af7\") " pod="openstack/neutron-db-create-vrlpg"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.735591 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67e9a65f-5f3c-47fa-964a-f188158f77bc-operator-scripts\") pod \"cloudkitty-8c9d-account-create-rx52p\" (UID: \"67e9a65f-5f3c-47fa-964a-f188158f77bc\") " pod="openstack/cloudkitty-8c9d-account-create-rx52p"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.736288 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67e9a65f-5f3c-47fa-964a-f188158f77bc-operator-scripts\") pod \"cloudkitty-8c9d-account-create-rx52p\" (UID: \"67e9a65f-5f3c-47fa-964a-f188158f77bc\") " pod="openstack/cloudkitty-8c9d-account-create-rx52p"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.755549 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2h5f\" (UniqueName: \"kubernetes.io/projected/67e9a65f-5f3c-47fa-964a-f188158f77bc-kube-api-access-c2h5f\") pod \"cloudkitty-8c9d-account-create-rx52p\" (UID: \"67e9a65f-5f3c-47fa-964a-f188158f77bc\") " pod="openstack/cloudkitty-8c9d-account-create-rx52p"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.837407 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61467ee5-3ddb-4d7d-88d3-e48107c51338-operator-scripts\") pod \"neutron-56ac-account-create-vvjww\" (UID: \"61467ee5-3ddb-4d7d-88d3-e48107c51338\") " pod="openstack/neutron-56ac-account-create-vvjww"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.837470 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5eccd330-3d33-48e3-929b-2a67bb643af7-operator-scripts\") pod \"neutron-db-create-vrlpg\" (UID: \"5eccd330-3d33-48e3-929b-2a67bb643af7\") " pod="openstack/neutron-db-create-vrlpg"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.837572 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqntc\" (UniqueName: \"kubernetes.io/projected/61467ee5-3ddb-4d7d-88d3-e48107c51338-kube-api-access-lqntc\") pod \"neutron-56ac-account-create-vvjww\" (UID: \"61467ee5-3ddb-4d7d-88d3-e48107c51338\") " pod="openstack/neutron-56ac-account-create-vvjww"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.837656 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2v5ln\" (UniqueName: \"kubernetes.io/projected/5eccd330-3d33-48e3-929b-2a67bb643af7-kube-api-access-2v5ln\") pod \"neutron-db-create-vrlpg\" (UID: \"5eccd330-3d33-48e3-929b-2a67bb643af7\") " pod="openstack/neutron-db-create-vrlpg"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.838233 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5eccd330-3d33-48e3-929b-2a67bb643af7-operator-scripts\") pod \"neutron-db-create-vrlpg\" (UID: \"5eccd330-3d33-48e3-929b-2a67bb643af7\") " pod="openstack/neutron-db-create-vrlpg"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.847821 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-8c9d-account-create-rx52p"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.855949 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v5ln\" (UniqueName: \"kubernetes.io/projected/5eccd330-3d33-48e3-929b-2a67bb643af7-kube-api-access-2v5ln\") pod \"neutron-db-create-vrlpg\" (UID: \"5eccd330-3d33-48e3-929b-2a67bb643af7\") " pod="openstack/neutron-db-create-vrlpg"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.870095 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-vrlpg"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.939598 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61467ee5-3ddb-4d7d-88d3-e48107c51338-operator-scripts\") pod \"neutron-56ac-account-create-vvjww\" (UID: \"61467ee5-3ddb-4d7d-88d3-e48107c51338\") " pod="openstack/neutron-56ac-account-create-vvjww"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.939691 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqntc\" (UniqueName: \"kubernetes.io/projected/61467ee5-3ddb-4d7d-88d3-e48107c51338-kube-api-access-lqntc\") pod \"neutron-56ac-account-create-vvjww\" (UID: \"61467ee5-3ddb-4d7d-88d3-e48107c51338\") " pod="openstack/neutron-56ac-account-create-vvjww"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.940709 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61467ee5-3ddb-4d7d-88d3-e48107c51338-operator-scripts\") pod \"neutron-56ac-account-create-vvjww\" (UID: \"61467ee5-3ddb-4d7d-88d3-e48107c51338\") " pod="openstack/neutron-56ac-account-create-vvjww"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.960092 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqntc\" (UniqueName: \"kubernetes.io/projected/61467ee5-3ddb-4d7d-88d3-e48107c51338-kube-api-access-lqntc\") pod \"neutron-56ac-account-create-vvjww\" (UID: \"61467ee5-3ddb-4d7d-88d3-e48107c51338\") " pod="openstack/neutron-56ac-account-create-vvjww"
Nov 25 15:14:30 crc kubenswrapper[4806]: I1125 15:14:30.978999 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-56ac-account-create-vvjww"
Nov 25 15:14:31 crc kubenswrapper[4806]: I1125 15:14:31.720707 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"837cf2fb-8640-4ac3-ad91-84ff1dba54e6","Type":"ContainerStarted","Data":"77ec654e7cd4f036bfbe047e68bc1ac993f7bda736a6a022be10b9a1c671371a"}
Nov 25 15:14:31 crc kubenswrapper[4806]: I1125 15:14:31.808214 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-k5fg9"]
Nov 25 15:14:31 crc kubenswrapper[4806]: W1125 15:14:31.816334 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79e37330_4341_48fc_b9d5_bd0403e6237a.slice/crio-189518c850f9780ad7108714789f7fc139921fffcf0782a171e0c9d1604dcb44 WatchSource:0}: Error finding container 189518c850f9780ad7108714789f7fc139921fffcf0782a171e0c9d1604dcb44: Status 404 returned error can't find the container with id 189518c850f9780ad7108714789f7fc139921fffcf0782a171e0c9d1604dcb44
Nov 25 15:14:31 crc kubenswrapper[4806]: I1125 15:14:31.950589 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-create-4sh7f"]
Nov 25 15:14:31 crc kubenswrapper[4806]: I1125 15:14:31.962510 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-xnqxm"]
Nov 25 15:14:31 crc kubenswrapper[4806]: I1125 15:14:31.983014 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-d5b6-account-create-8g9dc"]
Nov 25 15:14:32 crc kubenswrapper[4806]: I1125 15:14:32.176152 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-5265-account-create-vr75r"]
Nov 25 15:14:32 crc kubenswrapper[4806]: I1125 15:14:32.188480 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-56ac-account-create-vvjww"]
Nov 25 15:14:32 crc kubenswrapper[4806]: I1125 15:14:32.214964 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-rknkz"]
Nov 25 15:14:32 crc kubenswrapper[4806]: W1125 15:14:32.224469 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9115000_6aab_492e_925f_f44a574b5009.slice/crio-264e3629abd90caeaa9ff9c07b3069c624401bf62176523ced69ebf60c2a90f8 WatchSource:0}: Error finding container 264e3629abd90caeaa9ff9c07b3069c624401bf62176523ced69ebf60c2a90f8: Status 404 returned error can't find the container with id 264e3629abd90caeaa9ff9c07b3069c624401bf62176523ced69ebf60c2a90f8
Nov 25 15:14:32 crc kubenswrapper[4806]: I1125 15:14:32.224784 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-8c9d-account-create-rx52p"]
Nov 25 15:14:32 crc kubenswrapper[4806]: I1125 15:14:32.236471 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-vrlpg"]
Nov 25 15:14:32 crc kubenswrapper[4806]: W1125 15:14:32.272994 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67e9a65f_5f3c_47fa_964a_f188158f77bc.slice/crio-535270068aeca1216bd5849fadfe99dc9b9ef66f0eeebf800dd87f61416a12fc WatchSource:0}: Error finding container 535270068aeca1216bd5849fadfe99dc9b9ef66f0eeebf800dd87f61416a12fc: Status 404 returned error can't find the container with id 535270068aeca1216bd5849fadfe99dc9b9ef66f0eeebf800dd87f61416a12fc
Nov 25 15:14:32 crc kubenswrapper[4806]: I1125 15:14:32.781486 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"837cf2fb-8640-4ac3-ad91-84ff1dba54e6","Type":"ContainerStarted","Data":"40060977d497e9816e73fe1c7b02a230c7f7f86f242f34d33b83bb6e062c164a"}
Nov 25 15:14:32 crc kubenswrapper[4806]: I1125 15:14:32.781894 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"837cf2fb-8640-4ac3-ad91-84ff1dba54e6","Type":"ContainerStarted","Data":"f5e08295ad34cac74a5ccfd5244a539db04123a296f8ce720ce491a00f01e9d6"}
Nov 25 15:14:32 crc kubenswrapper[4806]: I1125 15:14:32.797239 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5265-account-create-vr75r" event={"ID":"62cc8598-cf68-4bb3-b272-ab87683edf6b","Type":"ContainerStarted","Data":"571fa43d0d6b745cbcfa58b47a8a825f03d16f61189f223cac4e60b16b655554"}
Nov 25 15:14:32 crc kubenswrapper[4806]: I1125 15:14:32.812052 4806 generic.go:334] "Generic (PLEG): container finished" podID="94278b3c-2207-463b-9700-e8ab16c72b5b" containerID="dcd4f748224faf941aef0075ac2b144712f1dc2665b6fe13a338d43bebd29ae7" exitCode=0
Nov 25 15:14:32 crc kubenswrapper[4806]: I1125 15:14:32.812153 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d5b6-account-create-8g9dc" event={"ID":"94278b3c-2207-463b-9700-e8ab16c72b5b","Type":"ContainerDied","Data":"dcd4f748224faf941aef0075ac2b144712f1dc2665b6fe13a338d43bebd29ae7"}
Nov 25 15:14:32 crc kubenswrapper[4806]: I1125 15:14:32.812183 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d5b6-account-create-8g9dc" event={"ID":"94278b3c-2207-463b-9700-e8ab16c72b5b","Type":"ContainerStarted","Data":"9346cc4cd439e6c6ea04ce0f519727efbf95bee1207f51896b7db243478356f1"}
Nov 25 15:14:32 crc kubenswrapper[4806]: I1125 15:14:32.821617 4806 generic.go:334] "Generic (PLEG): container finished" podID="79e37330-4341-48fc-b9d5-bd0403e6237a" containerID="fcf783791588ec718ca7cc8d58556d5da256261cab4042709ceb061a6a9bba63" exitCode=0
Nov 25 15:14:32 crc kubenswrapper[4806]: I1125 15:14:32.821720 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-k5fg9" event={"ID":"79e37330-4341-48fc-b9d5-bd0403e6237a","Type":"ContainerDied","Data":"fcf783791588ec718ca7cc8d58556d5da256261cab4042709ceb061a6a9bba63"}
Nov 25 15:14:32 crc kubenswrapper[4806]: I1125 15:14:32.821782 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-k5fg9" event={"ID":"79e37330-4341-48fc-b9d5-bd0403e6237a","Type":"ContainerStarted","Data":"189518c850f9780ad7108714789f7fc139921fffcf0782a171e0c9d1604dcb44"}
Nov 25 15:14:32 crc kubenswrapper[4806]: I1125 15:14:32.824233 4806 generic.go:334] "Generic (PLEG): container finished" podID="aafcef1f-4988-49d1-88f0-47a44d8f18fc" containerID="1d9ece32eb4ed3b1825ec1c23aa4f81acee080be47946111247f8946670a3393" exitCode=0
Nov 25 15:14:32 crc kubenswrapper[4806]: I1125 15:14:32.824363 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"aafcef1f-4988-49d1-88f0-47a44d8f18fc","Type":"ContainerDied","Data":"1d9ece32eb4ed3b1825ec1c23aa4f81acee080be47946111247f8946670a3393"}
Nov 25 15:14:32 crc kubenswrapper[4806]: I1125 15:14:32.832223 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56ac-account-create-vvjww" event={"ID":"61467ee5-3ddb-4d7d-88d3-e48107c51338","Type":"ContainerStarted","Data":"e891d0f38e0e67de79accb16f3271da576a1f24875ee1f066380728547e213ac"}
Nov 25 15:14:32 crc kubenswrapper[4806]: I1125 15:14:32.835757 4806 generic.go:334] "Generic (PLEG): container finished" podID="2d7a2080-b9b4-4a5d-8c23-905ee26d6afa" containerID="dcaddd1007730a613ec5b775a5257a05c342aaec816b5cde44f715a00e08a792" exitCode=0
Nov 25 15:14:32 crc kubenswrapper[4806]: I1125 15:14:32.835824 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-4sh7f" event={"ID":"2d7a2080-b9b4-4a5d-8c23-905ee26d6afa","Type":"ContainerDied","Data":"dcaddd1007730a613ec5b775a5257a05c342aaec816b5cde44f715a00e08a792"}
Nov 25 15:14:32 crc kubenswrapper[4806]: I1125 15:14:32.835854 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-4sh7f" event={"ID":"2d7a2080-b9b4-4a5d-8c23-905ee26d6afa","Type":"ContainerStarted","Data":"bce1b3e239c656979a18b70726cec9f8d7395e4759194d4de0f67a71ce079b44"}
Nov 25 15:14:32 crc kubenswrapper[4806]: I1125 15:14:32.839153 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-rknkz" event={"ID":"a9115000-6aab-492e-925f-f44a574b5009","Type":"ContainerStarted","Data":"264e3629abd90caeaa9ff9c07b3069c624401bf62176523ced69ebf60c2a90f8"}
Nov 25 15:14:32 crc kubenswrapper[4806]: I1125 15:14:32.847037 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-vrlpg" event={"ID":"5eccd330-3d33-48e3-929b-2a67bb643af7","Type":"ContainerStarted","Data":"9ee41f9330aec83a8e2329d48b0b91f0fc18a9c36b1c7438c6eea8b6f442af0c"}
Nov 25 15:14:32 crc kubenswrapper[4806]: I1125 15:14:32.848269 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-xnqxm" event={"ID":"634468c1-6446-422a-9816-b19afdf8858d","Type":"ContainerStarted","Data":"60f99f71d296653822d2830475de1ef31f3a5752ba201c70de80713161f1e805"}
Nov 25 15:14:32 crc kubenswrapper[4806]: I1125 15:14:32.849789 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-8c9d-account-create-rx52p" event={"ID":"67e9a65f-5f3c-47fa-964a-f188158f77bc","Type":"ContainerStarted","Data":"535270068aeca1216bd5849fadfe99dc9b9ef66f0eeebf800dd87f61416a12fc"}
Nov 25 15:14:33 crc kubenswrapper[4806]: I1125 15:14:33.861010 4806 generic.go:334] "Generic (PLEG): container finished" podID="a9115000-6aab-492e-925f-f44a574b5009" containerID="055d68e8d3b049aa80cb3e5340ffb854d62c839a8235381b11f1b6dd5db0579c" exitCode=0
Nov 25 15:14:33 crc kubenswrapper[4806]: I1125 15:14:33.861183 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-rknkz" event={"ID":"a9115000-6aab-492e-925f-f44a574b5009","Type":"ContainerDied","Data":"055d68e8d3b049aa80cb3e5340ffb854d62c839a8235381b11f1b6dd5db0579c"}
Nov 25 15:14:33 crc kubenswrapper[4806]: I1125 15:14:33.864649 4806 generic.go:334] "Generic (PLEG): container finished" podID="5eccd330-3d33-48e3-929b-2a67bb643af7" containerID="b5d072863c76d7b6c081ccaa02c0a78121cdc2061a426894749b4048300332ee" exitCode=0
Nov 25 15:14:33 crc kubenswrapper[4806]: I1125 15:14:33.864729 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-vrlpg" event={"ID":"5eccd330-3d33-48e3-929b-2a67bb643af7","Type":"ContainerDied","Data":"b5d072863c76d7b6c081ccaa02c0a78121cdc2061a426894749b4048300332ee"}
Nov 25 15:14:33 crc kubenswrapper[4806]: I1125 15:14:33.867690 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"aafcef1f-4988-49d1-88f0-47a44d8f18fc","Type":"ContainerStarted","Data":"7106bcc838ebfb9e4cff719bd5480d7a9c69250950b22dad7f48ca6298d8bccc"}
Nov 25 15:14:33 crc kubenswrapper[4806]: I1125 15:14:33.869154 4806 generic.go:334] "Generic (PLEG): container finished" podID="67e9a65f-5f3c-47fa-964a-f188158f77bc" containerID="5f235c5c1d09c3398a1b4d4f6cc9714f67f41ebc8f93c551803863da553f2955" exitCode=0
Nov 25 15:14:33 crc kubenswrapper[4806]: I1125 15:14:33.869312 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-8c9d-account-create-rx52p" event={"ID":"67e9a65f-5f3c-47fa-964a-f188158f77bc","Type":"ContainerDied","Data":"5f235c5c1d09c3398a1b4d4f6cc9714f67f41ebc8f93c551803863da553f2955"}
Nov 25 15:14:33 crc kubenswrapper[4806]: I1125 15:14:33.871093 4806 generic.go:334] "Generic (PLEG): container finished" podID="61467ee5-3ddb-4d7d-88d3-e48107c51338" containerID="d6ae052d0e9dd5d5ef41e888eccca8a25ae005c1d2bc396324b9fe50a7646f1c" exitCode=0
Nov 25 15:14:33 crc kubenswrapper[4806]: I1125 15:14:33.871156 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56ac-account-create-vvjww" event={"ID":"61467ee5-3ddb-4d7d-88d3-e48107c51338","Type":"ContainerDied","Data":"d6ae052d0e9dd5d5ef41e888eccca8a25ae005c1d2bc396324b9fe50a7646f1c"}
Nov 25 15:14:33 crc kubenswrapper[4806]: I1125 15:14:33.881760 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"837cf2fb-8640-4ac3-ad91-84ff1dba54e6","Type":"ContainerStarted","Data":"e7107bed6a3f90f0f81efe1dd5fc5a974cc34ef932cb3b233701121ccb4fb8ef"}
Nov 25 15:14:33 crc kubenswrapper[4806]: I1125 15:14:33.884132 4806 generic.go:334] "Generic (PLEG): container finished" podID="62cc8598-cf68-4bb3-b272-ab87683edf6b" containerID="08383c7c22f34f950c15aabb4fe56b4bf61f9ca2db81584bfe5201891f079251" exitCode=0
Nov 25 15:14:33 crc kubenswrapper[4806]: I1125 15:14:33.884526 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5265-account-create-vr75r" event={"ID":"62cc8598-cf68-4bb3-b272-ab87683edf6b","Type":"ContainerDied","Data":"08383c7c22f34f950c15aabb4fe56b4bf61f9ca2db81584bfe5201891f079251"}
Nov 25 15:14:34 crc kubenswrapper[4806]: I1125 15:14:34.519420 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d5b6-account-create-8g9dc"
Nov 25 15:14:34 crc kubenswrapper[4806]: I1125 15:14:34.524856 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-4sh7f"
Nov 25 15:14:34 crc kubenswrapper[4806]: I1125 15:14:34.556782 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-k5fg9" Nov 25 15:14:34 crc kubenswrapper[4806]: I1125 15:14:34.629270 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79e37330-4341-48fc-b9d5-bd0403e6237a-operator-scripts\") pod \"79e37330-4341-48fc-b9d5-bd0403e6237a\" (UID: \"79e37330-4341-48fc-b9d5-bd0403e6237a\") " Nov 25 15:14:34 crc kubenswrapper[4806]: I1125 15:14:34.629389 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94278b3c-2207-463b-9700-e8ab16c72b5b-operator-scripts\") pod \"94278b3c-2207-463b-9700-e8ab16c72b5b\" (UID: \"94278b3c-2207-463b-9700-e8ab16c72b5b\") " Nov 25 15:14:34 crc kubenswrapper[4806]: I1125 15:14:34.629423 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjl5m\" (UniqueName: \"kubernetes.io/projected/79e37330-4341-48fc-b9d5-bd0403e6237a-kube-api-access-vjl5m\") pod \"79e37330-4341-48fc-b9d5-bd0403e6237a\" (UID: \"79e37330-4341-48fc-b9d5-bd0403e6237a\") " Nov 25 15:14:34 crc kubenswrapper[4806]: I1125 15:14:34.629465 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d7a2080-b9b4-4a5d-8c23-905ee26d6afa-operator-scripts\") pod \"2d7a2080-b9b4-4a5d-8c23-905ee26d6afa\" (UID: \"2d7a2080-b9b4-4a5d-8c23-905ee26d6afa\") " Nov 25 15:14:34 crc kubenswrapper[4806]: I1125 15:14:34.629501 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k848r\" (UniqueName: \"kubernetes.io/projected/94278b3c-2207-463b-9700-e8ab16c72b5b-kube-api-access-k848r\") pod \"94278b3c-2207-463b-9700-e8ab16c72b5b\" (UID: \"94278b3c-2207-463b-9700-e8ab16c72b5b\") " Nov 25 15:14:34 crc kubenswrapper[4806]: I1125 15:14:34.629614 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmd48\" (UniqueName: \"kubernetes.io/projected/2d7a2080-b9b4-4a5d-8c23-905ee26d6afa-kube-api-access-gmd48\") pod \"2d7a2080-b9b4-4a5d-8c23-905ee26d6afa\" (UID: \"2d7a2080-b9b4-4a5d-8c23-905ee26d6afa\") " Nov 25 15:14:34 crc kubenswrapper[4806]: I1125 15:14:34.630404 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94278b3c-2207-463b-9700-e8ab16c72b5b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "94278b3c-2207-463b-9700-e8ab16c72b5b" (UID: "94278b3c-2207-463b-9700-e8ab16c72b5b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:14:34 crc kubenswrapper[4806]: I1125 15:14:34.630503 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79e37330-4341-48fc-b9d5-bd0403e6237a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "79e37330-4341-48fc-b9d5-bd0403e6237a" (UID: "79e37330-4341-48fc-b9d5-bd0403e6237a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:14:34 crc kubenswrapper[4806]: I1125 15:14:34.631427 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d7a2080-b9b4-4a5d-8c23-905ee26d6afa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2d7a2080-b9b4-4a5d-8c23-905ee26d6afa" (UID: "2d7a2080-b9b4-4a5d-8c23-905ee26d6afa"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:14:34 crc kubenswrapper[4806]: I1125 15:14:34.636365 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d7a2080-b9b4-4a5d-8c23-905ee26d6afa-kube-api-access-gmd48" (OuterVolumeSpecName: "kube-api-access-gmd48") pod "2d7a2080-b9b4-4a5d-8c23-905ee26d6afa" (UID: "2d7a2080-b9b4-4a5d-8c23-905ee26d6afa"). InnerVolumeSpecName "kube-api-access-gmd48". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:14:34 crc kubenswrapper[4806]: I1125 15:14:34.650994 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94278b3c-2207-463b-9700-e8ab16c72b5b-kube-api-access-k848r" (OuterVolumeSpecName: "kube-api-access-k848r") pod "94278b3c-2207-463b-9700-e8ab16c72b5b" (UID: "94278b3c-2207-463b-9700-e8ab16c72b5b"). InnerVolumeSpecName "kube-api-access-k848r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:14:34 crc kubenswrapper[4806]: I1125 15:14:34.656785 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79e37330-4341-48fc-b9d5-bd0403e6237a-kube-api-access-vjl5m" (OuterVolumeSpecName: "kube-api-access-vjl5m") pod "79e37330-4341-48fc-b9d5-bd0403e6237a" (UID: "79e37330-4341-48fc-b9d5-bd0403e6237a"). InnerVolumeSpecName "kube-api-access-vjl5m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:14:34 crc kubenswrapper[4806]: I1125 15:14:34.733793 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmd48\" (UniqueName: \"kubernetes.io/projected/2d7a2080-b9b4-4a5d-8c23-905ee26d6afa-kube-api-access-gmd48\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:34 crc kubenswrapper[4806]: I1125 15:14:34.733869 4806 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79e37330-4341-48fc-b9d5-bd0403e6237a-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:34 crc kubenswrapper[4806]: I1125 15:14:34.733885 4806 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94278b3c-2207-463b-9700-e8ab16c72b5b-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:34 crc kubenswrapper[4806]: I1125 15:14:34.733898 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjl5m\" (UniqueName: \"kubernetes.io/projected/79e37330-4341-48fc-b9d5-bd0403e6237a-kube-api-access-vjl5m\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:34 crc kubenswrapper[4806]: I1125 15:14:34.733920 4806 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d7a2080-b9b4-4a5d-8c23-905ee26d6afa-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:34 crc kubenswrapper[4806]: I1125 15:14:34.733935 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k848r\" (UniqueName: \"kubernetes.io/projected/94278b3c-2207-463b-9700-e8ab16c72b5b-kube-api-access-k848r\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:34 crc kubenswrapper[4806]: I1125 15:14:34.899284 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-4sh7f" event={"ID":"2d7a2080-b9b4-4a5d-8c23-905ee26d6afa","Type":"ContainerDied","Data":"bce1b3e239c656979a18b70726cec9f8d7395e4759194d4de0f67a71ce079b44"} Nov 25 15:14:34 crc kubenswrapper[4806]: I1125 15:14:34.899345 4806 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="bce1b3e239c656979a18b70726cec9f8d7395e4759194d4de0f67a71ce079b44" Nov 25 15:14:34 crc kubenswrapper[4806]: I1125 15:14:34.900116 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-4sh7f" Nov 25 15:14:34 crc kubenswrapper[4806]: I1125 15:14:34.901094 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-d5b6-account-create-8g9dc" event={"ID":"94278b3c-2207-463b-9700-e8ab16c72b5b","Type":"ContainerDied","Data":"9346cc4cd439e6c6ea04ce0f519727efbf95bee1207f51896b7db243478356f1"} Nov 25 15:14:34 crc kubenswrapper[4806]: I1125 15:14:34.901134 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9346cc4cd439e6c6ea04ce0f519727efbf95bee1207f51896b7db243478356f1" Nov 25 15:14:34 crc kubenswrapper[4806]: I1125 15:14:34.901188 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-d5b6-account-create-8g9dc" Nov 25 15:14:34 crc kubenswrapper[4806]: I1125 15:14:34.909229 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-k5fg9" event={"ID":"79e37330-4341-48fc-b9d5-bd0403e6237a","Type":"ContainerDied","Data":"189518c850f9780ad7108714789f7fc139921fffcf0782a171e0c9d1604dcb44"} Nov 25 15:14:34 crc kubenswrapper[4806]: I1125 15:14:34.909273 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="189518c850f9780ad7108714789f7fc139921fffcf0782a171e0c9d1604dcb44" Nov 25 15:14:34 crc kubenswrapper[4806]: I1125 15:14:34.909595 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-k5fg9" Nov 25 15:14:36 crc kubenswrapper[4806]: I1125 15:14:36.930051 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"aafcef1f-4988-49d1-88f0-47a44d8f18fc","Type":"ContainerStarted","Data":"b46008722919dbb09b4c9aa1dd115d68c90b661749b2e749a4e36a11746235d0"} Nov 25 15:14:37 crc kubenswrapper[4806]: I1125 15:14:37.206270 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-vrlpg" Nov 25 15:14:37 crc kubenswrapper[4806]: I1125 15:14:37.217081 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-rknkz" Nov 25 15:14:37 crc kubenswrapper[4806]: I1125 15:14:37.232487 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-8c9d-account-create-rx52p" Nov 25 15:14:37 crc kubenswrapper[4806]: I1125 15:14:37.253026 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-56ac-account-create-vvjww" Nov 25 15:14:37 crc kubenswrapper[4806]: I1125 15:14:37.273408 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-5265-account-create-vr75r" Nov 25 15:14:37 crc kubenswrapper[4806]: I1125 15:14:37.284171 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5eccd330-3d33-48e3-929b-2a67bb643af7-operator-scripts\") pod \"5eccd330-3d33-48e3-929b-2a67bb643af7\" (UID: \"5eccd330-3d33-48e3-929b-2a67bb643af7\") " Nov 25 15:14:37 crc kubenswrapper[4806]: I1125 15:14:37.284259 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2v5ln\" (UniqueName: \"kubernetes.io/projected/5eccd330-3d33-48e3-929b-2a67bb643af7-kube-api-access-2v5ln\") pod \"5eccd330-3d33-48e3-929b-2a67bb643af7\" (UID: \"5eccd330-3d33-48e3-929b-2a67bb643af7\") " Nov 25 15:14:37 crc kubenswrapper[4806]: I1125 15:14:37.285540 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5eccd330-3d33-48e3-929b-2a67bb643af7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5eccd330-3d33-48e3-929b-2a67bb643af7" (UID: "5eccd330-3d33-48e3-929b-2a67bb643af7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:14:37 crc kubenswrapper[4806]: I1125 15:14:37.301188 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5eccd330-3d33-48e3-929b-2a67bb643af7-kube-api-access-2v5ln" (OuterVolumeSpecName: "kube-api-access-2v5ln") pod "5eccd330-3d33-48e3-929b-2a67bb643af7" (UID: "5eccd330-3d33-48e3-929b-2a67bb643af7"). InnerVolumeSpecName "kube-api-access-2v5ln". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:14:37 crc kubenswrapper[4806]: I1125 15:14:37.386413 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62cc8598-cf68-4bb3-b272-ab87683edf6b-operator-scripts\") pod \"62cc8598-cf68-4bb3-b272-ab87683edf6b\" (UID: \"62cc8598-cf68-4bb3-b272-ab87683edf6b\") " Nov 25 15:14:37 crc kubenswrapper[4806]: I1125 15:14:37.386501 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvgx6\" (UniqueName: \"kubernetes.io/projected/a9115000-6aab-492e-925f-f44a574b5009-kube-api-access-kvgx6\") pod \"a9115000-6aab-492e-925f-f44a574b5009\" (UID: \"a9115000-6aab-492e-925f-f44a574b5009\") " Nov 25 15:14:37 crc kubenswrapper[4806]: I1125 15:14:37.386544 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqntc\" (UniqueName: \"kubernetes.io/projected/61467ee5-3ddb-4d7d-88d3-e48107c51338-kube-api-access-lqntc\") pod \"61467ee5-3ddb-4d7d-88d3-e48107c51338\" (UID: \"61467ee5-3ddb-4d7d-88d3-e48107c51338\") " Nov 25 15:14:37 crc kubenswrapper[4806]: I1125 15:14:37.386593 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67e9a65f-5f3c-47fa-964a-f188158f77bc-operator-scripts\") pod \"67e9a65f-5f3c-47fa-964a-f188158f77bc\" (UID: \"67e9a65f-5f3c-47fa-964a-f188158f77bc\") " Nov 25 15:14:37 crc kubenswrapper[4806]: I1125 15:14:37.386774 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bccpz\" (UniqueName: \"kubernetes.io/projected/62cc8598-cf68-4bb3-b272-ab87683edf6b-kube-api-access-bccpz\") pod \"62cc8598-cf68-4bb3-b272-ab87683edf6b\" (UID: \"62cc8598-cf68-4bb3-b272-ab87683edf6b\") " Nov 25 15:14:37 crc 
kubenswrapper[4806]: I1125 15:14:37.386913 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2h5f\" (UniqueName: \"kubernetes.io/projected/67e9a65f-5f3c-47fa-964a-f188158f77bc-kube-api-access-c2h5f\") pod \"67e9a65f-5f3c-47fa-964a-f188158f77bc\" (UID: \"67e9a65f-5f3c-47fa-964a-f188158f77bc\") " Nov 25 15:14:37 crc kubenswrapper[4806]: I1125 15:14:37.387081 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61467ee5-3ddb-4d7d-88d3-e48107c51338-operator-scripts\") pod \"61467ee5-3ddb-4d7d-88d3-e48107c51338\" (UID: \"61467ee5-3ddb-4d7d-88d3-e48107c51338\") " Nov 25 15:14:37 crc kubenswrapper[4806]: I1125 15:14:37.387149 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9115000-6aab-492e-925f-f44a574b5009-operator-scripts\") pod \"a9115000-6aab-492e-925f-f44a574b5009\" (UID: \"a9115000-6aab-492e-925f-f44a574b5009\") " Nov 25 15:14:37 crc kubenswrapper[4806]: I1125 15:14:37.387191 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62cc8598-cf68-4bb3-b272-ab87683edf6b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "62cc8598-cf68-4bb3-b272-ab87683edf6b" (UID: "62cc8598-cf68-4bb3-b272-ab87683edf6b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:14:37 crc kubenswrapper[4806]: I1125 15:14:37.387274 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67e9a65f-5f3c-47fa-964a-f188158f77bc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "67e9a65f-5f3c-47fa-964a-f188158f77bc" (UID: "67e9a65f-5f3c-47fa-964a-f188158f77bc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:14:37 crc kubenswrapper[4806]: I1125 15:14:37.387702 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61467ee5-3ddb-4d7d-88d3-e48107c51338-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "61467ee5-3ddb-4d7d-88d3-e48107c51338" (UID: "61467ee5-3ddb-4d7d-88d3-e48107c51338"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:14:37 crc kubenswrapper[4806]: I1125 15:14:37.387721 4806 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62cc8598-cf68-4bb3-b272-ab87683edf6b-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:37 crc kubenswrapper[4806]: I1125 15:14:37.387748 4806 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67e9a65f-5f3c-47fa-964a-f188158f77bc-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:37 crc kubenswrapper[4806]: I1125 15:14:37.387765 4806 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5eccd330-3d33-48e3-929b-2a67bb643af7-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:37 crc kubenswrapper[4806]: I1125 15:14:37.387782 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2v5ln\" (UniqueName: \"kubernetes.io/projected/5eccd330-3d33-48e3-929b-2a67bb643af7-kube-api-access-2v5ln\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:37 crc kubenswrapper[4806]: I1125 15:14:37.387727 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9115000-6aab-492e-925f-f44a574b5009-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a9115000-6aab-492e-925f-f44a574b5009" (UID: "a9115000-6aab-492e-925f-f44a574b5009"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:14:37 crc kubenswrapper[4806]: I1125 15:14:37.390447 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9115000-6aab-492e-925f-f44a574b5009-kube-api-access-kvgx6" (OuterVolumeSpecName: "kube-api-access-kvgx6") pod "a9115000-6aab-492e-925f-f44a574b5009" (UID: "a9115000-6aab-492e-925f-f44a574b5009"). InnerVolumeSpecName "kube-api-access-kvgx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:14:37 crc kubenswrapper[4806]: I1125 15:14:37.390479 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67e9a65f-5f3c-47fa-964a-f188158f77bc-kube-api-access-c2h5f" (OuterVolumeSpecName: "kube-api-access-c2h5f") pod "67e9a65f-5f3c-47fa-964a-f188158f77bc" (UID: "67e9a65f-5f3c-47fa-964a-f188158f77bc"). InnerVolumeSpecName "kube-api-access-c2h5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:14:37 crc kubenswrapper[4806]: I1125 15:14:37.391012 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62cc8598-cf68-4bb3-b272-ab87683edf6b-kube-api-access-bccpz" (OuterVolumeSpecName: "kube-api-access-bccpz") pod "62cc8598-cf68-4bb3-b272-ab87683edf6b" (UID: "62cc8598-cf68-4bb3-b272-ab87683edf6b"). InnerVolumeSpecName "kube-api-access-bccpz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:14:37 crc kubenswrapper[4806]: I1125 15:14:37.391643 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61467ee5-3ddb-4d7d-88d3-e48107c51338-kube-api-access-lqntc" (OuterVolumeSpecName: "kube-api-access-lqntc") pod "61467ee5-3ddb-4d7d-88d3-e48107c51338" (UID: "61467ee5-3ddb-4d7d-88d3-e48107c51338"). InnerVolumeSpecName "kube-api-access-lqntc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:14:37 crc kubenswrapper[4806]: I1125 15:14:37.489495 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bccpz\" (UniqueName: \"kubernetes.io/projected/62cc8598-cf68-4bb3-b272-ab87683edf6b-kube-api-access-bccpz\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:37 crc kubenswrapper[4806]: I1125 15:14:37.489527 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2h5f\" (UniqueName: \"kubernetes.io/projected/67e9a65f-5f3c-47fa-964a-f188158f77bc-kube-api-access-c2h5f\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:37 crc kubenswrapper[4806]: I1125 15:14:37.489540 4806 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61467ee5-3ddb-4d7d-88d3-e48107c51338-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:37 crc kubenswrapper[4806]: I1125 15:14:37.489548 4806 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9115000-6aab-492e-925f-f44a574b5009-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:37 crc kubenswrapper[4806]: I1125 15:14:37.489558 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvgx6\" (UniqueName: \"kubernetes.io/projected/a9115000-6aab-492e-925f-f44a574b5009-kube-api-access-kvgx6\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:37 crc kubenswrapper[4806]: I1125 15:14:37.489566 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqntc\" (UniqueName: \"kubernetes.io/projected/61467ee5-3ddb-4d7d-88d3-e48107c51338-kube-api-access-lqntc\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:37 crc kubenswrapper[4806]: I1125 15:14:37.942993 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"aafcef1f-4988-49d1-88f0-47a44d8f18fc","Type":"ContainerStarted","Data":"eee761b406be6fae2f4605379d0d78d6215ce6d5f66be1baaeaad7260a6e8d3b"} Nov 25 15:14:37 crc kubenswrapper[4806]: I1125 15:14:37.946053 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-8c9d-account-create-rx52p" event={"ID":"67e9a65f-5f3c-47fa-964a-f188158f77bc","Type":"ContainerDied","Data":"535270068aeca1216bd5849fadfe99dc9b9ef66f0eeebf800dd87f61416a12fc"} Nov 25 15:14:37 crc kubenswrapper[4806]: I1125 15:14:37.946096 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="535270068aeca1216bd5849fadfe99dc9b9ef66f0eeebf800dd87f61416a12fc" Nov 25 15:14:37 crc kubenswrapper[4806]: I1125 15:14:37.946334 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-8c9d-account-create-rx52p" Nov 25 15:14:37 crc kubenswrapper[4806]: I1125 15:14:37.947547 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-56ac-account-create-vvjww" Nov 25 15:14:37 crc kubenswrapper[4806]: I1125 15:14:37.948110 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-56ac-account-create-vvjww" event={"ID":"61467ee5-3ddb-4d7d-88d3-e48107c51338","Type":"ContainerDied","Data":"e891d0f38e0e67de79accb16f3271da576a1f24875ee1f066380728547e213ac"} Nov 25 15:14:37 crc kubenswrapper[4806]: I1125 15:14:37.948136 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e891d0f38e0e67de79accb16f3271da576a1f24875ee1f066380728547e213ac" Nov 25 15:14:38 crc kubenswrapper[4806]: I1125 15:14:38.023598 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=32.023575154 podStartE2EDuration="32.023575154s" podCreationTimestamp="2025-11-25 15:14:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:14:38.004300675 +0000 UTC m=+1310.656443096" watchObservedRunningTime="2025-11-25 15:14:38.023575154 +0000 UTC m=+1310.675717565" Nov 25 15:14:38 crc kubenswrapper[4806]: I1125 15:14:38.025830 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"837cf2fb-8640-4ac3-ad91-84ff1dba54e6","Type":"ContainerStarted","Data":"94ed0a0e3c508f5d46712304578ece997ea19cef4490260185d1833cd4012699"} Nov 25 15:14:38 crc kubenswrapper[4806]: I1125 15:14:38.025864 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"837cf2fb-8640-4ac3-ad91-84ff1dba54e6","Type":"ContainerStarted","Data":"bf6f4a7d3d5bd370142a75366b9844464310a35f74e2fa804fee7e9c39a065c9"} Nov 25 15:14:38 crc kubenswrapper[4806]: I1125 15:14:38.025875 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"837cf2fb-8640-4ac3-ad91-84ff1dba54e6","Type":"ContainerStarted","Data":"c9788b15f20573e37cf395c577a063aba94296b6d035bb2dee878353f1c29593"} Nov 25 15:14:38 crc kubenswrapper[4806]: I1125 15:14:38.041051 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5265-account-create-vr75r" event={"ID":"62cc8598-cf68-4bb3-b272-ab87683edf6b","Type":"ContainerDied","Data":"571fa43d0d6b745cbcfa58b47a8a825f03d16f61189f223cac4e60b16b655554"} Nov 25 15:14:38 crc kubenswrapper[4806]: I1125 15:14:38.041283 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="571fa43d0d6b745cbcfa58b47a8a825f03d16f61189f223cac4e60b16b655554" Nov 25 15:14:38 crc kubenswrapper[4806]: I1125 15:14:38.041382 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5265-account-create-vr75r" Nov 25 15:14:38 crc kubenswrapper[4806]: I1125 15:14:38.045478 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-rknkz" event={"ID":"a9115000-6aab-492e-925f-f44a574b5009","Type":"ContainerDied","Data":"264e3629abd90caeaa9ff9c07b3069c624401bf62176523ced69ebf60c2a90f8"} Nov 25 15:14:38 crc kubenswrapper[4806]: I1125 15:14:38.045514 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="264e3629abd90caeaa9ff9c07b3069c624401bf62176523ced69ebf60c2a90f8" Nov 25 15:14:38 crc kubenswrapper[4806]: I1125 15:14:38.045577 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-rknkz" Nov 25 15:14:38 crc kubenswrapper[4806]: I1125 15:14:38.053659 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-vrlpg" event={"ID":"5eccd330-3d33-48e3-929b-2a67bb643af7","Type":"ContainerDied","Data":"9ee41f9330aec83a8e2329d48b0b91f0fc18a9c36b1c7438c6eea8b6f442af0c"} Nov 25 15:14:38 crc kubenswrapper[4806]: I1125 15:14:38.053858 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ee41f9330aec83a8e2329d48b0b91f0fc18a9c36b1c7438c6eea8b6f442af0c" Nov 25 15:14:38 crc kubenswrapper[4806]: I1125 15:14:38.054018 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-vrlpg" Nov 25 15:14:38 crc kubenswrapper[4806]: I1125 15:14:38.066666 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-xnqxm" event={"ID":"634468c1-6446-422a-9816-b19afdf8858d","Type":"ContainerStarted","Data":"21b1e6a3dcb8fbafe003b9fa097d1bf3a9a766d92e86a62bfe3a3708f22473dd"} Nov 25 15:14:38 crc kubenswrapper[4806]: I1125 15:14:38.094657 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-xnqxm" podStartSLOduration=2.767214613 podStartE2EDuration="8.094639907s" podCreationTimestamp="2025-11-25 15:14:30 +0000 UTC" firstStartedPulling="2025-11-25 15:14:31.961194874 +0000 UTC m=+1304.613337285" lastFinishedPulling="2025-11-25 15:14:37.288620158 +0000 UTC m=+1309.940762579" observedRunningTime="2025-11-25 15:14:38.082597084 +0000 UTC m=+1310.734739495" watchObservedRunningTime="2025-11-25 15:14:38.094639907 +0000 UTC m=+1310.746782318" Nov 25 15:14:38 crc kubenswrapper[4806]: E1125 15:14:38.698264 4806 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a870706_cfbf_4cea_a993_238c06b56be3.slice\": RecentStats: unable to find data in memory cache]" Nov 25 15:14:39 crc kubenswrapper[4806]: I1125 15:14:39.080216 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"837cf2fb-8640-4ac3-ad91-84ff1dba54e6","Type":"ContainerStarted","Data":"3adab0b0ae6bc09c917b7890be7f7c5a2ca3ae43dc9ee4e605609348b18ba9af"} Nov 25 15:14:39 crc kubenswrapper[4806]: I1125 15:14:39.080559 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"837cf2fb-8640-4ac3-ad91-84ff1dba54e6","Type":"ContainerStarted","Data":"af8ce657db937bc617e74f5dc32c186fc7929db9f3a20ce86a95983e4b314b4f"} Nov 25 15:14:40 crc kubenswrapper[4806]: I1125 15:14:40.101045 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"837cf2fb-8640-4ac3-ad91-84ff1dba54e6","Type":"ContainerStarted","Data":"3f89d43ec0cf6a39ab86d0dc13d63c21e9aae5b9266ac08719a15875a2988c50"} Nov 25 15:14:41 crc kubenswrapper[4806]: I1125 15:14:41.114685 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"837cf2fb-8640-4ac3-ad91-84ff1dba54e6","Type":"ContainerStarted","Data":"96dbec5c47e83f61dd91ec903684ae0c3e0a9adb12e59c3bbbe730b27e33ad8c"} Nov 25 15:14:41 crc kubenswrapper[4806]: I1125 15:14:41.815960 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Nov 25 15:14:42 crc kubenswrapper[4806]: I1125 15:14:42.124402 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-db-sync-n88tp" event={"ID":"e7e521a6-108d-45db-ad10-42e394a9cd1a","Type":"ContainerStarted","Data":"706f4aa3780c37be61f5872cab7a0bd985ca6ac579fc96ba25423056c7cce6d8"} Nov 25 15:14:42 crc kubenswrapper[4806]: I1125 15:14:42.177546 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=44.019557926 podStartE2EDuration="57.17752817s" podCreationTimestamp="2025-11-25 15:13:45 +0000 UTC" firstStartedPulling="2025-11-25 15:14:21.385738164 +0000 UTC m=+1294.037880575" lastFinishedPulling="2025-11-25 15:14:34.543708408 +0000 UTC m=+1307.195850819" observedRunningTime="2025-11-25 15:14:42.17297599 +0000 UTC m=+1314.825118431" watchObservedRunningTime="2025-11-25 15:14:42.17752817 +0000 UTC m=+1314.829670581" Nov 25 15:14:42 crc kubenswrapper[4806]: I1125 15:14:42.202858 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-n88tp" podStartSLOduration=2.808691149 podStartE2EDuration="44.20283522s" podCreationTimestamp="2025-11-25 15:13:58 +0000 UTC" firstStartedPulling="2025-11-25 15:13:59.81659199 +0000 UTC m=+1272.468734401" lastFinishedPulling="2025-11-25 15:14:41.210736061 +0000 UTC m=+1313.862878472" observedRunningTime="2025-11-25 15:14:42.197472598 +0000 UTC m=+1314.849615009" watchObservedRunningTime="2025-11-25 15:14:42.20283522 +0000 UTC m=+1314.854977631" Nov 25 15:14:42 crc kubenswrapper[4806]: I1125 15:14:42.517715 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-hk7c8"] Nov 25 15:14:42 crc kubenswrapper[4806]: E1125 15:14:42.518117 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61467ee5-3ddb-4d7d-88d3-e48107c51338" containerName="mariadb-account-create" Nov 25 15:14:42 crc kubenswrapper[4806]: I1125 15:14:42.518134 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="61467ee5-3ddb-4d7d-88d3-e48107c51338" containerName="mariadb-account-create" Nov 25 15:14:42 crc kubenswrapper[4806]: E1125 15:14:42.518150 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d7a2080-b9b4-4a5d-8c23-905ee26d6afa" containerName="mariadb-database-create" Nov 25 15:14:42 crc kubenswrapper[4806]: I1125 15:14:42.518156 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d7a2080-b9b4-4a5d-8c23-905ee26d6afa" containerName="mariadb-database-create" Nov 25 15:14:42 crc kubenswrapper[4806]: E1125 15:14:42.518169 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5eccd330-3d33-48e3-929b-2a67bb643af7" containerName="mariadb-database-create" Nov 25 15:14:42 crc kubenswrapper[4806]: I1125 15:14:42.518176 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="5eccd330-3d33-48e3-929b-2a67bb643af7" containerName="mariadb-database-create" Nov 25 15:14:42 crc kubenswrapper[4806]: E1125 15:14:42.518188 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62cc8598-cf68-4bb3-b272-ab87683edf6b" containerName="mariadb-account-create" Nov 25 15:14:42 crc kubenswrapper[4806]: I1125 15:14:42.518193 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="62cc8598-cf68-4bb3-b272-ab87683edf6b" containerName="mariadb-account-create" Nov 25 15:14:42 crc kubenswrapper[4806]: E1125 15:14:42.518207 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79e37330-4341-48fc-b9d5-bd0403e6237a" containerName="mariadb-database-create" Nov 25 15:14:42 crc kubenswrapper[4806]: I1125 15:14:42.518213 4806 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="79e37330-4341-48fc-b9d5-bd0403e6237a" containerName="mariadb-database-create" Nov 25 15:14:42 crc kubenswrapper[4806]: E1125 15:14:42.518222 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9115000-6aab-492e-925f-f44a574b5009" containerName="mariadb-database-create" Nov 25 15:14:42 crc kubenswrapper[4806]: I1125 15:14:42.518228 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9115000-6aab-492e-925f-f44a574b5009" containerName="mariadb-database-create" Nov 25 15:14:42 crc kubenswrapper[4806]: E1125 15:14:42.518250 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94278b3c-2207-463b-9700-e8ab16c72b5b" containerName="mariadb-account-create" Nov 25 15:14:42 crc kubenswrapper[4806]: I1125 15:14:42.518256 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="94278b3c-2207-463b-9700-e8ab16c72b5b" containerName="mariadb-account-create" Nov 25 15:14:42 crc kubenswrapper[4806]: E1125 15:14:42.518268 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67e9a65f-5f3c-47fa-964a-f188158f77bc" containerName="mariadb-account-create" Nov 25 15:14:42 crc kubenswrapper[4806]: I1125 15:14:42.518274 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="67e9a65f-5f3c-47fa-964a-f188158f77bc" containerName="mariadb-account-create" Nov 25 15:14:42 crc kubenswrapper[4806]: I1125 15:14:42.518445 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="67e9a65f-5f3c-47fa-964a-f188158f77bc" containerName="mariadb-account-create" Nov 25 15:14:42 crc kubenswrapper[4806]: I1125 15:14:42.518458 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="94278b3c-2207-463b-9700-e8ab16c72b5b" containerName="mariadb-account-create" Nov 25 15:14:42 crc kubenswrapper[4806]: I1125 15:14:42.518469 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="79e37330-4341-48fc-b9d5-bd0403e6237a" containerName="mariadb-database-create" Nov 25 15:14:42 crc kubenswrapper[4806]: I1125 15:14:42.518476 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d7a2080-b9b4-4a5d-8c23-905ee26d6afa" containerName="mariadb-database-create" Nov 25 15:14:42 crc kubenswrapper[4806]: I1125 15:14:42.518487 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="61467ee5-3ddb-4d7d-88d3-e48107c51338" containerName="mariadb-account-create" Nov 25 15:14:42 crc kubenswrapper[4806]: I1125 15:14:42.518496 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9115000-6aab-492e-925f-f44a574b5009" containerName="mariadb-database-create" Nov 25 15:14:42 crc kubenswrapper[4806]: I1125 15:14:42.518504 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="62cc8598-cf68-4bb3-b272-ab87683edf6b" containerName="mariadb-account-create" Nov 25 15:14:42 crc kubenswrapper[4806]: I1125 15:14:42.518513 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="5eccd330-3d33-48e3-929b-2a67bb643af7" containerName="mariadb-database-create" Nov 25 15:14:42 crc kubenswrapper[4806]: I1125 15:14:42.525233 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-hk7c8" Nov 25 15:14:42 crc kubenswrapper[4806]: I1125 15:14:42.527449 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Nov 25 15:14:42 crc kubenswrapper[4806]: I1125 15:14:42.536506 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-hk7c8"] Nov 25 15:14:42 crc kubenswrapper[4806]: I1125 15:14:42.604144 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e6e7521-889f-47b4-84d3-0437b1a844f2-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-hk7c8\" (UID: \"5e6e7521-889f-47b4-84d3-0437b1a844f2\") " pod="openstack/dnsmasq-dns-764c5664d7-hk7c8" Nov 25 15:14:42 crc kubenswrapper[4806]: I1125 15:14:42.604198 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhw9z\" (UniqueName: \"kubernetes.io/projected/5e6e7521-889f-47b4-84d3-0437b1a844f2-kube-api-access-jhw9z\") pod \"dnsmasq-dns-764c5664d7-hk7c8\" (UID: \"5e6e7521-889f-47b4-84d3-0437b1a844f2\") " pod="openstack/dnsmasq-dns-764c5664d7-hk7c8" Nov 25 15:14:42 crc kubenswrapper[4806]: I1125 15:14:42.604293 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e6e7521-889f-47b4-84d3-0437b1a844f2-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-hk7c8\" (UID: \"5e6e7521-889f-47b4-84d3-0437b1a844f2\") " pod="openstack/dnsmasq-dns-764c5664d7-hk7c8" Nov 25 15:14:42 crc kubenswrapper[4806]: I1125 15:14:42.604320 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e6e7521-889f-47b4-84d3-0437b1a844f2-config\") pod \"dnsmasq-dns-764c5664d7-hk7c8\" (UID: \"5e6e7521-889f-47b4-84d3-0437b1a844f2\") " pod="openstack/dnsmasq-dns-764c5664d7-hk7c8" Nov 25 15:14:42 crc kubenswrapper[4806]: I1125 15:14:42.604360 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5e6e7521-889f-47b4-84d3-0437b1a844f2-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-hk7c8\" (UID: \"5e6e7521-889f-47b4-84d3-0437b1a844f2\") " pod="openstack/dnsmasq-dns-764c5664d7-hk7c8" Nov 25 15:14:42 crc kubenswrapper[4806]: I1125 15:14:42.604513 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e6e7521-889f-47b4-84d3-0437b1a844f2-dns-svc\") pod \"dnsmasq-dns-764c5664d7-hk7c8\" (UID: \"5e6e7521-889f-47b4-84d3-0437b1a844f2\") " pod="openstack/dnsmasq-dns-764c5664d7-hk7c8" Nov 25 15:14:42 crc kubenswrapper[4806]: I1125 15:14:42.706069 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e6e7521-889f-47b4-84d3-0437b1a844f2-dns-svc\") pod \"dnsmasq-dns-764c5664d7-hk7c8\" (UID: \"5e6e7521-889f-47b4-84d3-0437b1a844f2\") " pod="openstack/dnsmasq-dns-764c5664d7-hk7c8" Nov 25 15:14:42 crc kubenswrapper[4806]: I1125 15:14:42.706122 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e6e7521-889f-47b4-84d3-0437b1a844f2-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-hk7c8\" (UID: 
\"5e6e7521-889f-47b4-84d3-0437b1a844f2\") " pod="openstack/dnsmasq-dns-764c5664d7-hk7c8" Nov 25 15:14:42 crc kubenswrapper[4806]: I1125 15:14:42.706151 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhw9z\" (UniqueName: \"kubernetes.io/projected/5e6e7521-889f-47b4-84d3-0437b1a844f2-kube-api-access-jhw9z\") pod \"dnsmasq-dns-764c5664d7-hk7c8\" (UID: \"5e6e7521-889f-47b4-84d3-0437b1a844f2\") " pod="openstack/dnsmasq-dns-764c5664d7-hk7c8" Nov 25 15:14:42 crc kubenswrapper[4806]: I1125 15:14:42.706184 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e6e7521-889f-47b4-84d3-0437b1a844f2-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-hk7c8\" (UID: \"5e6e7521-889f-47b4-84d3-0437b1a844f2\") " pod="openstack/dnsmasq-dns-764c5664d7-hk7c8" Nov 25 15:14:42 crc kubenswrapper[4806]: I1125 15:14:42.706203 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e6e7521-889f-47b4-84d3-0437b1a844f2-config\") pod \"dnsmasq-dns-764c5664d7-hk7c8\" (UID: \"5e6e7521-889f-47b4-84d3-0437b1a844f2\") " pod="openstack/dnsmasq-dns-764c5664d7-hk7c8" Nov 25 15:14:42 crc kubenswrapper[4806]: I1125 15:14:42.706227 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5e6e7521-889f-47b4-84d3-0437b1a844f2-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-hk7c8\" (UID: \"5e6e7521-889f-47b4-84d3-0437b1a844f2\") " pod="openstack/dnsmasq-dns-764c5664d7-hk7c8" Nov 25 15:14:42 crc kubenswrapper[4806]: I1125 15:14:42.707015 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e6e7521-889f-47b4-84d3-0437b1a844f2-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-hk7c8\" (UID: \"5e6e7521-889f-47b4-84d3-0437b1a844f2\") " pod="openstack/dnsmasq-dns-764c5664d7-hk7c8" Nov 25 15:14:42 crc kubenswrapper[4806]: I1125 15:14:42.707051 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e6e7521-889f-47b4-84d3-0437b1a844f2-dns-svc\") pod \"dnsmasq-dns-764c5664d7-hk7c8\" (UID: \"5e6e7521-889f-47b4-84d3-0437b1a844f2\") " pod="openstack/dnsmasq-dns-764c5664d7-hk7c8" Nov 25 15:14:42 crc kubenswrapper[4806]: I1125 15:14:42.707053 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5e6e7521-889f-47b4-84d3-0437b1a844f2-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-hk7c8\" (UID: \"5e6e7521-889f-47b4-84d3-0437b1a844f2\") " pod="openstack/dnsmasq-dns-764c5664d7-hk7c8" Nov 25 15:14:42 crc kubenswrapper[4806]: I1125 15:14:42.707254 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e6e7521-889f-47b4-84d3-0437b1a844f2-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-hk7c8\" (UID: \"5e6e7521-889f-47b4-84d3-0437b1a844f2\") " pod="openstack/dnsmasq-dns-764c5664d7-hk7c8" Nov 25 15:14:42 crc kubenswrapper[4806]: I1125 15:14:42.707370 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e6e7521-889f-47b4-84d3-0437b1a844f2-config\") pod \"dnsmasq-dns-764c5664d7-hk7c8\" (UID: \"5e6e7521-889f-47b4-84d3-0437b1a844f2\") " pod="openstack/dnsmasq-dns-764c5664d7-hk7c8" Nov 25 15:14:42 
crc kubenswrapper[4806]: I1125 15:14:42.733818 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhw9z\" (UniqueName: \"kubernetes.io/projected/5e6e7521-889f-47b4-84d3-0437b1a844f2-kube-api-access-jhw9z\") pod \"dnsmasq-dns-764c5664d7-hk7c8\" (UID: \"5e6e7521-889f-47b4-84d3-0437b1a844f2\") " pod="openstack/dnsmasq-dns-764c5664d7-hk7c8" Nov 25 15:14:42 crc kubenswrapper[4806]: I1125 15:14:42.845755 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-hk7c8" Nov 25 15:14:43 crc kubenswrapper[4806]: I1125 15:14:43.393922 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-hk7c8"] Nov 25 15:14:44 crc kubenswrapper[4806]: I1125 15:14:44.153790 4806 generic.go:334] "Generic (PLEG): container finished" podID="5e6e7521-889f-47b4-84d3-0437b1a844f2" containerID="aadaac0b50b3f69ecc9c13edb3a6bbd2065e3a51ad3a3425e208f568df6f9b5f" exitCode=0 Nov 25 15:14:44 crc kubenswrapper[4806]: I1125 15:14:44.153866 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-hk7c8" event={"ID":"5e6e7521-889f-47b4-84d3-0437b1a844f2","Type":"ContainerDied","Data":"aadaac0b50b3f69ecc9c13edb3a6bbd2065e3a51ad3a3425e208f568df6f9b5f"} Nov 25 15:14:44 crc kubenswrapper[4806]: I1125 15:14:44.154313 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-hk7c8" event={"ID":"5e6e7521-889f-47b4-84d3-0437b1a844f2","Type":"ContainerStarted","Data":"0958d3651e07f4eea8ce01f9b2533a65cccada4c8b77b00b430c8c005501cd5a"} Nov 25 15:14:45 crc kubenswrapper[4806]: I1125 15:14:45.163837 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-hk7c8" event={"ID":"5e6e7521-889f-47b4-84d3-0437b1a844f2","Type":"ContainerStarted","Data":"e7da21c825b3a79732a3eb3454f858319557707db5281002204c2f7990df1bc2"} Nov 25 15:14:45 crc kubenswrapper[4806]: I1125 15:14:45.164245 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-hk7c8" Nov 25 15:14:45 crc kubenswrapper[4806]: I1125 15:14:45.190067 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-hk7c8" podStartSLOduration=3.190046922 podStartE2EDuration="3.190046922s" podCreationTimestamp="2025-11-25 15:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:14:45.181196711 +0000 UTC m=+1317.833339142" watchObservedRunningTime="2025-11-25 15:14:45.190046922 +0000 UTC m=+1317.842189333" Nov 25 15:14:51 crc kubenswrapper[4806]: I1125 15:14:51.231512 4806 generic.go:334] "Generic (PLEG): container finished" podID="634468c1-6446-422a-9816-b19afdf8858d" containerID="21b1e6a3dcb8fbafe003b9fa097d1bf3a9a766d92e86a62bfe3a3708f22473dd" exitCode=0 Nov 25 15:14:51 crc kubenswrapper[4806]: I1125 15:14:51.231593 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-xnqxm" event={"ID":"634468c1-6446-422a-9816-b19afdf8858d","Type":"ContainerDied","Data":"21b1e6a3dcb8fbafe003b9fa097d1bf3a9a766d92e86a62bfe3a3708f22473dd"} Nov 25 15:14:51 crc kubenswrapper[4806]: I1125 15:14:51.815794 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Nov 25 15:14:51 crc kubenswrapper[4806]: I1125 15:14:51.821765 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/prometheus-metric-storage-0" Nov 25 15:14:52 crc kubenswrapper[4806]: I1125 15:14:52.246592 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Nov 25 15:14:52 crc kubenswrapper[4806]: I1125 15:14:52.642391 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-xnqxm" Nov 25 15:14:52 crc kubenswrapper[4806]: I1125 15:14:52.721395 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/634468c1-6446-422a-9816-b19afdf8858d-combined-ca-bundle\") pod \"634468c1-6446-422a-9816-b19afdf8858d\" (UID: \"634468c1-6446-422a-9816-b19afdf8858d\") " Nov 25 15:14:52 crc kubenswrapper[4806]: I1125 15:14:52.721533 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxxhk\" (UniqueName: \"kubernetes.io/projected/634468c1-6446-422a-9816-b19afdf8858d-kube-api-access-rxxhk\") pod \"634468c1-6446-422a-9816-b19afdf8858d\" (UID: \"634468c1-6446-422a-9816-b19afdf8858d\") " Nov 25 15:14:52 crc kubenswrapper[4806]: I1125 15:14:52.721635 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/634468c1-6446-422a-9816-b19afdf8858d-config-data\") pod \"634468c1-6446-422a-9816-b19afdf8858d\" (UID: \"634468c1-6446-422a-9816-b19afdf8858d\") " Nov 25 15:14:52 crc kubenswrapper[4806]: I1125 15:14:52.738174 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/634468c1-6446-422a-9816-b19afdf8858d-kube-api-access-rxxhk" (OuterVolumeSpecName: "kube-api-access-rxxhk") pod "634468c1-6446-422a-9816-b19afdf8858d" (UID: "634468c1-6446-422a-9816-b19afdf8858d"). InnerVolumeSpecName "kube-api-access-rxxhk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:14:52 crc kubenswrapper[4806]: I1125 15:14:52.761533 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/634468c1-6446-422a-9816-b19afdf8858d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "634468c1-6446-422a-9816-b19afdf8858d" (UID: "634468c1-6446-422a-9816-b19afdf8858d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:14:52 crc kubenswrapper[4806]: I1125 15:14:52.788381 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/634468c1-6446-422a-9816-b19afdf8858d-config-data" (OuterVolumeSpecName: "config-data") pod "634468c1-6446-422a-9816-b19afdf8858d" (UID: "634468c1-6446-422a-9816-b19afdf8858d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:14:52 crc kubenswrapper[4806]: I1125 15:14:52.823983 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/634468c1-6446-422a-9816-b19afdf8858d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:52 crc kubenswrapper[4806]: I1125 15:14:52.824025 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxxhk\" (UniqueName: \"kubernetes.io/projected/634468c1-6446-422a-9816-b19afdf8858d-kube-api-access-rxxhk\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:52 crc kubenswrapper[4806]: I1125 15:14:52.824035 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/634468c1-6446-422a-9816-b19afdf8858d-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:52 crc kubenswrapper[4806]: I1125 15:14:52.846476 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-764c5664d7-hk7c8" Nov 25 15:14:52 crc kubenswrapper[4806]: I1125 15:14:52.903345 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-pxfdb"] Nov 25 15:14:52 crc kubenswrapper[4806]: I1125 15:14:52.903843 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-pxfdb" podUID="291eadf5-e50c-453d-aaf5-5fe457dae267" containerName="dnsmasq-dns" containerID="cri-o://cdd6a05c85039d7ad6147b2ac34e0a0d1ac12892e80d251e30c81fe0e810056d" gracePeriod=10 Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.248907 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-xnqxm" event={"ID":"634468c1-6446-422a-9816-b19afdf8858d","Type":"ContainerDied","Data":"60f99f71d296653822d2830475de1ef31f3a5752ba201c70de80713161f1e805"} Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.248933 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-xnqxm" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.248940 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60f99f71d296653822d2830475de1ef31f3a5752ba201c70de80713161f1e805" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.250690 4806 generic.go:334] "Generic (PLEG): container finished" podID="291eadf5-e50c-453d-aaf5-5fe457dae267" containerID="cdd6a05c85039d7ad6147b2ac34e0a0d1ac12892e80d251e30c81fe0e810056d" exitCode=0 Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.250771 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-pxfdb" event={"ID":"291eadf5-e50c-453d-aaf5-5fe457dae267","Type":"ContainerDied","Data":"cdd6a05c85039d7ad6147b2ac34e0a0d1ac12892e80d251e30c81fe0e810056d"} Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.433208 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-dh7st"] Nov 25 15:14:53 crc kubenswrapper[4806]: E1125 15:14:53.438932 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="634468c1-6446-422a-9816-b19afdf8858d" containerName="keystone-db-sync" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.439018 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="634468c1-6446-422a-9816-b19afdf8858d" containerName="keystone-db-sync" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.439281 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="634468c1-6446-422a-9816-b19afdf8858d" containerName="keystone-db-sync" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.440428 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-dh7st" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.442629 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-dh7st"] Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.512649 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-7qnjn"] Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.514221 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-7qnjn" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.518631 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.518727 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.518929 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.519060 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.519774 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nmg8l" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.538418 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5zw4\" (UniqueName: \"kubernetes.io/projected/e0ceb758-17b6-4a0e-9851-05d1ef8a8011-kube-api-access-h5zw4\") pod \"keystone-bootstrap-7qnjn\" (UID: \"e0ceb758-17b6-4a0e-9851-05d1ef8a8011\") " pod="openstack/keystone-bootstrap-7qnjn" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.538471 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8aace553-74e7-4dd9-83d6-3c565a18a3f9-dns-svc\") pod \"dnsmasq-dns-5959f8865f-dh7st\" (UID: \"8aace553-74e7-4dd9-83d6-3c565a18a3f9\") " pod="openstack/dnsmasq-dns-5959f8865f-dh7st" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.538521 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7wpx\" (UniqueName: \"kubernetes.io/projected/8aace553-74e7-4dd9-83d6-3c565a18a3f9-kube-api-access-p7wpx\") pod \"dnsmasq-dns-5959f8865f-dh7st\" (UID: \"8aace553-74e7-4dd9-83d6-3c565a18a3f9\") " pod="openstack/dnsmasq-dns-5959f8865f-dh7st" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.538543 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8aace553-74e7-4dd9-83d6-3c565a18a3f9-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-dh7st\" (UID: \"8aace553-74e7-4dd9-83d6-3c565a18a3f9\") " pod="openstack/dnsmasq-dns-5959f8865f-dh7st" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.538580 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e0ceb758-17b6-4a0e-9851-05d1ef8a8011-credential-keys\") pod \"keystone-bootstrap-7qnjn\" (UID: \"e0ceb758-17b6-4a0e-9851-05d1ef8a8011\") " pod="openstack/keystone-bootstrap-7qnjn" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.538599 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0ceb758-17b6-4a0e-9851-05d1ef8a8011-scripts\") pod \"keystone-bootstrap-7qnjn\" (UID: \"e0ceb758-17b6-4a0e-9851-05d1ef8a8011\") " pod="openstack/keystone-bootstrap-7qnjn" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.538627 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/8aace553-74e7-4dd9-83d6-3c565a18a3f9-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-dh7st\" (UID: \"8aace553-74e7-4dd9-83d6-3c565a18a3f9\") " pod="openstack/dnsmasq-dns-5959f8865f-dh7st" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.538650 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e0ceb758-17b6-4a0e-9851-05d1ef8a8011-fernet-keys\") pod \"keystone-bootstrap-7qnjn\" (UID: \"e0ceb758-17b6-4a0e-9851-05d1ef8a8011\") " pod="openstack/keystone-bootstrap-7qnjn" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.538770 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8aace553-74e7-4dd9-83d6-3c565a18a3f9-config\") pod \"dnsmasq-dns-5959f8865f-dh7st\" (UID: \"8aace553-74e7-4dd9-83d6-3c565a18a3f9\") " pod="openstack/dnsmasq-dns-5959f8865f-dh7st" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.538949 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0ceb758-17b6-4a0e-9851-05d1ef8a8011-config-data\") pod \"keystone-bootstrap-7qnjn\" (UID: \"e0ceb758-17b6-4a0e-9851-05d1ef8a8011\") " pod="openstack/keystone-bootstrap-7qnjn" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.538997 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8aace553-74e7-4dd9-83d6-3c565a18a3f9-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-dh7st\" (UID: \"8aace553-74e7-4dd9-83d6-3c565a18a3f9\") " pod="openstack/dnsmasq-dns-5959f8865f-dh7st" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.539205 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0ceb758-17b6-4a0e-9851-05d1ef8a8011-combined-ca-bundle\") pod \"keystone-bootstrap-7qnjn\" (UID: \"e0ceb758-17b6-4a0e-9851-05d1ef8a8011\") " pod="openstack/keystone-bootstrap-7qnjn" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.547638 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-7qnjn"] Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.641154 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0ceb758-17b6-4a0e-9851-05d1ef8a8011-config-data\") pod \"keystone-bootstrap-7qnjn\" (UID: \"e0ceb758-17b6-4a0e-9851-05d1ef8a8011\") " pod="openstack/keystone-bootstrap-7qnjn" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.641531 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8aace553-74e7-4dd9-83d6-3c565a18a3f9-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-dh7st\" (UID: \"8aace553-74e7-4dd9-83d6-3c565a18a3f9\") " pod="openstack/dnsmasq-dns-5959f8865f-dh7st" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.641683 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0ceb758-17b6-4a0e-9851-05d1ef8a8011-combined-ca-bundle\") pod \"keystone-bootstrap-7qnjn\" (UID: \"e0ceb758-17b6-4a0e-9851-05d1ef8a8011\") " pod="openstack/keystone-bootstrap-7qnjn" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 
15:14:53.641829 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5zw4\" (UniqueName: \"kubernetes.io/projected/e0ceb758-17b6-4a0e-9851-05d1ef8a8011-kube-api-access-h5zw4\") pod \"keystone-bootstrap-7qnjn\" (UID: \"e0ceb758-17b6-4a0e-9851-05d1ef8a8011\") " pod="openstack/keystone-bootstrap-7qnjn" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.641943 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8aace553-74e7-4dd9-83d6-3c565a18a3f9-dns-svc\") pod \"dnsmasq-dns-5959f8865f-dh7st\" (UID: \"8aace553-74e7-4dd9-83d6-3c565a18a3f9\") " pod="openstack/dnsmasq-dns-5959f8865f-dh7st" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.642054 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7wpx\" (UniqueName: \"kubernetes.io/projected/8aace553-74e7-4dd9-83d6-3c565a18a3f9-kube-api-access-p7wpx\") pod \"dnsmasq-dns-5959f8865f-dh7st\" (UID: \"8aace553-74e7-4dd9-83d6-3c565a18a3f9\") " pod="openstack/dnsmasq-dns-5959f8865f-dh7st" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.642166 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8aace553-74e7-4dd9-83d6-3c565a18a3f9-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-dh7st\" (UID: \"8aace553-74e7-4dd9-83d6-3c565a18a3f9\") " pod="openstack/dnsmasq-dns-5959f8865f-dh7st" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.642308 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e0ceb758-17b6-4a0e-9851-05d1ef8a8011-credential-keys\") pod \"keystone-bootstrap-7qnjn\" (UID: \"e0ceb758-17b6-4a0e-9851-05d1ef8a8011\") " pod="openstack/keystone-bootstrap-7qnjn" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.642438 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0ceb758-17b6-4a0e-9851-05d1ef8a8011-scripts\") pod \"keystone-bootstrap-7qnjn\" (UID: \"e0ceb758-17b6-4a0e-9851-05d1ef8a8011\") " pod="openstack/keystone-bootstrap-7qnjn" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.642958 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8aace553-74e7-4dd9-83d6-3c565a18a3f9-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-dh7st\" (UID: \"8aace553-74e7-4dd9-83d6-3c565a18a3f9\") " pod="openstack/dnsmasq-dns-5959f8865f-dh7st" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.643087 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e0ceb758-17b6-4a0e-9851-05d1ef8a8011-fernet-keys\") pod \"keystone-bootstrap-7qnjn\" (UID: \"e0ceb758-17b6-4a0e-9851-05d1ef8a8011\") " pod="openstack/keystone-bootstrap-7qnjn" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.643247 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8aace553-74e7-4dd9-83d6-3c565a18a3f9-config\") pod \"dnsmasq-dns-5959f8865f-dh7st\" (UID: \"8aace553-74e7-4dd9-83d6-3c565a18a3f9\") " pod="openstack/dnsmasq-dns-5959f8865f-dh7st" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.644206 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/8aace553-74e7-4dd9-83d6-3c565a18a3f9-config\") pod \"dnsmasq-dns-5959f8865f-dh7st\" (UID: \"8aace553-74e7-4dd9-83d6-3c565a18a3f9\") " pod="openstack/dnsmasq-dns-5959f8865f-dh7st" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.644969 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8aace553-74e7-4dd9-83d6-3c565a18a3f9-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-dh7st\" (UID: \"8aace553-74e7-4dd9-83d6-3c565a18a3f9\") " pod="openstack/dnsmasq-dns-5959f8865f-dh7st" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.646487 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8aace553-74e7-4dd9-83d6-3c565a18a3f9-dns-svc\") pod \"dnsmasq-dns-5959f8865f-dh7st\" (UID: \"8aace553-74e7-4dd9-83d6-3c565a18a3f9\") " pod="openstack/dnsmasq-dns-5959f8865f-dh7st" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.647169 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8aace553-74e7-4dd9-83d6-3c565a18a3f9-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-dh7st\" (UID: \"8aace553-74e7-4dd9-83d6-3c565a18a3f9\") " pod="openstack/dnsmasq-dns-5959f8865f-dh7st" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.647621 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8aace553-74e7-4dd9-83d6-3c565a18a3f9-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-dh7st\" (UID: \"8aace553-74e7-4dd9-83d6-3c565a18a3f9\") " pod="openstack/dnsmasq-dns-5959f8865f-dh7st" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.655955 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e0ceb758-17b6-4a0e-9851-05d1ef8a8011-credential-keys\") pod \"keystone-bootstrap-7qnjn\" (UID: \"e0ceb758-17b6-4a0e-9851-05d1ef8a8011\") " pod="openstack/keystone-bootstrap-7qnjn" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.656768 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e0ceb758-17b6-4a0e-9851-05d1ef8a8011-fernet-keys\") pod \"keystone-bootstrap-7qnjn\" (UID: \"e0ceb758-17b6-4a0e-9851-05d1ef8a8011\") " pod="openstack/keystone-bootstrap-7qnjn" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.656899 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0ceb758-17b6-4a0e-9851-05d1ef8a8011-scripts\") pod \"keystone-bootstrap-7qnjn\" (UID: \"e0ceb758-17b6-4a0e-9851-05d1ef8a8011\") " pod="openstack/keystone-bootstrap-7qnjn" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.657008 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0ceb758-17b6-4a0e-9851-05d1ef8a8011-combined-ca-bundle\") pod \"keystone-bootstrap-7qnjn\" (UID: \"e0ceb758-17b6-4a0e-9851-05d1ef8a8011\") " pod="openstack/keystone-bootstrap-7qnjn" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.675285 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0ceb758-17b6-4a0e-9851-05d1ef8a8011-config-data\") pod \"keystone-bootstrap-7qnjn\" (UID: \"e0ceb758-17b6-4a0e-9851-05d1ef8a8011\") " 
pod="openstack/keystone-bootstrap-7qnjn" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.687853 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5zw4\" (UniqueName: \"kubernetes.io/projected/e0ceb758-17b6-4a0e-9851-05d1ef8a8011-kube-api-access-h5zw4\") pod \"keystone-bootstrap-7qnjn\" (UID: \"e0ceb758-17b6-4a0e-9851-05d1ef8a8011\") " pod="openstack/keystone-bootstrap-7qnjn" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.702810 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7wpx\" (UniqueName: \"kubernetes.io/projected/8aace553-74e7-4dd9-83d6-3c565a18a3f9-kube-api-access-p7wpx\") pod \"dnsmasq-dns-5959f8865f-dh7st\" (UID: \"8aace553-74e7-4dd9-83d6-3c565a18a3f9\") " pod="openstack/dnsmasq-dns-5959f8865f-dh7st" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.753379 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-2nbxh"] Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.755055 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-2nbxh" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.759777 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-6cfdz" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.760027 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.760192 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.771273 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-dh7st" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.777545 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-sync-drlb4"] Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.785897 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-drlb4" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.792200 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.793457 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-dqwtc" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.794974 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.797371 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.814074 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-2nbxh"] Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.865461 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-7qnjn" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.866732 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/c2503ad9-21ed-44c9-ae5a-25307c751865-certs\") pod \"cloudkitty-db-sync-drlb4\" (UID: \"c2503ad9-21ed-44c9-ae5a-25307c751865\") " pod="openstack/cloudkitty-db-sync-drlb4" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.866812 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdpx6\" (UniqueName: \"kubernetes.io/projected/c2503ad9-21ed-44c9-ae5a-25307c751865-kube-api-access-xdpx6\") pod \"cloudkitty-db-sync-drlb4\" (UID: \"c2503ad9-21ed-44c9-ae5a-25307c751865\") " pod="openstack/cloudkitty-db-sync-drlb4" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.866848 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2503ad9-21ed-44c9-ae5a-25307c751865-combined-ca-bundle\") pod \"cloudkitty-db-sync-drlb4\" (UID: \"c2503ad9-21ed-44c9-ae5a-25307c751865\") " pod="openstack/cloudkitty-db-sync-drlb4" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.866867 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2503ad9-21ed-44c9-ae5a-25307c751865-scripts\") pod \"cloudkitty-db-sync-drlb4\" (UID: \"c2503ad9-21ed-44c9-ae5a-25307c751865\") " pod="openstack/cloudkitty-db-sync-drlb4" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.866888 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2503ad9-21ed-44c9-ae5a-25307c751865-config-data\") pod \"cloudkitty-db-sync-drlb4\" (UID: \"c2503ad9-21ed-44c9-ae5a-25307c751865\") " pod="openstack/cloudkitty-db-sync-drlb4" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.890887 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-drlb4"] Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.969145 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.970777 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11aeb498-3614-4aac-a381-9bf0392cf5dc-combined-ca-bundle\") pod \"neutron-db-sync-2nbxh\" (UID: \"11aeb498-3614-4aac-a381-9bf0392cf5dc\") " pod="openstack/neutron-db-sync-2nbxh" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.970838 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdpx6\" (UniqueName: \"kubernetes.io/projected/c2503ad9-21ed-44c9-ae5a-25307c751865-kube-api-access-xdpx6\") pod \"cloudkitty-db-sync-drlb4\" (UID: \"c2503ad9-21ed-44c9-ae5a-25307c751865\") " pod="openstack/cloudkitty-db-sync-drlb4" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.970907 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2503ad9-21ed-44c9-ae5a-25307c751865-combined-ca-bundle\") pod \"cloudkitty-db-sync-drlb4\" (UID: \"c2503ad9-21ed-44c9-ae5a-25307c751865\") " pod="openstack/cloudkitty-db-sync-drlb4" Nov 25 15:14:53 crc 
kubenswrapper[4806]: I1125 15:14:53.970941 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj58b\" (UniqueName: \"kubernetes.io/projected/11aeb498-3614-4aac-a381-9bf0392cf5dc-kube-api-access-dj58b\") pod \"neutron-db-sync-2nbxh\" (UID: \"11aeb498-3614-4aac-a381-9bf0392cf5dc\") " pod="openstack/neutron-db-sync-2nbxh" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.970971 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2503ad9-21ed-44c9-ae5a-25307c751865-scripts\") pod \"cloudkitty-db-sync-drlb4\" (UID: \"c2503ad9-21ed-44c9-ae5a-25307c751865\") " pod="openstack/cloudkitty-db-sync-drlb4" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.970999 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2503ad9-21ed-44c9-ae5a-25307c751865-config-data\") pod \"cloudkitty-db-sync-drlb4\" (UID: \"c2503ad9-21ed-44c9-ae5a-25307c751865\") " pod="openstack/cloudkitty-db-sync-drlb4" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.971033 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/11aeb498-3614-4aac-a381-9bf0392cf5dc-config\") pod \"neutron-db-sync-2nbxh\" (UID: \"11aeb498-3614-4aac-a381-9bf0392cf5dc\") " pod="openstack/neutron-db-sync-2nbxh" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.971173 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/c2503ad9-21ed-44c9-ae5a-25307c751865-certs\") pod \"cloudkitty-db-sync-drlb4\" (UID: \"c2503ad9-21ed-44c9-ae5a-25307c751865\") " pod="openstack/cloudkitty-db-sync-drlb4" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.975336 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.984218 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 15:14:53 crc kubenswrapper[4806]: I1125 15:14:53.984677 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:53.989915 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2503ad9-21ed-44c9-ae5a-25307c751865-combined-ca-bundle\") pod \"cloudkitty-db-sync-drlb4\" (UID: \"c2503ad9-21ed-44c9-ae5a-25307c751865\") " pod="openstack/cloudkitty-db-sync-drlb4" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:53.992167 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2503ad9-21ed-44c9-ae5a-25307c751865-config-data\") pod \"cloudkitty-db-sync-drlb4\" (UID: \"c2503ad9-21ed-44c9-ae5a-25307c751865\") " pod="openstack/cloudkitty-db-sync-drlb4" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.000387 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-7lfx4"] Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.001983 4806 util.go:30] "No sandbox for pod can be found. 
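Every pod in this burst also carries exactly one kubernetes.io/projected volume named kube-api-access-<suffix> (rxxhk, h5zw4, p7wpx, xdpx6, dj58b, ...), the service-account token volume the kubelet injects into each pod. Since some messages carry only the volume name, mapping suffixes back to owning pods can be useful; a sketch under the same stdin assumption:

#!/usr/bin/env python3
# Sketch: map each kube-api-access-<suffix> projected token volume to the
# pod that owns it, using the pod="<ns>/<name>" field on the same line.
import re
import sys

ACCESS = re.compile(r'(kube-api-access-[a-z0-9]+)\\?".*pod="([^"]+)"')

owners = {}
for line in sys.stdin:
    m = ACCESS.search(line)
    if m:
        owners.setdefault(m.group(1), m.group(2))  # keep first sighting

for vol, pod in sorted(owners.items()):
    print(f"{vol:28s} -> {pod}")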
Need to start a new one" pod="openstack/cinder-db-sync-7lfx4" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.010914 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2503ad9-21ed-44c9-ae5a-25307c751865-scripts\") pod \"cloudkitty-db-sync-drlb4\" (UID: \"c2503ad9-21ed-44c9-ae5a-25307c751865\") " pod="openstack/cloudkitty-db-sync-drlb4" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.018604 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/c2503ad9-21ed-44c9-ae5a-25307c751865-certs\") pod \"cloudkitty-db-sync-drlb4\" (UID: \"c2503ad9-21ed-44c9-ae5a-25307c751865\") " pod="openstack/cloudkitty-db-sync-drlb4" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.019226 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.019443 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-bqsxx" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.019660 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.036163 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdpx6\" (UniqueName: \"kubernetes.io/projected/c2503ad9-21ed-44c9-ae5a-25307c751865-kube-api-access-xdpx6\") pod \"cloudkitty-db-sync-drlb4\" (UID: \"c2503ad9-21ed-44c9-ae5a-25307c751865\") " pod="openstack/cloudkitty-db-sync-drlb4" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.071122 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.077210 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1b5c22d-b872-4857-b36c-5441ed9dfc9a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f1b5c22d-b872-4857-b36c-5441ed9dfc9a\") " pod="openstack/ceilometer-0" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.077252 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f1b5c22d-b872-4857-b36c-5441ed9dfc9a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f1b5c22d-b872-4857-b36c-5441ed9dfc9a\") " pod="openstack/ceilometer-0" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.077284 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2e7e600-c1a4-4bda-910b-c11fe9411cc9-combined-ca-bundle\") pod \"cinder-db-sync-7lfx4\" (UID: \"a2e7e600-c1a4-4bda-910b-c11fe9411cc9\") " pod="openstack/cinder-db-sync-7lfx4" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.077416 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2e7e600-c1a4-4bda-910b-c11fe9411cc9-scripts\") pod \"cinder-db-sync-7lfx4\" (UID: \"a2e7e600-c1a4-4bda-910b-c11fe9411cc9\") " pod="openstack/cinder-db-sync-7lfx4" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.077479 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/a2e7e600-c1a4-4bda-910b-c11fe9411cc9-etc-machine-id\") pod \"cinder-db-sync-7lfx4\" (UID: \"a2e7e600-c1a4-4bda-910b-c11fe9411cc9\") " pod="openstack/cinder-db-sync-7lfx4" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.077493 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1b5c22d-b872-4857-b36c-5441ed9dfc9a-config-data\") pod \"ceilometer-0\" (UID: \"f1b5c22d-b872-4857-b36c-5441ed9dfc9a\") " pod="openstack/ceilometer-0" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.077525 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2e7e600-c1a4-4bda-910b-c11fe9411cc9-config-data\") pod \"cinder-db-sync-7lfx4\" (UID: \"a2e7e600-c1a4-4bda-910b-c11fe9411cc9\") " pod="openstack/cinder-db-sync-7lfx4" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.077545 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1b5c22d-b872-4857-b36c-5441ed9dfc9a-log-httpd\") pod \"ceilometer-0\" (UID: \"f1b5c22d-b872-4857-b36c-5441ed9dfc9a\") " pod="openstack/ceilometer-0" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.077625 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwkpf\" (UniqueName: \"kubernetes.io/projected/f1b5c22d-b872-4857-b36c-5441ed9dfc9a-kube-api-access-gwkpf\") pod \"ceilometer-0\" (UID: \"f1b5c22d-b872-4857-b36c-5441ed9dfc9a\") " pod="openstack/ceilometer-0" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.077671 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11aeb498-3614-4aac-a381-9bf0392cf5dc-combined-ca-bundle\") pod \"neutron-db-sync-2nbxh\" (UID: \"11aeb498-3614-4aac-a381-9bf0392cf5dc\") " pod="openstack/neutron-db-sync-2nbxh" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.077758 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj58b\" (UniqueName: \"kubernetes.io/projected/11aeb498-3614-4aac-a381-9bf0392cf5dc-kube-api-access-dj58b\") pod \"neutron-db-sync-2nbxh\" (UID: \"11aeb498-3614-4aac-a381-9bf0392cf5dc\") " pod="openstack/neutron-db-sync-2nbxh" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.077776 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1b5c22d-b872-4857-b36c-5441ed9dfc9a-run-httpd\") pod \"ceilometer-0\" (UID: \"f1b5c22d-b872-4857-b36c-5441ed9dfc9a\") " pod="openstack/ceilometer-0" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.077811 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1b5c22d-b872-4857-b36c-5441ed9dfc9a-scripts\") pod \"ceilometer-0\" (UID: \"f1b5c22d-b872-4857-b36c-5441ed9dfc9a\") " pod="openstack/ceilometer-0" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.077850 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/11aeb498-3614-4aac-a381-9bf0392cf5dc-config\") pod \"neutron-db-sync-2nbxh\" (UID: \"11aeb498-3614-4aac-a381-9bf0392cf5dc\") " pod="openstack/neutron-db-sync-2nbxh" 
Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.077873 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a2e7e600-c1a4-4bda-910b-c11fe9411cc9-db-sync-config-data\") pod \"cinder-db-sync-7lfx4\" (UID: \"a2e7e600-c1a4-4bda-910b-c11fe9411cc9\") " pod="openstack/cinder-db-sync-7lfx4"
Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.077923 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2h2c\" (UniqueName: \"kubernetes.io/projected/a2e7e600-c1a4-4bda-910b-c11fe9411cc9-kube-api-access-z2h2c\") pod \"cinder-db-sync-7lfx4\" (UID: \"a2e7e600-c1a4-4bda-910b-c11fe9411cc9\") " pod="openstack/cinder-db-sync-7lfx4"
Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.129402 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-drlb4"
Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.151748 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-n7cnj"]
Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.160575 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-n7cnj"
Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.166714 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11aeb498-3614-4aac-a381-9bf0392cf5dc-combined-ca-bundle\") pod \"neutron-db-sync-2nbxh\" (UID: \"11aeb498-3614-4aac-a381-9bf0392cf5dc\") " pod="openstack/neutron-db-sync-2nbxh"
Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.177100 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/11aeb498-3614-4aac-a381-9bf0392cf5dc-config\") pod \"neutron-db-sync-2nbxh\" (UID: \"11aeb498-3614-4aac-a381-9bf0392cf5dc\") " pod="openstack/neutron-db-sync-2nbxh"
Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.177588 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-trp2w"
Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.177767 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.179537 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-7lfx4"]
Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.179829 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1b5c22d-b872-4857-b36c-5441ed9dfc9a-run-httpd\") pod \"ceilometer-0\" (UID: \"f1b5c22d-b872-4857-b36c-5441ed9dfc9a\") " pod="openstack/ceilometer-0"
Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.179870 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1b5c22d-b872-4857-b36c-5441ed9dfc9a-scripts\") pod \"ceilometer-0\" (UID: \"f1b5c22d-b872-4857-b36c-5441ed9dfc9a\") " pod="openstack/ceilometer-0"
Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.179921 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a2e7e600-c1a4-4bda-910b-c11fe9411cc9-db-sync-config-data\") pod \"cinder-db-sync-7lfx4\" (UID: \"a2e7e600-c1a4-4bda-910b-c11fe9411cc9\") " pod="openstack/cinder-db-sync-7lfx4"
Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.179947 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08c00715-2142-4aef-ae81-16ce4c5cba4d-combined-ca-bundle\") pod \"barbican-db-sync-n7cnj\" (UID: \"08c00715-2142-4aef-ae81-16ce4c5cba4d\") " pod="openstack/barbican-db-sync-n7cnj"
Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.179971 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2h2c\" (UniqueName: \"kubernetes.io/projected/a2e7e600-c1a4-4bda-910b-c11fe9411cc9-kube-api-access-z2h2c\") pod \"cinder-db-sync-7lfx4\" (UID: \"a2e7e600-c1a4-4bda-910b-c11fe9411cc9\") " pod="openstack/cinder-db-sync-7lfx4"
Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.180003 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1b5c22d-b872-4857-b36c-5441ed9dfc9a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f1b5c22d-b872-4857-b36c-5441ed9dfc9a\") " pod="openstack/ceilometer-0"
Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.180027 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f1b5c22d-b872-4857-b36c-5441ed9dfc9a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f1b5c22d-b872-4857-b36c-5441ed9dfc9a\") " pod="openstack/ceilometer-0"
Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.180044 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2e7e600-c1a4-4bda-910b-c11fe9411cc9-combined-ca-bundle\") pod \"cinder-db-sync-7lfx4\" (UID: \"a2e7e600-c1a4-4bda-910b-c11fe9411cc9\") " pod="openstack/cinder-db-sync-7lfx4"
Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.180072 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/08c00715-2142-4aef-ae81-16ce4c5cba4d-db-sync-config-data\") pod \"barbican-db-sync-n7cnj\" (UID: \"08c00715-2142-4aef-ae81-16ce4c5cba4d\") " pod="openstack/barbican-db-sync-n7cnj"
Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.180114 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2e7e600-c1a4-4bda-910b-c11fe9411cc9-scripts\") pod \"cinder-db-sync-7lfx4\" (UID: \"a2e7e600-c1a4-4bda-910b-c11fe9411cc9\") " pod="openstack/cinder-db-sync-7lfx4"
Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.180142 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a2e7e600-c1a4-4bda-910b-c11fe9411cc9-etc-machine-id\") pod \"cinder-db-sync-7lfx4\" (UID: \"a2e7e600-c1a4-4bda-910b-c11fe9411cc9\") " pod="openstack/cinder-db-sync-7lfx4"
Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.180159 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1b5c22d-b872-4857-b36c-5441ed9dfc9a-config-data\") pod \"ceilometer-0\" (UID: \"f1b5c22d-b872-4857-b36c-5441ed9dfc9a\") " pod="openstack/ceilometer-0"
Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.180179 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2e7e600-c1a4-4bda-910b-c11fe9411cc9-config-data\") pod \"cinder-db-sync-7lfx4\" (UID: \"a2e7e600-c1a4-4bda-910b-c11fe9411cc9\") " pod="openstack/cinder-db-sync-7lfx4"
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2e7e600-c1a4-4bda-910b-c11fe9411cc9-config-data\") pod \"cinder-db-sync-7lfx4\" (UID: \"a2e7e600-c1a4-4bda-910b-c11fe9411cc9\") " pod="openstack/cinder-db-sync-7lfx4" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.180197 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1b5c22d-b872-4857-b36c-5441ed9dfc9a-log-httpd\") pod \"ceilometer-0\" (UID: \"f1b5c22d-b872-4857-b36c-5441ed9dfc9a\") " pod="openstack/ceilometer-0" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.180214 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmg27\" (UniqueName: \"kubernetes.io/projected/08c00715-2142-4aef-ae81-16ce4c5cba4d-kube-api-access-nmg27\") pod \"barbican-db-sync-n7cnj\" (UID: \"08c00715-2142-4aef-ae81-16ce4c5cba4d\") " pod="openstack/barbican-db-sync-n7cnj" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.180273 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwkpf\" (UniqueName: \"kubernetes.io/projected/f1b5c22d-b872-4857-b36c-5441ed9dfc9a-kube-api-access-gwkpf\") pod \"ceilometer-0\" (UID: \"f1b5c22d-b872-4857-b36c-5441ed9dfc9a\") " pod="openstack/ceilometer-0" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.207140 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f1b5c22d-b872-4857-b36c-5441ed9dfc9a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f1b5c22d-b872-4857-b36c-5441ed9dfc9a\") " pod="openstack/ceilometer-0" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.217710 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1b5c22d-b872-4857-b36c-5441ed9dfc9a-log-httpd\") pod \"ceilometer-0\" (UID: \"f1b5c22d-b872-4857-b36c-5441ed9dfc9a\") " pod="openstack/ceilometer-0" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.220660 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a2e7e600-c1a4-4bda-910b-c11fe9411cc9-etc-machine-id\") pod \"cinder-db-sync-7lfx4\" (UID: \"a2e7e600-c1a4-4bda-910b-c11fe9411cc9\") " pod="openstack/cinder-db-sync-7lfx4" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.225129 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1b5c22d-b872-4857-b36c-5441ed9dfc9a-run-httpd\") pod \"ceilometer-0\" (UID: \"f1b5c22d-b872-4857-b36c-5441ed9dfc9a\") " pod="openstack/ceilometer-0" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.240077 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dj58b\" (UniqueName: \"kubernetes.io/projected/11aeb498-3614-4aac-a381-9bf0392cf5dc-kube-api-access-dj58b\") pod \"neutron-db-sync-2nbxh\" (UID: \"11aeb498-3614-4aac-a381-9bf0392cf5dc\") " pod="openstack/neutron-db-sync-2nbxh" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.271117 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-n7cnj"] Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.274878 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a2e7e600-c1a4-4bda-910b-c11fe9411cc9-config-data\") pod \"cinder-db-sync-7lfx4\" (UID: \"a2e7e600-c1a4-4bda-910b-c11fe9411cc9\") " pod="openstack/cinder-db-sync-7lfx4" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.289758 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwkpf\" (UniqueName: \"kubernetes.io/projected/f1b5c22d-b872-4857-b36c-5441ed9dfc9a-kube-api-access-gwkpf\") pod \"ceilometer-0\" (UID: \"f1b5c22d-b872-4857-b36c-5441ed9dfc9a\") " pod="openstack/ceilometer-0" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.311019 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a2e7e600-c1a4-4bda-910b-c11fe9411cc9-db-sync-config-data\") pod \"cinder-db-sync-7lfx4\" (UID: \"a2e7e600-c1a4-4bda-910b-c11fe9411cc9\") " pod="openstack/cinder-db-sync-7lfx4" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.314470 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1b5c22d-b872-4857-b36c-5441ed9dfc9a-config-data\") pod \"ceilometer-0\" (UID: \"f1b5c22d-b872-4857-b36c-5441ed9dfc9a\") " pod="openstack/ceilometer-0" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.326811 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1b5c22d-b872-4857-b36c-5441ed9dfc9a-scripts\") pod \"ceilometer-0\" (UID: \"f1b5c22d-b872-4857-b36c-5441ed9dfc9a\") " pod="openstack/ceilometer-0" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.328086 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2e7e600-c1a4-4bda-910b-c11fe9411cc9-combined-ca-bundle\") pod \"cinder-db-sync-7lfx4\" (UID: \"a2e7e600-c1a4-4bda-910b-c11fe9411cc9\") " pod="openstack/cinder-db-sync-7lfx4" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.331684 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08c00715-2142-4aef-ae81-16ce4c5cba4d-combined-ca-bundle\") pod \"barbican-db-sync-n7cnj\" (UID: \"08c00715-2142-4aef-ae81-16ce4c5cba4d\") " pod="openstack/barbican-db-sync-n7cnj" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.331821 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/08c00715-2142-4aef-ae81-16ce4c5cba4d-db-sync-config-data\") pod \"barbican-db-sync-n7cnj\" (UID: \"08c00715-2142-4aef-ae81-16ce4c5cba4d\") " pod="openstack/barbican-db-sync-n7cnj" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.331892 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmg27\" (UniqueName: \"kubernetes.io/projected/08c00715-2142-4aef-ae81-16ce4c5cba4d-kube-api-access-nmg27\") pod \"barbican-db-sync-n7cnj\" (UID: \"08c00715-2142-4aef-ae81-16ce4c5cba4d\") " pod="openstack/barbican-db-sync-n7cnj" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.335067 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2e7e600-c1a4-4bda-910b-c11fe9411cc9-scripts\") pod \"cinder-db-sync-7lfx4\" (UID: \"a2e7e600-c1a4-4bda-910b-c11fe9411cc9\") " pod="openstack/cinder-db-sync-7lfx4" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.342989 4806 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2h2c\" (UniqueName: \"kubernetes.io/projected/a2e7e600-c1a4-4bda-910b-c11fe9411cc9-kube-api-access-z2h2c\") pod \"cinder-db-sync-7lfx4\" (UID: \"a2e7e600-c1a4-4bda-910b-c11fe9411cc9\") " pod="openstack/cinder-db-sync-7lfx4" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.344022 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1b5c22d-b872-4857-b36c-5441ed9dfc9a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f1b5c22d-b872-4857-b36c-5441ed9dfc9a\") " pod="openstack/ceilometer-0" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.351098 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08c00715-2142-4aef-ae81-16ce4c5cba4d-combined-ca-bundle\") pod \"barbican-db-sync-n7cnj\" (UID: \"08c00715-2142-4aef-ae81-16ce4c5cba4d\") " pod="openstack/barbican-db-sync-n7cnj" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.365092 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmg27\" (UniqueName: \"kubernetes.io/projected/08c00715-2142-4aef-ae81-16ce4c5cba4d-kube-api-access-nmg27\") pod \"barbican-db-sync-n7cnj\" (UID: \"08c00715-2142-4aef-ae81-16ce4c5cba4d\") " pod="openstack/barbican-db-sync-n7cnj" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.375062 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-2nbxh" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.376488 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/08c00715-2142-4aef-ae81-16ce4c5cba4d-db-sync-config-data\") pod \"barbican-db-sync-n7cnj\" (UID: \"08c00715-2142-4aef-ae81-16ce4c5cba4d\") " pod="openstack/barbican-db-sync-n7cnj" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.388125 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-dh7st"] Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.389745 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7lfx4" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.476677 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-zjmcx"] Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.478455 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-zjmcx" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.496258 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-zjmcx"] Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.522739 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-bqhxc"] Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.524780 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-bqhxc" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.529802 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.530034 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-8vrnm" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.530713 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.537378 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-bqhxc"] Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.538501 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7463281f-ab54-4849-861d-045b2a1a848c-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-zjmcx\" (UID: \"7463281f-ab54-4849-861d-045b2a1a848c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zjmcx" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.538587 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7463281f-ab54-4849-861d-045b2a1a848c-config\") pod \"dnsmasq-dns-58dd9ff6bc-zjmcx\" (UID: \"7463281f-ab54-4849-861d-045b2a1a848c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zjmcx" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.538626 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7463281f-ab54-4849-861d-045b2a1a848c-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-zjmcx\" (UID: \"7463281f-ab54-4849-861d-045b2a1a848c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zjmcx" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.538647 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7463281f-ab54-4849-861d-045b2a1a848c-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-zjmcx\" (UID: \"7463281f-ab54-4849-861d-045b2a1a848c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zjmcx" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.538665 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhcr5\" (UniqueName: \"kubernetes.io/projected/7463281f-ab54-4849-861d-045b2a1a848c-kube-api-access-rhcr5\") pod \"dnsmasq-dns-58dd9ff6bc-zjmcx\" (UID: \"7463281f-ab54-4849-861d-045b2a1a848c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zjmcx" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.538752 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7463281f-ab54-4849-861d-045b2a1a848c-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-zjmcx\" (UID: \"7463281f-ab54-4849-861d-045b2a1a848c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zjmcx" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.628578 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.628721 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-n7cnj" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.629152 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-pxfdb" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.639918 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x7js\" (UniqueName: \"kubernetes.io/projected/a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1-kube-api-access-9x7js\") pod \"placement-db-sync-bqhxc\" (UID: \"a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1\") " pod="openstack/placement-db-sync-bqhxc" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.639976 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7463281f-ab54-4849-861d-045b2a1a848c-config\") pod \"dnsmasq-dns-58dd9ff6bc-zjmcx\" (UID: \"7463281f-ab54-4849-861d-045b2a1a848c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zjmcx" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.640007 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1-combined-ca-bundle\") pod \"placement-db-sync-bqhxc\" (UID: \"a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1\") " pod="openstack/placement-db-sync-bqhxc" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.640029 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7463281f-ab54-4849-861d-045b2a1a848c-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-zjmcx\" (UID: \"7463281f-ab54-4849-861d-045b2a1a848c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zjmcx" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.640049 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7463281f-ab54-4849-861d-045b2a1a848c-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-zjmcx\" (UID: \"7463281f-ab54-4849-861d-045b2a1a848c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zjmcx" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.640065 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhcr5\" (UniqueName: \"kubernetes.io/projected/7463281f-ab54-4849-861d-045b2a1a848c-kube-api-access-rhcr5\") pod \"dnsmasq-dns-58dd9ff6bc-zjmcx\" (UID: \"7463281f-ab54-4849-861d-045b2a1a848c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zjmcx" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.640128 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1-config-data\") pod \"placement-db-sync-bqhxc\" (UID: \"a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1\") " pod="openstack/placement-db-sync-bqhxc" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.640145 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1-logs\") pod \"placement-db-sync-bqhxc\" (UID: \"a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1\") " pod="openstack/placement-db-sync-bqhxc" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.640178 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/7463281f-ab54-4849-861d-045b2a1a848c-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-zjmcx\" (UID: \"7463281f-ab54-4849-861d-045b2a1a848c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zjmcx" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.640199 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1-scripts\") pod \"placement-db-sync-bqhxc\" (UID: \"a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1\") " pod="openstack/placement-db-sync-bqhxc" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.640219 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7463281f-ab54-4849-861d-045b2a1a848c-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-zjmcx\" (UID: \"7463281f-ab54-4849-861d-045b2a1a848c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zjmcx" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.641019 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7463281f-ab54-4849-861d-045b2a1a848c-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-zjmcx\" (UID: \"7463281f-ab54-4849-861d-045b2a1a848c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zjmcx" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.641558 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7463281f-ab54-4849-861d-045b2a1a848c-config\") pod \"dnsmasq-dns-58dd9ff6bc-zjmcx\" (UID: \"7463281f-ab54-4849-861d-045b2a1a848c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zjmcx" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.642057 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7463281f-ab54-4849-861d-045b2a1a848c-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-zjmcx\" (UID: \"7463281f-ab54-4849-861d-045b2a1a848c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zjmcx" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.642443 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7463281f-ab54-4849-861d-045b2a1a848c-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-zjmcx\" (UID: \"7463281f-ab54-4849-861d-045b2a1a848c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zjmcx" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.642604 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7463281f-ab54-4849-861d-045b2a1a848c-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-zjmcx\" (UID: \"7463281f-ab54-4849-861d-045b2a1a848c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zjmcx" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.690364 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhcr5\" (UniqueName: \"kubernetes.io/projected/7463281f-ab54-4849-861d-045b2a1a848c-kube-api-access-rhcr5\") pod \"dnsmasq-dns-58dd9ff6bc-zjmcx\" (UID: \"7463281f-ab54-4849-861d-045b2a1a848c\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-zjmcx" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.741869 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/291eadf5-e50c-453d-aaf5-5fe457dae267-ovsdbserver-sb\") pod 
\"291eadf5-e50c-453d-aaf5-5fe457dae267\" (UID: \"291eadf5-e50c-453d-aaf5-5fe457dae267\") " Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.741921 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/291eadf5-e50c-453d-aaf5-5fe457dae267-config\") pod \"291eadf5-e50c-453d-aaf5-5fe457dae267\" (UID: \"291eadf5-e50c-453d-aaf5-5fe457dae267\") " Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.741973 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6js4\" (UniqueName: \"kubernetes.io/projected/291eadf5-e50c-453d-aaf5-5fe457dae267-kube-api-access-t6js4\") pod \"291eadf5-e50c-453d-aaf5-5fe457dae267\" (UID: \"291eadf5-e50c-453d-aaf5-5fe457dae267\") " Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.742108 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/291eadf5-e50c-453d-aaf5-5fe457dae267-dns-svc\") pod \"291eadf5-e50c-453d-aaf5-5fe457dae267\" (UID: \"291eadf5-e50c-453d-aaf5-5fe457dae267\") " Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.742263 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/291eadf5-e50c-453d-aaf5-5fe457dae267-ovsdbserver-nb\") pod \"291eadf5-e50c-453d-aaf5-5fe457dae267\" (UID: \"291eadf5-e50c-453d-aaf5-5fe457dae267\") " Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.742486 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1-combined-ca-bundle\") pod \"placement-db-sync-bqhxc\" (UID: \"a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1\") " pod="openstack/placement-db-sync-bqhxc" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.742579 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1-config-data\") pod \"placement-db-sync-bqhxc\" (UID: \"a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1\") " pod="openstack/placement-db-sync-bqhxc" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.742599 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1-logs\") pod \"placement-db-sync-bqhxc\" (UID: \"a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1\") " pod="openstack/placement-db-sync-bqhxc" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.742642 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1-scripts\") pod \"placement-db-sync-bqhxc\" (UID: \"a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1\") " pod="openstack/placement-db-sync-bqhxc" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.742704 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x7js\" (UniqueName: \"kubernetes.io/projected/a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1-kube-api-access-9x7js\") pod \"placement-db-sync-bqhxc\" (UID: \"a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1\") " pod="openstack/placement-db-sync-bqhxc" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.743298 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1-logs\") pod \"placement-db-sync-bqhxc\" (UID: \"a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1\") " pod="openstack/placement-db-sync-bqhxc" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.746625 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1-config-data\") pod \"placement-db-sync-bqhxc\" (UID: \"a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1\") " pod="openstack/placement-db-sync-bqhxc" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.758939 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1-combined-ca-bundle\") pod \"placement-db-sync-bqhxc\" (UID: \"a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1\") " pod="openstack/placement-db-sync-bqhxc" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.774935 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/291eadf5-e50c-453d-aaf5-5fe457dae267-kube-api-access-t6js4" (OuterVolumeSpecName: "kube-api-access-t6js4") pod "291eadf5-e50c-453d-aaf5-5fe457dae267" (UID: "291eadf5-e50c-453d-aaf5-5fe457dae267"). InnerVolumeSpecName "kube-api-access-t6js4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.781822 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1-scripts\") pod \"placement-db-sync-bqhxc\" (UID: \"a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1\") " pod="openstack/placement-db-sync-bqhxc" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.801246 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x7js\" (UniqueName: \"kubernetes.io/projected/a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1-kube-api-access-9x7js\") pod \"placement-db-sync-bqhxc\" (UID: \"a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1\") " pod="openstack/placement-db-sync-bqhxc" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.815828 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-zjmcx" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.835145 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/291eadf5-e50c-453d-aaf5-5fe457dae267-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "291eadf5-e50c-453d-aaf5-5fe457dae267" (UID: "291eadf5-e50c-453d-aaf5-5fe457dae267"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.846747 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/291eadf5-e50c-453d-aaf5-5fe457dae267-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.847025 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6js4\" (UniqueName: \"kubernetes.io/projected/291eadf5-e50c-453d-aaf5-5fe457dae267-kube-api-access-t6js4\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.848548 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/291eadf5-e50c-453d-aaf5-5fe457dae267-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "291eadf5-e50c-453d-aaf5-5fe457dae267" (UID: "291eadf5-e50c-453d-aaf5-5fe457dae267"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.858035 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-bqhxc" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.867686 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-dh7st"] Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.891168 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-7qnjn"] Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.896412 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/291eadf5-e50c-453d-aaf5-5fe457dae267-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "291eadf5-e50c-453d-aaf5-5fe457dae267" (UID: "291eadf5-e50c-453d-aaf5-5fe457dae267"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.914687 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/291eadf5-e50c-453d-aaf5-5fe457dae267-config" (OuterVolumeSpecName: "config") pod "291eadf5-e50c-453d-aaf5-5fe457dae267" (UID: "291eadf5-e50c-453d-aaf5-5fe457dae267"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.950259 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/291eadf5-e50c-453d-aaf5-5fe457dae267-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.950963 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/291eadf5-e50c-453d-aaf5-5fe457dae267-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:54 crc kubenswrapper[4806]: I1125 15:14:54.951051 4806 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/291eadf5-e50c-453d-aaf5-5fe457dae267-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:55 crc kubenswrapper[4806]: I1125 15:14:55.134568 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-drlb4"] Nov 25 15:14:55 crc kubenswrapper[4806]: I1125 15:14:55.368126 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-dh7st" event={"ID":"8aace553-74e7-4dd9-83d6-3c565a18a3f9","Type":"ContainerStarted","Data":"59c16c4ab16d0129d420f433f9bf310583999cb1e74cdd884985ee485d2f698a"} Nov 25 15:14:55 crc kubenswrapper[4806]: I1125 15:14:55.373724 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7qnjn" event={"ID":"e0ceb758-17b6-4a0e-9851-05d1ef8a8011","Type":"ContainerStarted","Data":"d60798c78348c6fc968800f380ddbae5f38ad98c3368ccef5658e90ee2ec8661"} Nov 25 15:14:55 crc kubenswrapper[4806]: I1125 15:14:55.396982 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-2nbxh"] Nov 25 15:14:55 crc kubenswrapper[4806]: I1125 15:14:55.399999 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-pxfdb" Nov 25 15:14:55 crc kubenswrapper[4806]: I1125 15:14:55.403204 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-pxfdb" event={"ID":"291eadf5-e50c-453d-aaf5-5fe457dae267","Type":"ContainerDied","Data":"adb469f98b7215665ee71b941e37cbb224442fea665edb7225dd98c1b0b4cb68"} Nov 25 15:14:55 crc kubenswrapper[4806]: I1125 15:14:55.403287 4806 scope.go:117] "RemoveContainer" containerID="cdd6a05c85039d7ad6147b2ac34e0a0d1ac12892e80d251e30c81fe0e810056d" Nov 25 15:14:55 crc kubenswrapper[4806]: I1125 15:14:55.411904 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-drlb4" event={"ID":"c2503ad9-21ed-44c9-ae5a-25307c751865","Type":"ContainerStarted","Data":"0df96551f7544682e32e2cfb8cee323c6ae5223a7c0e1683a576d619965104d5"} Nov 25 15:14:55 crc kubenswrapper[4806]: I1125 15:14:55.418699 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-7lfx4"] Nov 25 15:14:55 crc kubenswrapper[4806]: I1125 15:14:55.460688 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-pxfdb"] Nov 25 15:14:55 crc kubenswrapper[4806]: I1125 15:14:55.470677 4806 scope.go:117] "RemoveContainer" containerID="b7e4ea7871c6858ccfa35f358a16e2a49f824439a48893e21369dc071b798dc9" Nov 25 15:14:55 crc kubenswrapper[4806]: I1125 15:14:55.478238 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-pxfdb"] Nov 25 15:14:55 crc kubenswrapper[4806]: I1125 15:14:55.517061 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-n7cnj"] Nov 25 15:14:55 crc kubenswrapper[4806]: I1125 15:14:55.534152 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:14:55 crc kubenswrapper[4806]: I1125 15:14:55.705998 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-bqhxc"] Nov 25 15:14:55 crc kubenswrapper[4806]: I1125 15:14:55.750990 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-zjmcx"] Nov 25 15:14:55 crc kubenswrapper[4806]: W1125 15:14:55.786037 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda58a488e_b4cb_42cb_8bc4_4a467bbb5dd1.slice/crio-5a098e6d1406f661be9ed5dd7dbbeaae28df11c6f61cc2c74926594649e6f460 WatchSource:0}: Error finding container 5a098e6d1406f661be9ed5dd7dbbeaae28df11c6f61cc2c74926594649e6f460: Status 404 returned error can't find the container with id 5a098e6d1406f661be9ed5dd7dbbeaae28df11c6f61cc2c74926594649e6f460 Nov 25 15:14:56 crc kubenswrapper[4806]: I1125 15:14:56.106171 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="291eadf5-e50c-453d-aaf5-5fe457dae267" path="/var/lib/kubelet/pods/291eadf5-e50c-453d-aaf5-5fe457dae267/volumes" Nov 25 15:14:56 crc kubenswrapper[4806]: I1125 15:14:56.458528 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7qnjn" event={"ID":"e0ceb758-17b6-4a0e-9851-05d1ef8a8011","Type":"ContainerStarted","Data":"d2e8d957dc50def02fcf69ce74a661d19b9438bf2106f3a93657490e54d7ca52"} Nov 25 15:14:56 crc kubenswrapper[4806]: I1125 15:14:56.504052 4806 generic.go:334] "Generic (PLEG): container finished" podID="7463281f-ab54-4849-861d-045b2a1a848c" containerID="45973edd1134e290169ffc8244fd5ef4a5d10ffa8983b04ced0174cd9c2ebfae" exitCode=0 Nov 25 15:14:56 crc 
kubenswrapper[4806]: I1125 15:14:56.504186 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-zjmcx" event={"ID":"7463281f-ab54-4849-861d-045b2a1a848c","Type":"ContainerDied","Data":"45973edd1134e290169ffc8244fd5ef4a5d10ffa8983b04ced0174cd9c2ebfae"} Nov 25 15:14:56 crc kubenswrapper[4806]: I1125 15:14:56.504216 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-zjmcx" event={"ID":"7463281f-ab54-4849-861d-045b2a1a848c","Type":"ContainerStarted","Data":"84bae419bdf8c6d52d5d6f280eb48330a0161f2a9fb11ef5129985634fee7ae3"} Nov 25 15:14:56 crc kubenswrapper[4806]: I1125 15:14:56.521689 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-7qnjn" podStartSLOduration=3.521671187 podStartE2EDuration="3.521671187s" podCreationTimestamp="2025-11-25 15:14:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:14:56.50456551 +0000 UTC m=+1329.156707921" watchObservedRunningTime="2025-11-25 15:14:56.521671187 +0000 UTC m=+1329.173813588" Nov 25 15:14:56 crc kubenswrapper[4806]: I1125 15:14:56.532114 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f1b5c22d-b872-4857-b36c-5441ed9dfc9a","Type":"ContainerStarted","Data":"45738b562eba55c1fd17715c5bccb9dae6c74b8c79d040ee3674498a4ae18e94"} Nov 25 15:14:56 crc kubenswrapper[4806]: I1125 15:14:56.662532 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-bqhxc" event={"ID":"a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1","Type":"ContainerStarted","Data":"5a098e6d1406f661be9ed5dd7dbbeaae28df11c6f61cc2c74926594649e6f460"} Nov 25 15:14:56 crc kubenswrapper[4806]: I1125 15:14:56.705807 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-n7cnj" event={"ID":"08c00715-2142-4aef-ae81-16ce4c5cba4d","Type":"ContainerStarted","Data":"7926d9558a9cb1051bd34810b8ec00767fe819ba02188c8fe90e280733436516"} Nov 25 15:14:56 crc kubenswrapper[4806]: I1125 15:14:56.718662 4806 generic.go:334] "Generic (PLEG): container finished" podID="8aace553-74e7-4dd9-83d6-3c565a18a3f9" containerID="5332b36fbe1a4738b409b7ddcbe58b7043e3bed7b3d8ab1a368a32856f36931c" exitCode=0 Nov 25 15:14:56 crc kubenswrapper[4806]: I1125 15:14:56.718761 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-dh7st" event={"ID":"8aace553-74e7-4dd9-83d6-3c565a18a3f9","Type":"ContainerDied","Data":"5332b36fbe1a4738b409b7ddcbe58b7043e3bed7b3d8ab1a368a32856f36931c"} Nov 25 15:14:56 crc kubenswrapper[4806]: I1125 15:14:56.807616 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-2nbxh" event={"ID":"11aeb498-3614-4aac-a381-9bf0392cf5dc","Type":"ContainerStarted","Data":"75770c80babeeaf1288bbb487b06acbdab84838b6b68416b9d71444427565ed5"} Nov 25 15:14:56 crc kubenswrapper[4806]: I1125 15:14:56.808156 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-2nbxh" event={"ID":"11aeb498-3614-4aac-a381-9bf0392cf5dc","Type":"ContainerStarted","Data":"32c7c20aa28fc9ac181486c9b4208af5c2d88463e98816af4b283b5a9ce19b53"} Nov 25 15:14:56 crc kubenswrapper[4806]: I1125 15:14:56.831637 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7lfx4" 
event={"ID":"a2e7e600-c1a4-4bda-910b-c11fe9411cc9","Type":"ContainerStarted","Data":"83130513a7ececfea63da7746ce67fe88ee9c313b8642698d7ff2e80a6e98ac4"} Nov 25 15:14:56 crc kubenswrapper[4806]: I1125 15:14:56.898285 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-2nbxh" podStartSLOduration=3.898254915 podStartE2EDuration="3.898254915s" podCreationTimestamp="2025-11-25 15:14:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:14:56.854129618 +0000 UTC m=+1329.506272029" watchObservedRunningTime="2025-11-25 15:14:56.898254915 +0000 UTC m=+1329.550397316" Nov 25 15:14:56 crc kubenswrapper[4806]: I1125 15:14:56.925905 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:14:57 crc kubenswrapper[4806]: I1125 15:14:57.318605 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-dh7st" Nov 25 15:14:57 crc kubenswrapper[4806]: I1125 15:14:57.456929 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8aace553-74e7-4dd9-83d6-3c565a18a3f9-config\") pod \"8aace553-74e7-4dd9-83d6-3c565a18a3f9\" (UID: \"8aace553-74e7-4dd9-83d6-3c565a18a3f9\") " Nov 25 15:14:57 crc kubenswrapper[4806]: I1125 15:14:57.457003 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8aace553-74e7-4dd9-83d6-3c565a18a3f9-dns-swift-storage-0\") pod \"8aace553-74e7-4dd9-83d6-3c565a18a3f9\" (UID: \"8aace553-74e7-4dd9-83d6-3c565a18a3f9\") " Nov 25 15:14:57 crc kubenswrapper[4806]: I1125 15:14:57.457043 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8aace553-74e7-4dd9-83d6-3c565a18a3f9-ovsdbserver-sb\") pod \"8aace553-74e7-4dd9-83d6-3c565a18a3f9\" (UID: \"8aace553-74e7-4dd9-83d6-3c565a18a3f9\") " Nov 25 15:14:57 crc kubenswrapper[4806]: I1125 15:14:57.457182 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8aace553-74e7-4dd9-83d6-3c565a18a3f9-dns-svc\") pod \"8aace553-74e7-4dd9-83d6-3c565a18a3f9\" (UID: \"8aace553-74e7-4dd9-83d6-3c565a18a3f9\") " Nov 25 15:14:57 crc kubenswrapper[4806]: I1125 15:14:57.457200 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8aace553-74e7-4dd9-83d6-3c565a18a3f9-ovsdbserver-nb\") pod \"8aace553-74e7-4dd9-83d6-3c565a18a3f9\" (UID: \"8aace553-74e7-4dd9-83d6-3c565a18a3f9\") " Nov 25 15:14:57 crc kubenswrapper[4806]: I1125 15:14:57.457371 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7wpx\" (UniqueName: \"kubernetes.io/projected/8aace553-74e7-4dd9-83d6-3c565a18a3f9-kube-api-access-p7wpx\") pod \"8aace553-74e7-4dd9-83d6-3c565a18a3f9\" (UID: \"8aace553-74e7-4dd9-83d6-3c565a18a3f9\") " Nov 25 15:14:57 crc kubenswrapper[4806]: I1125 15:14:57.475549 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8aace553-74e7-4dd9-83d6-3c565a18a3f9-kube-api-access-p7wpx" (OuterVolumeSpecName: "kube-api-access-p7wpx") pod "8aace553-74e7-4dd9-83d6-3c565a18a3f9" (UID: "8aace553-74e7-4dd9-83d6-3c565a18a3f9"). 
InnerVolumeSpecName "kube-api-access-p7wpx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:14:57 crc kubenswrapper[4806]: I1125 15:14:57.487371 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8aace553-74e7-4dd9-83d6-3c565a18a3f9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8aace553-74e7-4dd9-83d6-3c565a18a3f9" (UID: "8aace553-74e7-4dd9-83d6-3c565a18a3f9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:14:57 crc kubenswrapper[4806]: I1125 15:14:57.488469 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8aace553-74e7-4dd9-83d6-3c565a18a3f9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8aace553-74e7-4dd9-83d6-3c565a18a3f9" (UID: "8aace553-74e7-4dd9-83d6-3c565a18a3f9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:14:57 crc kubenswrapper[4806]: I1125 15:14:57.533887 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8aace553-74e7-4dd9-83d6-3c565a18a3f9-config" (OuterVolumeSpecName: "config") pod "8aace553-74e7-4dd9-83d6-3c565a18a3f9" (UID: "8aace553-74e7-4dd9-83d6-3c565a18a3f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:14:57 crc kubenswrapper[4806]: I1125 15:14:57.535685 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8aace553-74e7-4dd9-83d6-3c565a18a3f9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8aace553-74e7-4dd9-83d6-3c565a18a3f9" (UID: "8aace553-74e7-4dd9-83d6-3c565a18a3f9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:14:57 crc kubenswrapper[4806]: I1125 15:14:57.559948 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7wpx\" (UniqueName: \"kubernetes.io/projected/8aace553-74e7-4dd9-83d6-3c565a18a3f9-kube-api-access-p7wpx\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:57 crc kubenswrapper[4806]: I1125 15:14:57.560000 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8aace553-74e7-4dd9-83d6-3c565a18a3f9-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:57 crc kubenswrapper[4806]: I1125 15:14:57.560045 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8aace553-74e7-4dd9-83d6-3c565a18a3f9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:57 crc kubenswrapper[4806]: I1125 15:14:57.560056 4806 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8aace553-74e7-4dd9-83d6-3c565a18a3f9-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:57 crc kubenswrapper[4806]: I1125 15:14:57.560068 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8aace553-74e7-4dd9-83d6-3c565a18a3f9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:57 crc kubenswrapper[4806]: I1125 15:14:57.568113 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8aace553-74e7-4dd9-83d6-3c565a18a3f9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8aace553-74e7-4dd9-83d6-3c565a18a3f9" (UID: "8aace553-74e7-4dd9-83d6-3c565a18a3f9"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:14:57 crc kubenswrapper[4806]: I1125 15:14:57.664840 4806 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8aace553-74e7-4dd9-83d6-3c565a18a3f9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:57 crc kubenswrapper[4806]: I1125 15:14:57.857682 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-zjmcx" event={"ID":"7463281f-ab54-4849-861d-045b2a1a848c","Type":"ContainerStarted","Data":"8392c973e8d18f1468177a6b9ac997214763d271e41a0a5fd0175e9e18464d06"} Nov 25 15:14:57 crc kubenswrapper[4806]: I1125 15:14:57.857782 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58dd9ff6bc-zjmcx" Nov 25 15:14:57 crc kubenswrapper[4806]: I1125 15:14:57.863080 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-dh7st" event={"ID":"8aace553-74e7-4dd9-83d6-3c565a18a3f9","Type":"ContainerDied","Data":"59c16c4ab16d0129d420f433f9bf310583999cb1e74cdd884985ee485d2f698a"} Nov 25 15:14:57 crc kubenswrapper[4806]: I1125 15:14:57.863139 4806 scope.go:117] "RemoveContainer" containerID="5332b36fbe1a4738b409b7ddcbe58b7043e3bed7b3d8ab1a368a32856f36931c" Nov 25 15:14:57 crc kubenswrapper[4806]: I1125 15:14:57.864089 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-dh7st" Nov 25 15:14:57 crc kubenswrapper[4806]: I1125 15:14:57.895585 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58dd9ff6bc-zjmcx" podStartSLOduration=3.895549033 podStartE2EDuration="3.895549033s" podCreationTimestamp="2025-11-25 15:14:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:14:57.885546028 +0000 UTC m=+1330.537688459" watchObservedRunningTime="2025-11-25 15:14:57.895549033 +0000 UTC m=+1330.547691464" Nov 25 15:14:58 crc kubenswrapper[4806]: I1125 15:14:58.039455 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-dh7st"] Nov 25 15:14:58 crc kubenswrapper[4806]: I1125 15:14:58.050211 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-dh7st"] Nov 25 15:14:58 crc kubenswrapper[4806]: I1125 15:14:58.126045 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8aace553-74e7-4dd9-83d6-3c565a18a3f9" path="/var/lib/kubelet/pods/8aace553-74e7-4dd9-83d6-3c565a18a3f9/volumes" Nov 25 15:15:00 crc kubenswrapper[4806]: I1125 15:15:00.148749 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401395-8j2s9"] Nov 25 15:15:00 crc kubenswrapper[4806]: E1125 15:15:00.149704 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8aace553-74e7-4dd9-83d6-3c565a18a3f9" containerName="init" Nov 25 15:15:00 crc kubenswrapper[4806]: I1125 15:15:00.149719 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="8aace553-74e7-4dd9-83d6-3c565a18a3f9" containerName="init" Nov 25 15:15:00 crc kubenswrapper[4806]: E1125 15:15:00.149741 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="291eadf5-e50c-453d-aaf5-5fe457dae267" containerName="init" Nov 25 15:15:00 crc kubenswrapper[4806]: I1125 15:15:00.149747 4806 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="291eadf5-e50c-453d-aaf5-5fe457dae267" containerName="init" Nov 25 15:15:00 crc kubenswrapper[4806]: E1125 15:15:00.149767 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="291eadf5-e50c-453d-aaf5-5fe457dae267" containerName="dnsmasq-dns" Nov 25 15:15:00 crc kubenswrapper[4806]: I1125 15:15:00.149772 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="291eadf5-e50c-453d-aaf5-5fe457dae267" containerName="dnsmasq-dns" Nov 25 15:15:00 crc kubenswrapper[4806]: I1125 15:15:00.149965 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="8aace553-74e7-4dd9-83d6-3c565a18a3f9" containerName="init" Nov 25 15:15:00 crc kubenswrapper[4806]: I1125 15:15:00.149978 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="291eadf5-e50c-453d-aaf5-5fe457dae267" containerName="dnsmasq-dns" Nov 25 15:15:00 crc kubenswrapper[4806]: I1125 15:15:00.150673 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-8j2s9" Nov 25 15:15:00 crc kubenswrapper[4806]: I1125 15:15:00.158376 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 15:15:00 crc kubenswrapper[4806]: I1125 15:15:00.158401 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 15:15:00 crc kubenswrapper[4806]: I1125 15:15:00.184791 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401395-8j2s9"] Nov 25 15:15:00 crc kubenswrapper[4806]: I1125 15:15:00.239882 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/98013fa5-ca9f-4800-a63d-be400f825cfa-secret-volume\") pod \"collect-profiles-29401395-8j2s9\" (UID: \"98013fa5-ca9f-4800-a63d-be400f825cfa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-8j2s9" Nov 25 15:15:00 crc kubenswrapper[4806]: I1125 15:15:00.253617 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf2vx\" (UniqueName: \"kubernetes.io/projected/98013fa5-ca9f-4800-a63d-be400f825cfa-kube-api-access-vf2vx\") pod \"collect-profiles-29401395-8j2s9\" (UID: \"98013fa5-ca9f-4800-a63d-be400f825cfa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-8j2s9" Nov 25 15:15:00 crc kubenswrapper[4806]: I1125 15:15:00.254199 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98013fa5-ca9f-4800-a63d-be400f825cfa-config-volume\") pod \"collect-profiles-29401395-8j2s9\" (UID: \"98013fa5-ca9f-4800-a63d-be400f825cfa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-8j2s9" Nov 25 15:15:00 crc kubenswrapper[4806]: I1125 15:15:00.356038 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/98013fa5-ca9f-4800-a63d-be400f825cfa-secret-volume\") pod \"collect-profiles-29401395-8j2s9\" (UID: \"98013fa5-ca9f-4800-a63d-be400f825cfa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-8j2s9" Nov 25 15:15:00 crc kubenswrapper[4806]: I1125 15:15:00.356405 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-vf2vx\" (UniqueName: \"kubernetes.io/projected/98013fa5-ca9f-4800-a63d-be400f825cfa-kube-api-access-vf2vx\") pod \"collect-profiles-29401395-8j2s9\" (UID: \"98013fa5-ca9f-4800-a63d-be400f825cfa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-8j2s9" Nov 25 15:15:00 crc kubenswrapper[4806]: I1125 15:15:00.356519 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98013fa5-ca9f-4800-a63d-be400f825cfa-config-volume\") pod \"collect-profiles-29401395-8j2s9\" (UID: \"98013fa5-ca9f-4800-a63d-be400f825cfa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-8j2s9" Nov 25 15:15:00 crc kubenswrapper[4806]: I1125 15:15:00.357536 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98013fa5-ca9f-4800-a63d-be400f825cfa-config-volume\") pod \"collect-profiles-29401395-8j2s9\" (UID: \"98013fa5-ca9f-4800-a63d-be400f825cfa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-8j2s9" Nov 25 15:15:00 crc kubenswrapper[4806]: I1125 15:15:00.367913 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/98013fa5-ca9f-4800-a63d-be400f825cfa-secret-volume\") pod \"collect-profiles-29401395-8j2s9\" (UID: \"98013fa5-ca9f-4800-a63d-be400f825cfa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-8j2s9" Nov 25 15:15:00 crc kubenswrapper[4806]: I1125 15:15:00.383307 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf2vx\" (UniqueName: \"kubernetes.io/projected/98013fa5-ca9f-4800-a63d-be400f825cfa-kube-api-access-vf2vx\") pod \"collect-profiles-29401395-8j2s9\" (UID: \"98013fa5-ca9f-4800-a63d-be400f825cfa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-8j2s9" Nov 25 15:15:00 crc kubenswrapper[4806]: I1125 15:15:00.493023 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-8j2s9" Nov 25 15:15:01 crc kubenswrapper[4806]: I1125 15:15:01.143820 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401395-8j2s9"] Nov 25 15:15:01 crc kubenswrapper[4806]: I1125 15:15:01.957717 4806 generic.go:334] "Generic (PLEG): container finished" podID="e0ceb758-17b6-4a0e-9851-05d1ef8a8011" containerID="d2e8d957dc50def02fcf69ce74a661d19b9438bf2106f3a93657490e54d7ca52" exitCode=0 Nov 25 15:15:01 crc kubenswrapper[4806]: I1125 15:15:01.957991 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7qnjn" event={"ID":"e0ceb758-17b6-4a0e-9851-05d1ef8a8011","Type":"ContainerDied","Data":"d2e8d957dc50def02fcf69ce74a661d19b9438bf2106f3a93657490e54d7ca52"} Nov 25 15:15:01 crc kubenswrapper[4806]: I1125 15:15:01.970493 4806 generic.go:334] "Generic (PLEG): container finished" podID="98013fa5-ca9f-4800-a63d-be400f825cfa" containerID="92b1560a0160a0d3cf2c66a71c51cada54dd161ae8d2df4d754c10b24706499f" exitCode=0 Nov 25 15:15:01 crc kubenswrapper[4806]: I1125 15:15:01.970538 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-8j2s9" event={"ID":"98013fa5-ca9f-4800-a63d-be400f825cfa","Type":"ContainerDied","Data":"92b1560a0160a0d3cf2c66a71c51cada54dd161ae8d2df4d754c10b24706499f"} Nov 25 15:15:01 crc kubenswrapper[4806]: I1125 15:15:01.970562 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-8j2s9" event={"ID":"98013fa5-ca9f-4800-a63d-be400f825cfa","Type":"ContainerStarted","Data":"89d076d66933052252a27a3573a9d826be213b89107150e8b168000c5efd1988"} Nov 25 15:15:04 crc kubenswrapper[4806]: I1125 15:15:04.818643 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58dd9ff6bc-zjmcx" Nov 25 15:15:04 crc kubenswrapper[4806]: I1125 15:15:04.909060 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-hk7c8"] Nov 25 15:15:04 crc kubenswrapper[4806]: I1125 15:15:04.909393 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-hk7c8" podUID="5e6e7521-889f-47b4-84d3-0437b1a844f2" containerName="dnsmasq-dns" containerID="cri-o://e7da21c825b3a79732a3eb3454f858319557707db5281002204c2f7990df1bc2" gracePeriod=10 Nov 25 15:15:06 crc kubenswrapper[4806]: I1125 15:15:06.017886 4806 generic.go:334] "Generic (PLEG): container finished" podID="5e6e7521-889f-47b4-84d3-0437b1a844f2" containerID="e7da21c825b3a79732a3eb3454f858319557707db5281002204c2f7990df1bc2" exitCode=0 Nov 25 15:15:06 crc kubenswrapper[4806]: I1125 15:15:06.017947 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-hk7c8" event={"ID":"5e6e7521-889f-47b4-84d3-0437b1a844f2","Type":"ContainerDied","Data":"e7da21c825b3a79732a3eb3454f858319557707db5281002204c2f7990df1bc2"} Nov 25 15:15:07 crc kubenswrapper[4806]: I1125 15:15:07.660183 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-8j2s9" Nov 25 15:15:07 crc kubenswrapper[4806]: I1125 15:15:07.666858 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-7qnjn" Nov 25 15:15:07 crc kubenswrapper[4806]: I1125 15:15:07.730986 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0ceb758-17b6-4a0e-9851-05d1ef8a8011-scripts\") pod \"e0ceb758-17b6-4a0e-9851-05d1ef8a8011\" (UID: \"e0ceb758-17b6-4a0e-9851-05d1ef8a8011\") " Nov 25 15:15:07 crc kubenswrapper[4806]: I1125 15:15:07.731128 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/98013fa5-ca9f-4800-a63d-be400f825cfa-secret-volume\") pod \"98013fa5-ca9f-4800-a63d-be400f825cfa\" (UID: \"98013fa5-ca9f-4800-a63d-be400f825cfa\") " Nov 25 15:15:07 crc kubenswrapper[4806]: I1125 15:15:07.731250 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98013fa5-ca9f-4800-a63d-be400f825cfa-config-volume\") pod \"98013fa5-ca9f-4800-a63d-be400f825cfa\" (UID: \"98013fa5-ca9f-4800-a63d-be400f825cfa\") " Nov 25 15:15:07 crc kubenswrapper[4806]: I1125 15:15:07.731275 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e0ceb758-17b6-4a0e-9851-05d1ef8a8011-credential-keys\") pod \"e0ceb758-17b6-4a0e-9851-05d1ef8a8011\" (UID: \"e0ceb758-17b6-4a0e-9851-05d1ef8a8011\") " Nov 25 15:15:07 crc kubenswrapper[4806]: I1125 15:15:07.731337 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0ceb758-17b6-4a0e-9851-05d1ef8a8011-combined-ca-bundle\") pod \"e0ceb758-17b6-4a0e-9851-05d1ef8a8011\" (UID: \"e0ceb758-17b6-4a0e-9851-05d1ef8a8011\") " Nov 25 15:15:07 crc kubenswrapper[4806]: I1125 15:15:07.731366 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vf2vx\" (UniqueName: \"kubernetes.io/projected/98013fa5-ca9f-4800-a63d-be400f825cfa-kube-api-access-vf2vx\") pod \"98013fa5-ca9f-4800-a63d-be400f825cfa\" (UID: \"98013fa5-ca9f-4800-a63d-be400f825cfa\") " Nov 25 15:15:07 crc kubenswrapper[4806]: I1125 15:15:07.731393 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e0ceb758-17b6-4a0e-9851-05d1ef8a8011-fernet-keys\") pod \"e0ceb758-17b6-4a0e-9851-05d1ef8a8011\" (UID: \"e0ceb758-17b6-4a0e-9851-05d1ef8a8011\") " Nov 25 15:15:07 crc kubenswrapper[4806]: I1125 15:15:07.731429 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5zw4\" (UniqueName: \"kubernetes.io/projected/e0ceb758-17b6-4a0e-9851-05d1ef8a8011-kube-api-access-h5zw4\") pod \"e0ceb758-17b6-4a0e-9851-05d1ef8a8011\" (UID: \"e0ceb758-17b6-4a0e-9851-05d1ef8a8011\") " Nov 25 15:15:07 crc kubenswrapper[4806]: I1125 15:15:07.731445 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0ceb758-17b6-4a0e-9851-05d1ef8a8011-config-data\") pod \"e0ceb758-17b6-4a0e-9851-05d1ef8a8011\" (UID: \"e0ceb758-17b6-4a0e-9851-05d1ef8a8011\") " Nov 25 15:15:07 crc kubenswrapper[4806]: I1125 15:15:07.732827 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98013fa5-ca9f-4800-a63d-be400f825cfa-config-volume" (OuterVolumeSpecName: "config-volume") pod "98013fa5-ca9f-4800-a63d-be400f825cfa" (UID: 
"98013fa5-ca9f-4800-a63d-be400f825cfa"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:15:07 crc kubenswrapper[4806]: I1125 15:15:07.739016 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98013fa5-ca9f-4800-a63d-be400f825cfa-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "98013fa5-ca9f-4800-a63d-be400f825cfa" (UID: "98013fa5-ca9f-4800-a63d-be400f825cfa"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:07 crc kubenswrapper[4806]: I1125 15:15:07.740920 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0ceb758-17b6-4a0e-9851-05d1ef8a8011-kube-api-access-h5zw4" (OuterVolumeSpecName: "kube-api-access-h5zw4") pod "e0ceb758-17b6-4a0e-9851-05d1ef8a8011" (UID: "e0ceb758-17b6-4a0e-9851-05d1ef8a8011"). InnerVolumeSpecName "kube-api-access-h5zw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:15:07 crc kubenswrapper[4806]: I1125 15:15:07.741744 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0ceb758-17b6-4a0e-9851-05d1ef8a8011-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "e0ceb758-17b6-4a0e-9851-05d1ef8a8011" (UID: "e0ceb758-17b6-4a0e-9851-05d1ef8a8011"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:07 crc kubenswrapper[4806]: I1125 15:15:07.742746 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0ceb758-17b6-4a0e-9851-05d1ef8a8011-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e0ceb758-17b6-4a0e-9851-05d1ef8a8011" (UID: "e0ceb758-17b6-4a0e-9851-05d1ef8a8011"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:07 crc kubenswrapper[4806]: I1125 15:15:07.743745 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0ceb758-17b6-4a0e-9851-05d1ef8a8011-scripts" (OuterVolumeSpecName: "scripts") pod "e0ceb758-17b6-4a0e-9851-05d1ef8a8011" (UID: "e0ceb758-17b6-4a0e-9851-05d1ef8a8011"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:07 crc kubenswrapper[4806]: I1125 15:15:07.744152 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98013fa5-ca9f-4800-a63d-be400f825cfa-kube-api-access-vf2vx" (OuterVolumeSpecName: "kube-api-access-vf2vx") pod "98013fa5-ca9f-4800-a63d-be400f825cfa" (UID: "98013fa5-ca9f-4800-a63d-be400f825cfa"). InnerVolumeSpecName "kube-api-access-vf2vx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:15:07 crc kubenswrapper[4806]: I1125 15:15:07.768870 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0ceb758-17b6-4a0e-9851-05d1ef8a8011-config-data" (OuterVolumeSpecName: "config-data") pod "e0ceb758-17b6-4a0e-9851-05d1ef8a8011" (UID: "e0ceb758-17b6-4a0e-9851-05d1ef8a8011"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:07 crc kubenswrapper[4806]: I1125 15:15:07.771055 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0ceb758-17b6-4a0e-9851-05d1ef8a8011-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e0ceb758-17b6-4a0e-9851-05d1ef8a8011" (UID: "e0ceb758-17b6-4a0e-9851-05d1ef8a8011"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:07 crc kubenswrapper[4806]: I1125 15:15:07.834073 4806 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98013fa5-ca9f-4800-a63d-be400f825cfa-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:07 crc kubenswrapper[4806]: I1125 15:15:07.834125 4806 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e0ceb758-17b6-4a0e-9851-05d1ef8a8011-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:07 crc kubenswrapper[4806]: I1125 15:15:07.834136 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0ceb758-17b6-4a0e-9851-05d1ef8a8011-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:07 crc kubenswrapper[4806]: I1125 15:15:07.834145 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vf2vx\" (UniqueName: \"kubernetes.io/projected/98013fa5-ca9f-4800-a63d-be400f825cfa-kube-api-access-vf2vx\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:07 crc kubenswrapper[4806]: I1125 15:15:07.834157 4806 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e0ceb758-17b6-4a0e-9851-05d1ef8a8011-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:07 crc kubenswrapper[4806]: I1125 15:15:07.834165 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5zw4\" (UniqueName: \"kubernetes.io/projected/e0ceb758-17b6-4a0e-9851-05d1ef8a8011-kube-api-access-h5zw4\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:07 crc kubenswrapper[4806]: I1125 15:15:07.834174 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0ceb758-17b6-4a0e-9851-05d1ef8a8011-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:07 crc kubenswrapper[4806]: I1125 15:15:07.834182 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0ceb758-17b6-4a0e-9851-05d1ef8a8011-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:07 crc kubenswrapper[4806]: I1125 15:15:07.834194 4806 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/98013fa5-ca9f-4800-a63d-be400f825cfa-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:07 crc kubenswrapper[4806]: I1125 15:15:07.847002 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-hk7c8" podUID="5e6e7521-889f-47b4-84d3-0437b1a844f2" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.151:5353: connect: connection refused" Nov 25 15:15:08 crc kubenswrapper[4806]: I1125 15:15:08.050863 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-7qnjn" Nov 25 15:15:08 crc kubenswrapper[4806]: I1125 15:15:08.050893 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-7qnjn" event={"ID":"e0ceb758-17b6-4a0e-9851-05d1ef8a8011","Type":"ContainerDied","Data":"d60798c78348c6fc968800f380ddbae5f38ad98c3368ccef5658e90ee2ec8661"} Nov 25 15:15:08 crc kubenswrapper[4806]: I1125 15:15:08.050933 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d60798c78348c6fc968800f380ddbae5f38ad98c3368ccef5658e90ee2ec8661" Nov 25 15:15:08 crc kubenswrapper[4806]: I1125 15:15:08.053444 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-8j2s9" event={"ID":"98013fa5-ca9f-4800-a63d-be400f825cfa","Type":"ContainerDied","Data":"89d076d66933052252a27a3573a9d826be213b89107150e8b168000c5efd1988"} Nov 25 15:15:08 crc kubenswrapper[4806]: I1125 15:15:08.053468 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89d076d66933052252a27a3573a9d826be213b89107150e8b168000c5efd1988" Nov 25 15:15:08 crc kubenswrapper[4806]: I1125 15:15:08.053541 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-8j2s9" Nov 25 15:15:08 crc kubenswrapper[4806]: I1125 15:15:08.781118 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-7qnjn"] Nov 25 15:15:08 crc kubenswrapper[4806]: I1125 15:15:08.792219 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-7qnjn"] Nov 25 15:15:08 crc kubenswrapper[4806]: I1125 15:15:08.877499 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-fcs94"] Nov 25 15:15:08 crc kubenswrapper[4806]: E1125 15:15:08.877875 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0ceb758-17b6-4a0e-9851-05d1ef8a8011" containerName="keystone-bootstrap" Nov 25 15:15:08 crc kubenswrapper[4806]: I1125 15:15:08.877891 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0ceb758-17b6-4a0e-9851-05d1ef8a8011" containerName="keystone-bootstrap" Nov 25 15:15:08 crc kubenswrapper[4806]: E1125 15:15:08.877906 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98013fa5-ca9f-4800-a63d-be400f825cfa" containerName="collect-profiles" Nov 25 15:15:08 crc kubenswrapper[4806]: I1125 15:15:08.877912 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="98013fa5-ca9f-4800-a63d-be400f825cfa" containerName="collect-profiles" Nov 25 15:15:08 crc kubenswrapper[4806]: I1125 15:15:08.878094 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="98013fa5-ca9f-4800-a63d-be400f825cfa" containerName="collect-profiles" Nov 25 15:15:08 crc kubenswrapper[4806]: I1125 15:15:08.878116 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0ceb758-17b6-4a0e-9851-05d1ef8a8011" containerName="keystone-bootstrap" Nov 25 15:15:08 crc kubenswrapper[4806]: I1125 15:15:08.879274 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-fcs94" Nov 25 15:15:08 crc kubenswrapper[4806]: I1125 15:15:08.883248 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 25 15:15:08 crc kubenswrapper[4806]: I1125 15:15:08.883278 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 25 15:15:08 crc kubenswrapper[4806]: I1125 15:15:08.883420 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nmg8l" Nov 25 15:15:08 crc kubenswrapper[4806]: I1125 15:15:08.883544 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 25 15:15:08 crc kubenswrapper[4806]: I1125 15:15:08.905262 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-fcs94"] Nov 25 15:15:08 crc kubenswrapper[4806]: I1125 15:15:08.965830 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1ea45747-c756-4447-b140-e6bc10188ec3-fernet-keys\") pod \"keystone-bootstrap-fcs94\" (UID: \"1ea45747-c756-4447-b140-e6bc10188ec3\") " pod="openstack/keystone-bootstrap-fcs94" Nov 25 15:15:08 crc kubenswrapper[4806]: I1125 15:15:08.965998 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ea45747-c756-4447-b140-e6bc10188ec3-config-data\") pod \"keystone-bootstrap-fcs94\" (UID: \"1ea45747-c756-4447-b140-e6bc10188ec3\") " pod="openstack/keystone-bootstrap-fcs94" Nov 25 15:15:08 crc kubenswrapper[4806]: I1125 15:15:08.966065 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea45747-c756-4447-b140-e6bc10188ec3-combined-ca-bundle\") pod \"keystone-bootstrap-fcs94\" (UID: \"1ea45747-c756-4447-b140-e6bc10188ec3\") " pod="openstack/keystone-bootstrap-fcs94" Nov 25 15:15:08 crc kubenswrapper[4806]: I1125 15:15:08.966090 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1ea45747-c756-4447-b140-e6bc10188ec3-credential-keys\") pod \"keystone-bootstrap-fcs94\" (UID: \"1ea45747-c756-4447-b140-e6bc10188ec3\") " pod="openstack/keystone-bootstrap-fcs94" Nov 25 15:15:08 crc kubenswrapper[4806]: I1125 15:15:08.966129 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gst6c\" (UniqueName: \"kubernetes.io/projected/1ea45747-c756-4447-b140-e6bc10188ec3-kube-api-access-gst6c\") pod \"keystone-bootstrap-fcs94\" (UID: \"1ea45747-c756-4447-b140-e6bc10188ec3\") " pod="openstack/keystone-bootstrap-fcs94" Nov 25 15:15:08 crc kubenswrapper[4806]: I1125 15:15:08.966179 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ea45747-c756-4447-b140-e6bc10188ec3-scripts\") pod \"keystone-bootstrap-fcs94\" (UID: \"1ea45747-c756-4447-b140-e6bc10188ec3\") " pod="openstack/keystone-bootstrap-fcs94" Nov 25 15:15:09 crc kubenswrapper[4806]: I1125 15:15:09.068082 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ea45747-c756-4447-b140-e6bc10188ec3-scripts\") pod \"keystone-bootstrap-fcs94\" (UID: 
\"1ea45747-c756-4447-b140-e6bc10188ec3\") " pod="openstack/keystone-bootstrap-fcs94" Nov 25 15:15:09 crc kubenswrapper[4806]: I1125 15:15:09.068204 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1ea45747-c756-4447-b140-e6bc10188ec3-fernet-keys\") pod \"keystone-bootstrap-fcs94\" (UID: \"1ea45747-c756-4447-b140-e6bc10188ec3\") " pod="openstack/keystone-bootstrap-fcs94" Nov 25 15:15:09 crc kubenswrapper[4806]: I1125 15:15:09.068271 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ea45747-c756-4447-b140-e6bc10188ec3-config-data\") pod \"keystone-bootstrap-fcs94\" (UID: \"1ea45747-c756-4447-b140-e6bc10188ec3\") " pod="openstack/keystone-bootstrap-fcs94" Nov 25 15:15:09 crc kubenswrapper[4806]: I1125 15:15:09.068376 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea45747-c756-4447-b140-e6bc10188ec3-combined-ca-bundle\") pod \"keystone-bootstrap-fcs94\" (UID: \"1ea45747-c756-4447-b140-e6bc10188ec3\") " pod="openstack/keystone-bootstrap-fcs94" Nov 25 15:15:09 crc kubenswrapper[4806]: I1125 15:15:09.068402 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1ea45747-c756-4447-b140-e6bc10188ec3-credential-keys\") pod \"keystone-bootstrap-fcs94\" (UID: \"1ea45747-c756-4447-b140-e6bc10188ec3\") " pod="openstack/keystone-bootstrap-fcs94" Nov 25 15:15:09 crc kubenswrapper[4806]: I1125 15:15:09.068448 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gst6c\" (UniqueName: \"kubernetes.io/projected/1ea45747-c756-4447-b140-e6bc10188ec3-kube-api-access-gst6c\") pod \"keystone-bootstrap-fcs94\" (UID: \"1ea45747-c756-4447-b140-e6bc10188ec3\") " pod="openstack/keystone-bootstrap-fcs94" Nov 25 15:15:09 crc kubenswrapper[4806]: I1125 15:15:09.074249 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ea45747-c756-4447-b140-e6bc10188ec3-config-data\") pod \"keystone-bootstrap-fcs94\" (UID: \"1ea45747-c756-4447-b140-e6bc10188ec3\") " pod="openstack/keystone-bootstrap-fcs94" Nov 25 15:15:09 crc kubenswrapper[4806]: I1125 15:15:09.074456 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ea45747-c756-4447-b140-e6bc10188ec3-scripts\") pod \"keystone-bootstrap-fcs94\" (UID: \"1ea45747-c756-4447-b140-e6bc10188ec3\") " pod="openstack/keystone-bootstrap-fcs94" Nov 25 15:15:09 crc kubenswrapper[4806]: I1125 15:15:09.074719 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1ea45747-c756-4447-b140-e6bc10188ec3-credential-keys\") pod \"keystone-bootstrap-fcs94\" (UID: \"1ea45747-c756-4447-b140-e6bc10188ec3\") " pod="openstack/keystone-bootstrap-fcs94" Nov 25 15:15:09 crc kubenswrapper[4806]: I1125 15:15:09.074834 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1ea45747-c756-4447-b140-e6bc10188ec3-fernet-keys\") pod \"keystone-bootstrap-fcs94\" (UID: \"1ea45747-c756-4447-b140-e6bc10188ec3\") " pod="openstack/keystone-bootstrap-fcs94" Nov 25 15:15:09 crc kubenswrapper[4806]: I1125 15:15:09.079879 4806 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea45747-c756-4447-b140-e6bc10188ec3-combined-ca-bundle\") pod \"keystone-bootstrap-fcs94\" (UID: \"1ea45747-c756-4447-b140-e6bc10188ec3\") " pod="openstack/keystone-bootstrap-fcs94" Nov 25 15:15:09 crc kubenswrapper[4806]: I1125 15:15:09.094657 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gst6c\" (UniqueName: \"kubernetes.io/projected/1ea45747-c756-4447-b140-e6bc10188ec3-kube-api-access-gst6c\") pod \"keystone-bootstrap-fcs94\" (UID: \"1ea45747-c756-4447-b140-e6bc10188ec3\") " pod="openstack/keystone-bootstrap-fcs94" Nov 25 15:15:09 crc kubenswrapper[4806]: I1125 15:15:09.206700 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-fcs94" Nov 25 15:15:10 crc kubenswrapper[4806]: I1125 15:15:10.104770 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0ceb758-17b6-4a0e-9851-05d1ef8a8011" path="/var/lib/kubelet/pods/e0ceb758-17b6-4a0e-9851-05d1ef8a8011/volumes" Nov 25 15:15:12 crc kubenswrapper[4806]: I1125 15:15:12.846599 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-hk7c8" podUID="5e6e7521-889f-47b4-84d3-0437b1a844f2" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.151:5353: connect: connection refused" Nov 25 15:15:17 crc kubenswrapper[4806]: I1125 15:15:17.846650 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-hk7c8" podUID="5e6e7521-889f-47b4-84d3-0437b1a844f2" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.151:5353: connect: connection refused" Nov 25 15:15:17 crc kubenswrapper[4806]: I1125 15:15:17.847261 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-hk7c8" Nov 25 15:15:18 crc kubenswrapper[4806]: I1125 15:15:18.154291 4806 generic.go:334] "Generic (PLEG): container finished" podID="e7e521a6-108d-45db-ad10-42e394a9cd1a" containerID="706f4aa3780c37be61f5872cab7a0bd985ca6ac579fc96ba25423056c7cce6d8" exitCode=0 Nov 25 15:15:18 crc kubenswrapper[4806]: I1125 15:15:18.154418 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-n88tp" event={"ID":"e7e521a6-108d-45db-ad10-42e394a9cd1a","Type":"ContainerDied","Data":"706f4aa3780c37be61f5872cab7a0bd985ca6ac579fc96ba25423056c7cce6d8"} Nov 25 15:15:22 crc kubenswrapper[4806]: I1125 15:15:22.846937 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-hk7c8" podUID="5e6e7521-889f-47b4-84d3-0437b1a844f2" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.151:5353: connect: connection refused" Nov 25 15:15:24 crc kubenswrapper[4806]: I1125 15:15:24.068526 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-n88tp" Nov 25 15:15:24 crc kubenswrapper[4806]: I1125 15:15:24.153940 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7e521a6-108d-45db-ad10-42e394a9cd1a-config-data\") pod \"e7e521a6-108d-45db-ad10-42e394a9cd1a\" (UID: \"e7e521a6-108d-45db-ad10-42e394a9cd1a\") " Nov 25 15:15:24 crc kubenswrapper[4806]: I1125 15:15:24.154245 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7e521a6-108d-45db-ad10-42e394a9cd1a-combined-ca-bundle\") pod \"e7e521a6-108d-45db-ad10-42e394a9cd1a\" (UID: \"e7e521a6-108d-45db-ad10-42e394a9cd1a\") " Nov 25 15:15:24 crc kubenswrapper[4806]: I1125 15:15:24.154287 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e7e521a6-108d-45db-ad10-42e394a9cd1a-db-sync-config-data\") pod \"e7e521a6-108d-45db-ad10-42e394a9cd1a\" (UID: \"e7e521a6-108d-45db-ad10-42e394a9cd1a\") " Nov 25 15:15:24 crc kubenswrapper[4806]: I1125 15:15:24.154307 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gncc\" (UniqueName: \"kubernetes.io/projected/e7e521a6-108d-45db-ad10-42e394a9cd1a-kube-api-access-9gncc\") pod \"e7e521a6-108d-45db-ad10-42e394a9cd1a\" (UID: \"e7e521a6-108d-45db-ad10-42e394a9cd1a\") " Nov 25 15:15:24 crc kubenswrapper[4806]: I1125 15:15:24.161036 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e521a6-108d-45db-ad10-42e394a9cd1a-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e7e521a6-108d-45db-ad10-42e394a9cd1a" (UID: "e7e521a6-108d-45db-ad10-42e394a9cd1a"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:24 crc kubenswrapper[4806]: I1125 15:15:24.180066 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e521a6-108d-45db-ad10-42e394a9cd1a-kube-api-access-9gncc" (OuterVolumeSpecName: "kube-api-access-9gncc") pod "e7e521a6-108d-45db-ad10-42e394a9cd1a" (UID: "e7e521a6-108d-45db-ad10-42e394a9cd1a"). InnerVolumeSpecName "kube-api-access-9gncc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:15:24 crc kubenswrapper[4806]: I1125 15:15:24.193541 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e521a6-108d-45db-ad10-42e394a9cd1a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e7e521a6-108d-45db-ad10-42e394a9cd1a" (UID: "e7e521a6-108d-45db-ad10-42e394a9cd1a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:24 crc kubenswrapper[4806]: I1125 15:15:24.214950 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e521a6-108d-45db-ad10-42e394a9cd1a-config-data" (OuterVolumeSpecName: "config-data") pod "e7e521a6-108d-45db-ad10-42e394a9cd1a" (UID: "e7e521a6-108d-45db-ad10-42e394a9cd1a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:24 crc kubenswrapper[4806]: I1125 15:15:24.226240 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-n88tp" event={"ID":"e7e521a6-108d-45db-ad10-42e394a9cd1a","Type":"ContainerDied","Data":"e05e684aca7a339946aefdafee726782d0134fc19edb029a8b1c5414d6970d54"} Nov 25 15:15:24 crc kubenswrapper[4806]: I1125 15:15:24.226290 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-n88tp" Nov 25 15:15:24 crc kubenswrapper[4806]: I1125 15:15:24.226306 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e05e684aca7a339946aefdafee726782d0134fc19edb029a8b1c5414d6970d54" Nov 25 15:15:24 crc kubenswrapper[4806]: I1125 15:15:24.257125 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7e521a6-108d-45db-ad10-42e394a9cd1a-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:24 crc kubenswrapper[4806]: I1125 15:15:24.257175 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7e521a6-108d-45db-ad10-42e394a9cd1a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:24 crc kubenswrapper[4806]: I1125 15:15:24.257202 4806 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e7e521a6-108d-45db-ad10-42e394a9cd1a-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:24 crc kubenswrapper[4806]: I1125 15:15:24.257214 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gncc\" (UniqueName: \"kubernetes.io/projected/e7e521a6-108d-45db-ad10-42e394a9cd1a-kube-api-access-9gncc\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:24 crc kubenswrapper[4806]: E1125 15:15:24.547875 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Nov 25 15:15:24 crc kubenswrapper[4806]: E1125 15:15:24.548112 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n66bh65fh564h658h668h675h8ch57h64h5d6h674h649h64chd6hf4hffh544h645h6h5bfh5b8h74h587h5c7h54bhc4h94h57h656h94h687h595q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gwkpf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(f1b5c22d-b872-4857-b36c-5441ed9dfc9a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 15:15:25 crc kubenswrapper[4806]: I1125 15:15:25.607624 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-klt6q"] Nov 25 15:15:25 crc kubenswrapper[4806]: E1125 15:15:25.614532 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7e521a6-108d-45db-ad10-42e394a9cd1a" containerName="glance-db-sync" Nov 25 15:15:25 crc kubenswrapper[4806]: I1125 15:15:25.614567 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7e521a6-108d-45db-ad10-42e394a9cd1a" containerName="glance-db-sync" Nov 25 15:15:25 crc kubenswrapper[4806]: I1125 15:15:25.614805 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7e521a6-108d-45db-ad10-42e394a9cd1a" containerName="glance-db-sync" Nov 25 15:15:25 crc kubenswrapper[4806]: I1125 15:15:25.616153 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-klt6q" Nov 25 15:15:25 crc kubenswrapper[4806]: I1125 15:15:25.661285 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-klt6q"] Nov 25 15:15:25 crc kubenswrapper[4806]: I1125 15:15:25.691845 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkw6x\" (UniqueName: \"kubernetes.io/projected/f2488169-196d-4613-aa80-ab2e7a49bfa9-kube-api-access-hkw6x\") pod \"dnsmasq-dns-785d8bcb8c-klt6q\" (UID: \"f2488169-196d-4613-aa80-ab2e7a49bfa9\") " pod="openstack/dnsmasq-dns-785d8bcb8c-klt6q" Nov 25 15:15:25 crc kubenswrapper[4806]: I1125 15:15:25.691935 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2488169-196d-4613-aa80-ab2e7a49bfa9-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-klt6q\" (UID: \"f2488169-196d-4613-aa80-ab2e7a49bfa9\") " pod="openstack/dnsmasq-dns-785d8bcb8c-klt6q" Nov 25 15:15:25 crc kubenswrapper[4806]: I1125 15:15:25.691970 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2488169-196d-4613-aa80-ab2e7a49bfa9-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-klt6q\" (UID: \"f2488169-196d-4613-aa80-ab2e7a49bfa9\") " pod="openstack/dnsmasq-dns-785d8bcb8c-klt6q" Nov 25 15:15:25 crc kubenswrapper[4806]: I1125 15:15:25.692088 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2488169-196d-4613-aa80-ab2e7a49bfa9-config\") pod \"dnsmasq-dns-785d8bcb8c-klt6q\" (UID: \"f2488169-196d-4613-aa80-ab2e7a49bfa9\") " pod="openstack/dnsmasq-dns-785d8bcb8c-klt6q" Nov 25 15:15:25 crc kubenswrapper[4806]: I1125 15:15:25.692111 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f2488169-196d-4613-aa80-ab2e7a49bfa9-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-klt6q\" (UID: \"f2488169-196d-4613-aa80-ab2e7a49bfa9\") " pod="openstack/dnsmasq-dns-785d8bcb8c-klt6q" Nov 25 15:15:25 crc kubenswrapper[4806]: I1125 15:15:25.692147 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2488169-196d-4613-aa80-ab2e7a49bfa9-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-klt6q\" (UID: \"f2488169-196d-4613-aa80-ab2e7a49bfa9\") " pod="openstack/dnsmasq-dns-785d8bcb8c-klt6q" Nov 25 15:15:25 crc kubenswrapper[4806]: I1125 15:15:25.793629 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2488169-196d-4613-aa80-ab2e7a49bfa9-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-klt6q\" (UID: \"f2488169-196d-4613-aa80-ab2e7a49bfa9\") " pod="openstack/dnsmasq-dns-785d8bcb8c-klt6q" Nov 25 15:15:25 crc kubenswrapper[4806]: I1125 15:15:25.793680 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2488169-196d-4613-aa80-ab2e7a49bfa9-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-klt6q\" (UID: \"f2488169-196d-4613-aa80-ab2e7a49bfa9\") " pod="openstack/dnsmasq-dns-785d8bcb8c-klt6q" Nov 25 15:15:25 crc kubenswrapper[4806]: I1125 15:15:25.793753 4806 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2488169-196d-4613-aa80-ab2e7a49bfa9-config\") pod \"dnsmasq-dns-785d8bcb8c-klt6q\" (UID: \"f2488169-196d-4613-aa80-ab2e7a49bfa9\") " pod="openstack/dnsmasq-dns-785d8bcb8c-klt6q" Nov 25 15:15:25 crc kubenswrapper[4806]: I1125 15:15:25.793771 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f2488169-196d-4613-aa80-ab2e7a49bfa9-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-klt6q\" (UID: \"f2488169-196d-4613-aa80-ab2e7a49bfa9\") " pod="openstack/dnsmasq-dns-785d8bcb8c-klt6q" Nov 25 15:15:25 crc kubenswrapper[4806]: I1125 15:15:25.793802 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2488169-196d-4613-aa80-ab2e7a49bfa9-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-klt6q\" (UID: \"f2488169-196d-4613-aa80-ab2e7a49bfa9\") " pod="openstack/dnsmasq-dns-785d8bcb8c-klt6q" Nov 25 15:15:25 crc kubenswrapper[4806]: I1125 15:15:25.793852 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkw6x\" (UniqueName: \"kubernetes.io/projected/f2488169-196d-4613-aa80-ab2e7a49bfa9-kube-api-access-hkw6x\") pod \"dnsmasq-dns-785d8bcb8c-klt6q\" (UID: \"f2488169-196d-4613-aa80-ab2e7a49bfa9\") " pod="openstack/dnsmasq-dns-785d8bcb8c-klt6q" Nov 25 15:15:25 crc kubenswrapper[4806]: I1125 15:15:25.795034 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2488169-196d-4613-aa80-ab2e7a49bfa9-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-klt6q\" (UID: \"f2488169-196d-4613-aa80-ab2e7a49bfa9\") " pod="openstack/dnsmasq-dns-785d8bcb8c-klt6q" Nov 25 15:15:25 crc kubenswrapper[4806]: I1125 15:15:25.795951 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2488169-196d-4613-aa80-ab2e7a49bfa9-config\") pod \"dnsmasq-dns-785d8bcb8c-klt6q\" (UID: \"f2488169-196d-4613-aa80-ab2e7a49bfa9\") " pod="openstack/dnsmasq-dns-785d8bcb8c-klt6q" Nov 25 15:15:25 crc kubenswrapper[4806]: I1125 15:15:25.795957 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2488169-196d-4613-aa80-ab2e7a49bfa9-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-klt6q\" (UID: \"f2488169-196d-4613-aa80-ab2e7a49bfa9\") " pod="openstack/dnsmasq-dns-785d8bcb8c-klt6q" Nov 25 15:15:25 crc kubenswrapper[4806]: I1125 15:15:25.796096 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2488169-196d-4613-aa80-ab2e7a49bfa9-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-klt6q\" (UID: \"f2488169-196d-4613-aa80-ab2e7a49bfa9\") " pod="openstack/dnsmasq-dns-785d8bcb8c-klt6q" Nov 25 15:15:25 crc kubenswrapper[4806]: I1125 15:15:25.796200 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f2488169-196d-4613-aa80-ab2e7a49bfa9-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-klt6q\" (UID: \"f2488169-196d-4613-aa80-ab2e7a49bfa9\") " pod="openstack/dnsmasq-dns-785d8bcb8c-klt6q" Nov 25 15:15:25 crc kubenswrapper[4806]: I1125 15:15:25.821280 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkw6x\" (UniqueName: 
\"kubernetes.io/projected/f2488169-196d-4613-aa80-ab2e7a49bfa9-kube-api-access-hkw6x\") pod \"dnsmasq-dns-785d8bcb8c-klt6q\" (UID: \"f2488169-196d-4613-aa80-ab2e7a49bfa9\") " pod="openstack/dnsmasq-dns-785d8bcb8c-klt6q" Nov 25 15:15:25 crc kubenswrapper[4806]: I1125 15:15:25.955221 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-klt6q" Nov 25 15:15:26 crc kubenswrapper[4806]: I1125 15:15:26.624544 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 15:15:26 crc kubenswrapper[4806]: I1125 15:15:26.626551 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 15:15:26 crc kubenswrapper[4806]: I1125 15:15:26.628601 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 25 15:15:26 crc kubenswrapper[4806]: I1125 15:15:26.628946 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-s7t8r" Nov 25 15:15:26 crc kubenswrapper[4806]: I1125 15:15:26.634776 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 25 15:15:26 crc kubenswrapper[4806]: I1125 15:15:26.647059 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 15:15:26 crc kubenswrapper[4806]: I1125 15:15:26.811641 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bf71fa97-68bf-4b00-9072-da0445c8154b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bf71fa97-68bf-4b00-9072-da0445c8154b\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:26 crc kubenswrapper[4806]: I1125 15:15:26.811778 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-97d90b05-5a54-40f1-981b-562ae2bfc154\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97d90b05-5a54-40f1-981b-562ae2bfc154\") pod \"glance-default-external-api-0\" (UID: \"bf71fa97-68bf-4b00-9072-da0445c8154b\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:26 crc kubenswrapper[4806]: I1125 15:15:26.811856 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr99n\" (UniqueName: \"kubernetes.io/projected/bf71fa97-68bf-4b00-9072-da0445c8154b-kube-api-access-hr99n\") pod \"glance-default-external-api-0\" (UID: \"bf71fa97-68bf-4b00-9072-da0445c8154b\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:26 crc kubenswrapper[4806]: I1125 15:15:26.811921 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf71fa97-68bf-4b00-9072-da0445c8154b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bf71fa97-68bf-4b00-9072-da0445c8154b\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:26 crc kubenswrapper[4806]: I1125 15:15:26.811993 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf71fa97-68bf-4b00-9072-da0445c8154b-scripts\") pod \"glance-default-external-api-0\" (UID: \"bf71fa97-68bf-4b00-9072-da0445c8154b\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:26 crc kubenswrapper[4806]: 
I1125 15:15:26.812069 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf71fa97-68bf-4b00-9072-da0445c8154b-logs\") pod \"glance-default-external-api-0\" (UID: \"bf71fa97-68bf-4b00-9072-da0445c8154b\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:26 crc kubenswrapper[4806]: I1125 15:15:26.812116 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf71fa97-68bf-4b00-9072-da0445c8154b-config-data\") pod \"glance-default-external-api-0\" (UID: \"bf71fa97-68bf-4b00-9072-da0445c8154b\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:26 crc kubenswrapper[4806]: I1125 15:15:26.845890 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 15:15:26 crc kubenswrapper[4806]: I1125 15:15:26.848109 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 15:15:26 crc kubenswrapper[4806]: I1125 15:15:26.850478 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 25 15:15:26 crc kubenswrapper[4806]: I1125 15:15:26.856291 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 15:15:26 crc kubenswrapper[4806]: I1125 15:15:26.913776 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-97d90b05-5a54-40f1-981b-562ae2bfc154\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97d90b05-5a54-40f1-981b-562ae2bfc154\") pod \"glance-default-external-api-0\" (UID: \"bf71fa97-68bf-4b00-9072-da0445c8154b\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:26 crc kubenswrapper[4806]: I1125 15:15:26.913846 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr99n\" (UniqueName: \"kubernetes.io/projected/bf71fa97-68bf-4b00-9072-da0445c8154b-kube-api-access-hr99n\") pod \"glance-default-external-api-0\" (UID: \"bf71fa97-68bf-4b00-9072-da0445c8154b\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:26 crc kubenswrapper[4806]: I1125 15:15:26.913877 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf71fa97-68bf-4b00-9072-da0445c8154b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bf71fa97-68bf-4b00-9072-da0445c8154b\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:26 crc kubenswrapper[4806]: I1125 15:15:26.913927 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf71fa97-68bf-4b00-9072-da0445c8154b-scripts\") pod \"glance-default-external-api-0\" (UID: \"bf71fa97-68bf-4b00-9072-da0445c8154b\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:26 crc kubenswrapper[4806]: I1125 15:15:26.913949 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf71fa97-68bf-4b00-9072-da0445c8154b-logs\") pod \"glance-default-external-api-0\" (UID: \"bf71fa97-68bf-4b00-9072-da0445c8154b\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:26 crc kubenswrapper[4806]: I1125 15:15:26.913977 4806 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf71fa97-68bf-4b00-9072-da0445c8154b-config-data\") pod \"glance-default-external-api-0\" (UID: \"bf71fa97-68bf-4b00-9072-da0445c8154b\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:26 crc kubenswrapper[4806]: I1125 15:15:26.914020 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bf71fa97-68bf-4b00-9072-da0445c8154b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bf71fa97-68bf-4b00-9072-da0445c8154b\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:26 crc kubenswrapper[4806]: I1125 15:15:26.914568 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bf71fa97-68bf-4b00-9072-da0445c8154b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"bf71fa97-68bf-4b00-9072-da0445c8154b\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:26 crc kubenswrapper[4806]: I1125 15:15:26.914798 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf71fa97-68bf-4b00-9072-da0445c8154b-logs\") pod \"glance-default-external-api-0\" (UID: \"bf71fa97-68bf-4b00-9072-da0445c8154b\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:26 crc kubenswrapper[4806]: I1125 15:15:26.917514 4806 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 25 15:15:26 crc kubenswrapper[4806]: I1125 15:15:26.917571 4806 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-97d90b05-5a54-40f1-981b-562ae2bfc154\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97d90b05-5a54-40f1-981b-562ae2bfc154\") pod \"glance-default-external-api-0\" (UID: \"bf71fa97-68bf-4b00-9072-da0445c8154b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b0d2c8bd947cd04e33b263736a5e66dc40906178a29bfc8a7e651131070b0df8/globalmount\"" pod="openstack/glance-default-external-api-0" Nov 25 15:15:26 crc kubenswrapper[4806]: I1125 15:15:26.921184 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf71fa97-68bf-4b00-9072-da0445c8154b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"bf71fa97-68bf-4b00-9072-da0445c8154b\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:26 crc kubenswrapper[4806]: I1125 15:15:26.927281 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf71fa97-68bf-4b00-9072-da0445c8154b-scripts\") pod \"glance-default-external-api-0\" (UID: \"bf71fa97-68bf-4b00-9072-da0445c8154b\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:26 crc kubenswrapper[4806]: I1125 15:15:26.930288 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf71fa97-68bf-4b00-9072-da0445c8154b-config-data\") pod \"glance-default-external-api-0\" (UID: \"bf71fa97-68bf-4b00-9072-da0445c8154b\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:26 crc kubenswrapper[4806]: I1125 15:15:26.940897 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr99n\" (UniqueName: 
\"kubernetes.io/projected/bf71fa97-68bf-4b00-9072-da0445c8154b-kube-api-access-hr99n\") pod \"glance-default-external-api-0\" (UID: \"bf71fa97-68bf-4b00-9072-da0445c8154b\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:26 crc kubenswrapper[4806]: I1125 15:15:26.988724 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-97d90b05-5a54-40f1-981b-562ae2bfc154\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97d90b05-5a54-40f1-981b-562ae2bfc154\") pod \"glance-default-external-api-0\" (UID: \"bf71fa97-68bf-4b00-9072-da0445c8154b\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:27 crc kubenswrapper[4806]: I1125 15:15:27.015969 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/731faa0b-4d3c-4336-913d-e98fd4066184-scripts\") pod \"glance-default-internal-api-0\" (UID: \"731faa0b-4d3c-4336-913d-e98fd4066184\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:27 crc kubenswrapper[4806]: I1125 15:15:27.016021 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/731faa0b-4d3c-4336-913d-e98fd4066184-config-data\") pod \"glance-default-internal-api-0\" (UID: \"731faa0b-4d3c-4336-913d-e98fd4066184\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:27 crc kubenswrapper[4806]: I1125 15:15:27.016056 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b7sx\" (UniqueName: \"kubernetes.io/projected/731faa0b-4d3c-4336-913d-e98fd4066184-kube-api-access-8b7sx\") pod \"glance-default-internal-api-0\" (UID: \"731faa0b-4d3c-4336-913d-e98fd4066184\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:27 crc kubenswrapper[4806]: I1125 15:15:27.016121 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/731faa0b-4d3c-4336-913d-e98fd4066184-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"731faa0b-4d3c-4336-913d-e98fd4066184\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:27 crc kubenswrapper[4806]: I1125 15:15:27.016200 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4ebe71c6-c22e-4d9d-a3bf-252bde5cd36d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ebe71c6-c22e-4d9d-a3bf-252bde5cd36d\") pod \"glance-default-internal-api-0\" (UID: \"731faa0b-4d3c-4336-913d-e98fd4066184\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:27 crc kubenswrapper[4806]: I1125 15:15:27.016230 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/731faa0b-4d3c-4336-913d-e98fd4066184-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"731faa0b-4d3c-4336-913d-e98fd4066184\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:27 crc kubenswrapper[4806]: I1125 15:15:27.016257 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/731faa0b-4d3c-4336-913d-e98fd4066184-logs\") pod \"glance-default-internal-api-0\" (UID: \"731faa0b-4d3c-4336-913d-e98fd4066184\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:27 crc 
kubenswrapper[4806]: I1125 15:15:27.034152 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-hk7c8" Nov 25 15:15:27 crc kubenswrapper[4806]: I1125 15:15:27.117244 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e6e7521-889f-47b4-84d3-0437b1a844f2-ovsdbserver-sb\") pod \"5e6e7521-889f-47b4-84d3-0437b1a844f2\" (UID: \"5e6e7521-889f-47b4-84d3-0437b1a844f2\") " Nov 25 15:15:27 crc kubenswrapper[4806]: I1125 15:15:27.117352 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e6e7521-889f-47b4-84d3-0437b1a844f2-ovsdbserver-nb\") pod \"5e6e7521-889f-47b4-84d3-0437b1a844f2\" (UID: \"5e6e7521-889f-47b4-84d3-0437b1a844f2\") " Nov 25 15:15:27 crc kubenswrapper[4806]: I1125 15:15:27.117564 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhw9z\" (UniqueName: \"kubernetes.io/projected/5e6e7521-889f-47b4-84d3-0437b1a844f2-kube-api-access-jhw9z\") pod \"5e6e7521-889f-47b4-84d3-0437b1a844f2\" (UID: \"5e6e7521-889f-47b4-84d3-0437b1a844f2\") " Nov 25 15:15:27 crc kubenswrapper[4806]: I1125 15:15:27.117587 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5e6e7521-889f-47b4-84d3-0437b1a844f2-dns-swift-storage-0\") pod \"5e6e7521-889f-47b4-84d3-0437b1a844f2\" (UID: \"5e6e7521-889f-47b4-84d3-0437b1a844f2\") " Nov 25 15:15:27 crc kubenswrapper[4806]: I1125 15:15:27.118739 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e6e7521-889f-47b4-84d3-0437b1a844f2-config\") pod \"5e6e7521-889f-47b4-84d3-0437b1a844f2\" (UID: \"5e6e7521-889f-47b4-84d3-0437b1a844f2\") " Nov 25 15:15:27 crc kubenswrapper[4806]: I1125 15:15:27.118816 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e6e7521-889f-47b4-84d3-0437b1a844f2-dns-svc\") pod \"5e6e7521-889f-47b4-84d3-0437b1a844f2\" (UID: \"5e6e7521-889f-47b4-84d3-0437b1a844f2\") " Nov 25 15:15:27 crc kubenswrapper[4806]: I1125 15:15:27.119236 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4ebe71c6-c22e-4d9d-a3bf-252bde5cd36d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ebe71c6-c22e-4d9d-a3bf-252bde5cd36d\") pod \"glance-default-internal-api-0\" (UID: \"731faa0b-4d3c-4336-913d-e98fd4066184\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:27 crc kubenswrapper[4806]: I1125 15:15:27.119306 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/731faa0b-4d3c-4336-913d-e98fd4066184-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"731faa0b-4d3c-4336-913d-e98fd4066184\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:27 crc kubenswrapper[4806]: I1125 15:15:27.119380 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/731faa0b-4d3c-4336-913d-e98fd4066184-logs\") pod \"glance-default-internal-api-0\" (UID: \"731faa0b-4d3c-4336-913d-e98fd4066184\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:27 crc kubenswrapper[4806]: I1125 15:15:27.119451 
4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/731faa0b-4d3c-4336-913d-e98fd4066184-scripts\") pod \"glance-default-internal-api-0\" (UID: \"731faa0b-4d3c-4336-913d-e98fd4066184\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:27 crc kubenswrapper[4806]: I1125 15:15:27.119491 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/731faa0b-4d3c-4336-913d-e98fd4066184-config-data\") pod \"glance-default-internal-api-0\" (UID: \"731faa0b-4d3c-4336-913d-e98fd4066184\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:27 crc kubenswrapper[4806]: I1125 15:15:27.119548 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b7sx\" (UniqueName: \"kubernetes.io/projected/731faa0b-4d3c-4336-913d-e98fd4066184-kube-api-access-8b7sx\") pod \"glance-default-internal-api-0\" (UID: \"731faa0b-4d3c-4336-913d-e98fd4066184\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:27 crc kubenswrapper[4806]: I1125 15:15:27.119665 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/731faa0b-4d3c-4336-913d-e98fd4066184-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"731faa0b-4d3c-4336-913d-e98fd4066184\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:27 crc kubenswrapper[4806]: I1125 15:15:27.120071 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/731faa0b-4d3c-4336-913d-e98fd4066184-logs\") pod \"glance-default-internal-api-0\" (UID: \"731faa0b-4d3c-4336-913d-e98fd4066184\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:27 crc kubenswrapper[4806]: I1125 15:15:27.120083 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/731faa0b-4d3c-4336-913d-e98fd4066184-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"731faa0b-4d3c-4336-913d-e98fd4066184\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:27 crc kubenswrapper[4806]: I1125 15:15:27.135653 4806 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
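
[annotation] The csi_attacher record just above shows why there is no separate device-staging step in this capture: the kubevirt.io.hostpath-provisioner CSI driver does not advertise the STAGE_UNSTAGE_VOLUME capability, so kubelet skips MountDevice and goes straight to per-pod MountVolume.SetUp. The same skip appears twice, once per glance PVC (pvc-97d90b05-5a54-40f1-981b-562ae2bfc154 at 15:15:26 and pvc-4ebe71c6-c22e-4d9d-a3bf-252bde5cd36d here). A separate readability problem is that this capture hard-wraps single journald records at arbitrary widths, splitting messages such as "No sandbox for pod can be found. Need to start a new one" mid-sentence. Below is a minimal sketch, Go standard library only, that re-splits such a capture back into one record per line; the input file name journal.txt and the record-prefix heuristic are my assumptions, not anything taken from this node.

// resplit.go - a minimal sketch, assuming the capture sits in a file
// named journal.txt (hypothetical) and that every journald record
// starts with a syslog-style timestamp plus the host name "crc".
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	raw, err := os.ReadFile("journal.txt") // hypothetical capture file
	if err != nil {
		panic(err)
	}
	// Undo the hard wrapping: collapse every line break (and the
	// surrounding spaces) back into a single space.
	flat := regexp.MustCompile(`\s*\n\s*`).ReplaceAllString(string(raw), " ")
	// Record-prefix heuristic: e.g. "Nov 25 15:15:27 crc ".
	start := regexp.MustCompile(`[A-Z][a-z]{2} [ 0-9]?\d \d\d:\d\d:\d\d crc `)
	locs := start.FindAllStringIndex(flat, -1)
	for i, loc := range locs {
		end := len(flat)
		if i+1 < len(locs) {
			end = locs[i+1][0]
		}
		fmt.Println(flat[loc[0]:end]) // one journald record per line
	}
}

On this capture the heuristic is safe because the "Nov 25 HH:MM:SS crc " prefix never occurs inside a record body, but that is worth re-checking on other journals before trusting the output.
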
Nov 25 15:15:27 crc kubenswrapper[4806]: I1125 15:15:27.135928 4806 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4ebe71c6-c22e-4d9d-a3bf-252bde5cd36d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ebe71c6-c22e-4d9d-a3bf-252bde5cd36d\") pod \"glance-default-internal-api-0\" (UID: \"731faa0b-4d3c-4336-913d-e98fd4066184\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8638b1ae13d11aa578ec8268990588ab56d879a16e582695b5a3249a11d12f4b/globalmount\"" pod="openstack/glance-default-internal-api-0"
Nov 25 15:15:27 crc kubenswrapper[4806]: I1125 15:15:27.136192 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/731faa0b-4d3c-4336-913d-e98fd4066184-config-data\") pod \"glance-default-internal-api-0\" (UID: \"731faa0b-4d3c-4336-913d-e98fd4066184\") " pod="openstack/glance-default-internal-api-0"
Nov 25 15:15:27 crc kubenswrapper[4806]: I1125 15:15:27.138539 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/731faa0b-4d3c-4336-913d-e98fd4066184-scripts\") pod \"glance-default-internal-api-0\" (UID: \"731faa0b-4d3c-4336-913d-e98fd4066184\") " pod="openstack/glance-default-internal-api-0"
Nov 25 15:15:27 crc kubenswrapper[4806]: I1125 15:15:27.139967 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b7sx\" (UniqueName: \"kubernetes.io/projected/731faa0b-4d3c-4336-913d-e98fd4066184-kube-api-access-8b7sx\") pod \"glance-default-internal-api-0\" (UID: \"731faa0b-4d3c-4336-913d-e98fd4066184\") " pod="openstack/glance-default-internal-api-0"
Nov 25 15:15:27 crc kubenswrapper[4806]: I1125 15:15:27.141682 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/731faa0b-4d3c-4336-913d-e98fd4066184-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"731faa0b-4d3c-4336-913d-e98fd4066184\") " pod="openstack/glance-default-internal-api-0"
Nov 25 15:15:27 crc kubenswrapper[4806]: I1125 15:15:27.142984 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e6e7521-889f-47b4-84d3-0437b1a844f2-kube-api-access-jhw9z" (OuterVolumeSpecName: "kube-api-access-jhw9z") pod "5e6e7521-889f-47b4-84d3-0437b1a844f2" (UID: "5e6e7521-889f-47b4-84d3-0437b1a844f2"). InnerVolumeSpecName "kube-api-access-jhw9z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:15:27 crc kubenswrapper[4806]: I1125 15:15:27.193232 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e6e7521-889f-47b4-84d3-0437b1a844f2-config" (OuterVolumeSpecName: "config") pod "5e6e7521-889f-47b4-84d3-0437b1a844f2" (UID: "5e6e7521-889f-47b4-84d3-0437b1a844f2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:15:27 crc kubenswrapper[4806]: I1125 15:15:27.201792 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e6e7521-889f-47b4-84d3-0437b1a844f2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5e6e7521-889f-47b4-84d3-0437b1a844f2" (UID: "5e6e7521-889f-47b4-84d3-0437b1a844f2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:15:27 crc kubenswrapper[4806]: I1125 15:15:27.213778 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e6e7521-889f-47b4-84d3-0437b1a844f2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5e6e7521-889f-47b4-84d3-0437b1a844f2" (UID: "5e6e7521-889f-47b4-84d3-0437b1a844f2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:15:27 crc kubenswrapper[4806]: I1125 15:15:27.220376 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4ebe71c6-c22e-4d9d-a3bf-252bde5cd36d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ebe71c6-c22e-4d9d-a3bf-252bde5cd36d\") pod \"glance-default-internal-api-0\" (UID: \"731faa0b-4d3c-4336-913d-e98fd4066184\") " pod="openstack/glance-default-internal-api-0"
Nov 25 15:15:27 crc kubenswrapper[4806]: I1125 15:15:27.223540 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e6e7521-889f-47b4-84d3-0437b1a844f2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 25 15:15:27 crc kubenswrapper[4806]: I1125 15:15:27.223586 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhw9z\" (UniqueName: \"kubernetes.io/projected/5e6e7521-889f-47b4-84d3-0437b1a844f2-kube-api-access-jhw9z\") on node \"crc\" DevicePath \"\""
Nov 25 15:15:27 crc kubenswrapper[4806]: I1125 15:15:27.223606 4806 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5e6e7521-889f-47b4-84d3-0437b1a844f2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Nov 25 15:15:27 crc kubenswrapper[4806]: I1125 15:15:27.223604 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e6e7521-889f-47b4-84d3-0437b1a844f2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5e6e7521-889f-47b4-84d3-0437b1a844f2" (UID: "5e6e7521-889f-47b4-84d3-0437b1a844f2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:15:27 crc kubenswrapper[4806]: I1125 15:15:27.223620 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e6e7521-889f-47b4-84d3-0437b1a844f2-config\") on node \"crc\" DevicePath \"\""
Nov 25 15:15:27 crc kubenswrapper[4806]: I1125 15:15:27.226834 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e6e7521-889f-47b4-84d3-0437b1a844f2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5e6e7521-889f-47b4-84d3-0437b1a844f2" (UID: "5e6e7521-889f-47b4-84d3-0437b1a844f2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:15:27 crc kubenswrapper[4806]: I1125 15:15:27.263089 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-hk7c8" event={"ID":"5e6e7521-889f-47b4-84d3-0437b1a844f2","Type":"ContainerDied","Data":"0958d3651e07f4eea8ce01f9b2533a65cccada4c8b77b00b430c8c005501cd5a"}
Nov 25 15:15:27 crc kubenswrapper[4806]: I1125 15:15:27.263164 4806 scope.go:117] "RemoveContainer" containerID="e7da21c825b3a79732a3eb3454f858319557707db5281002204c2f7990df1bc2"
Nov 25 15:15:27 crc kubenswrapper[4806]: I1125 15:15:27.263176 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-hk7c8"
Nov 25 15:15:27 crc kubenswrapper[4806]: I1125 15:15:27.271930 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 25 15:15:27 crc kubenswrapper[4806]: I1125 15:15:27.325515 4806 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e6e7521-889f-47b4-84d3-0437b1a844f2-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 25 15:15:27 crc kubenswrapper[4806]: I1125 15:15:27.325561 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e6e7521-889f-47b4-84d3-0437b1a844f2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 25 15:15:27 crc kubenswrapper[4806]: I1125 15:15:27.327222 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-hk7c8"]
Nov 25 15:15:27 crc kubenswrapper[4806]: I1125 15:15:27.354632 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-hk7c8"]
Nov 25 15:15:27 crc kubenswrapper[4806]: I1125 15:15:27.481597 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Nov 25 15:15:28 crc kubenswrapper[4806]: E1125 15:15:28.072678 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified"
Nov 25 15:15:28 crc kubenswrapper[4806]: E1125 15:15:28.073240 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z2h2c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-7lfx4_openstack(a2e7e600-c1a4-4bda-910b-c11fe9411cc9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 25 15:15:28 crc kubenswrapper[4806]: E1125 15:15:28.074815 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-7lfx4" podUID="a2e7e600-c1a4-4bda-910b-c11fe9411cc9"
Nov 25 15:15:28 crc kubenswrapper[4806]: I1125 15:15:28.107934 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e6e7521-889f-47b4-84d3-0437b1a844f2" path="/var/lib/kubelet/pods/5e6e7521-889f-47b4-84d3-0437b1a844f2/volumes"
Nov 25 15:15:28 crc kubenswrapper[4806]: E1125 15:15:28.279464 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-7lfx4" podUID="a2e7e600-c1a4-4bda-910b-c11fe9411cc9"
Nov 25 15:15:28 crc kubenswrapper[4806]: I1125 15:15:28.510359 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 25 15:15:28 crc kubenswrapper[4806]: I1125 15:15:28.573432 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 25 15:15:38 crc kubenswrapper[4806]: I1125 15:15:38.531637 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-fcs94"]
Nov 25 15:15:38 crc kubenswrapper[4806]: I1125 15:15:38.909898 4806 scope.go:117] "RemoveContainer" containerID="aadaac0b50b3f69ecc9c13edb3a6bbd2065e3a51ad3a3425e208f568df6f9b5f"
Nov 25 15:15:39 crc kubenswrapper[4806]: E1125 15:15:39.112113 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current"
Nov 25 15:15:39 crc kubenswrapper[4806]: E1125 15:15:39.112743 4806 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current"
Nov 25 15:15:39 crc kubenswrapper[4806]: E1125 15:15:39.112914 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xdpx6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-drlb4_openstack(c2503ad9-21ed-44c9-ae5a-25307c751865): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 25 15:15:39 crc kubenswrapper[4806]: E1125 15:15:39.114303 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cloudkitty-db-sync-drlb4" podUID="c2503ad9-21ed-44c9-ae5a-25307c751865"
Nov 25 15:15:39 crc kubenswrapper[4806]: I1125 15:15:39.418825 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fcs94" event={"ID":"1ea45747-c756-4447-b140-e6bc10188ec3","Type":"ContainerStarted","Data":"1138faf56e19a39e67f489507496efc23083b69c6124b7c016a117a0b416c70c"}
Nov 25 15:15:39 crc kubenswrapper[4806]: I1125 15:15:39.427035 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-n7cnj" event={"ID":"08c00715-2142-4aef-ae81-16ce4c5cba4d","Type":"ContainerStarted","Data":"2868621162c88a865d5cebb0a7e16b006a8fa6ffff07a11570251357df8e94f2"}
Nov 25 15:15:39 crc kubenswrapper[4806]: E1125 15:15:39.428021 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-drlb4" podUID="c2503ad9-21ed-44c9-ae5a-25307c751865"
Nov 25 15:15:39 crc kubenswrapper[4806]: I1125 15:15:39.451603 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-bqhxc" podStartSLOduration=14.300781009 podStartE2EDuration="45.45157987s" podCreationTimestamp="2025-11-25 15:14:54 +0000 UTC" firstStartedPulling="2025-11-25 15:14:55.796532072 +0000 UTC m=+1328.448674493" lastFinishedPulling="2025-11-25 15:15:26.947330943 +0000 UTC m=+1359.599473354" observedRunningTime="2025-11-25 15:15:39.440547996 +0000 UTC m=+1372.092690437" watchObservedRunningTime="2025-11-25 15:15:39.45157987 +0000 UTC m=+1372.103722271"
Nov 25 15:15:39 crc kubenswrapper[4806]: I1125 15:15:39.501705 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-n7cnj" podStartSLOduration=11.104766074 podStartE2EDuration="46.501680507s" podCreationTimestamp="2025-11-25 15:14:53 +0000 UTC" firstStartedPulling="2025-11-25 15:14:55.583059561 +0000 UTC m=+1328.235201972" lastFinishedPulling="2025-11-25 15:15:30.979973984 +0000 UTC m=+1363.632116405" observedRunningTime="2025-11-25 15:15:39.478049474 +0000 UTC m=+1372.130191895" watchObservedRunningTime="2025-11-25 15:15:39.501680507 +0000 UTC m=+1372.153822918"
Nov 25 15:15:39 crc kubenswrapper[4806]: I1125 15:15:39.547358 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-77qk4"]
Nov 25 15:15:39 crc kubenswrapper[4806]: E1125 15:15:39.547759 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e6e7521-889f-47b4-84d3-0437b1a844f2" containerName="dnsmasq-dns"
Nov 25 15:15:39 crc kubenswrapper[4806]: I1125 15:15:39.547772 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e6e7521-889f-47b4-84d3-0437b1a844f2" containerName="dnsmasq-dns"
Nov 25 15:15:39 crc kubenswrapper[4806]: E1125 15:15:39.547794 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e6e7521-889f-47b4-84d3-0437b1a844f2" containerName="init"
Nov 25 15:15:39 crc kubenswrapper[4806]: I1125 15:15:39.547801 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e6e7521-889f-47b4-84d3-0437b1a844f2" containerName="init" Nov 25 15:15:39 crc kubenswrapper[4806]: I1125 15:15:39.547993 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e6e7521-889f-47b4-84d3-0437b1a844f2" containerName="dnsmasq-dns" Nov 25 15:15:39 crc kubenswrapper[4806]: I1125 15:15:39.549425 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-77qk4" Nov 25 15:15:39 crc kubenswrapper[4806]: I1125 15:15:39.559202 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-77qk4"] Nov 25 15:15:39 crc kubenswrapper[4806]: I1125 15:15:39.573549 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-klt6q"] Nov 25 15:15:39 crc kubenswrapper[4806]: I1125 15:15:39.657720 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 15:15:39 crc kubenswrapper[4806]: I1125 15:15:39.718713 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19d636cf-e82d-48c3-82db-321f0505c5ab-catalog-content\") pod \"redhat-operators-77qk4\" (UID: \"19d636cf-e82d-48c3-82db-321f0505c5ab\") " pod="openshift-marketplace/redhat-operators-77qk4" Nov 25 15:15:39 crc kubenswrapper[4806]: I1125 15:15:39.718941 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqb49\" (UniqueName: \"kubernetes.io/projected/19d636cf-e82d-48c3-82db-321f0505c5ab-kube-api-access-cqb49\") pod \"redhat-operators-77qk4\" (UID: \"19d636cf-e82d-48c3-82db-321f0505c5ab\") " pod="openshift-marketplace/redhat-operators-77qk4" Nov 25 15:15:39 crc kubenswrapper[4806]: I1125 15:15:39.719159 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19d636cf-e82d-48c3-82db-321f0505c5ab-utilities\") pod \"redhat-operators-77qk4\" (UID: \"19d636cf-e82d-48c3-82db-321f0505c5ab\") " pod="openshift-marketplace/redhat-operators-77qk4" Nov 25 15:15:39 crc kubenswrapper[4806]: I1125 15:15:39.744662 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 15:15:39 crc kubenswrapper[4806]: W1125 15:15:39.780225 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod731faa0b_4d3c_4336_913d_e98fd4066184.slice/crio-f3e4c7c73d07351cda4fe06f67dce1019f028b3752374b2d0da1af2e52ffa6e1 WatchSource:0}: Error finding container f3e4c7c73d07351cda4fe06f67dce1019f028b3752374b2d0da1af2e52ffa6e1: Status 404 returned error can't find the container with id f3e4c7c73d07351cda4fe06f67dce1019f028b3752374b2d0da1af2e52ffa6e1 Nov 25 15:15:39 crc kubenswrapper[4806]: W1125 15:15:39.784237 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf71fa97_68bf_4b00_9072_da0445c8154b.slice/crio-43efd7a8bda47c914855f96cdcdcd32f0ccfa8926d83e7a29da59e36cd05751a WatchSource:0}: Error finding container 43efd7a8bda47c914855f96cdcdcd32f0ccfa8926d83e7a29da59e36cd05751a: Status 404 returned error can't find the container with id 43efd7a8bda47c914855f96cdcdcd32f0ccfa8926d83e7a29da59e36cd05751a Nov 25 
15:15:39 crc kubenswrapper[4806]: I1125 15:15:39.820684 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19d636cf-e82d-48c3-82db-321f0505c5ab-catalog-content\") pod \"redhat-operators-77qk4\" (UID: \"19d636cf-e82d-48c3-82db-321f0505c5ab\") " pod="openshift-marketplace/redhat-operators-77qk4" Nov 25 15:15:39 crc kubenswrapper[4806]: I1125 15:15:39.820825 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqb49\" (UniqueName: \"kubernetes.io/projected/19d636cf-e82d-48c3-82db-321f0505c5ab-kube-api-access-cqb49\") pod \"redhat-operators-77qk4\" (UID: \"19d636cf-e82d-48c3-82db-321f0505c5ab\") " pod="openshift-marketplace/redhat-operators-77qk4" Nov 25 15:15:39 crc kubenswrapper[4806]: I1125 15:15:39.820883 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19d636cf-e82d-48c3-82db-321f0505c5ab-utilities\") pod \"redhat-operators-77qk4\" (UID: \"19d636cf-e82d-48c3-82db-321f0505c5ab\") " pod="openshift-marketplace/redhat-operators-77qk4" Nov 25 15:15:39 crc kubenswrapper[4806]: I1125 15:15:39.821200 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19d636cf-e82d-48c3-82db-321f0505c5ab-catalog-content\") pod \"redhat-operators-77qk4\" (UID: \"19d636cf-e82d-48c3-82db-321f0505c5ab\") " pod="openshift-marketplace/redhat-operators-77qk4" Nov 25 15:15:39 crc kubenswrapper[4806]: I1125 15:15:39.821439 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19d636cf-e82d-48c3-82db-321f0505c5ab-utilities\") pod \"redhat-operators-77qk4\" (UID: \"19d636cf-e82d-48c3-82db-321f0505c5ab\") " pod="openshift-marketplace/redhat-operators-77qk4" Nov 25 15:15:39 crc kubenswrapper[4806]: I1125 15:15:39.842976 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqb49\" (UniqueName: \"kubernetes.io/projected/19d636cf-e82d-48c3-82db-321f0505c5ab-kube-api-access-cqb49\") pod \"redhat-operators-77qk4\" (UID: \"19d636cf-e82d-48c3-82db-321f0505c5ab\") " pod="openshift-marketplace/redhat-operators-77qk4" Nov 25 15:15:39 crc kubenswrapper[4806]: I1125 15:15:39.882531 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-77qk4" Nov 25 15:15:40 crc kubenswrapper[4806]: I1125 15:15:40.459603 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"731faa0b-4d3c-4336-913d-e98fd4066184","Type":"ContainerStarted","Data":"f3e4c7c73d07351cda4fe06f67dce1019f028b3752374b2d0da1af2e52ffa6e1"} Nov 25 15:15:40 crc kubenswrapper[4806]: I1125 15:15:40.472442 4806 generic.go:334] "Generic (PLEG): container finished" podID="f2488169-196d-4613-aa80-ab2e7a49bfa9" containerID="54dc37b5a2461fa63e90a1be9c0a479604e3351f13feee0880f676c2b6e42bdb" exitCode=0 Nov 25 15:15:40 crc kubenswrapper[4806]: I1125 15:15:40.472505 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-klt6q" event={"ID":"f2488169-196d-4613-aa80-ab2e7a49bfa9","Type":"ContainerDied","Data":"54dc37b5a2461fa63e90a1be9c0a479604e3351f13feee0880f676c2b6e42bdb"} Nov 25 15:15:40 crc kubenswrapper[4806]: I1125 15:15:40.472544 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-klt6q" event={"ID":"f2488169-196d-4613-aa80-ab2e7a49bfa9","Type":"ContainerStarted","Data":"8fd30357f368712095328ee6d80738fae74ee1ddcdd92120dac4a4727d1f83c9"} Nov 25 15:15:40 crc kubenswrapper[4806]: I1125 15:15:40.490411 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bf71fa97-68bf-4b00-9072-da0445c8154b","Type":"ContainerStarted","Data":"43efd7a8bda47c914855f96cdcdcd32f0ccfa8926d83e7a29da59e36cd05751a"} Nov 25 15:15:40 crc kubenswrapper[4806]: I1125 15:15:40.493454 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fcs94" event={"ID":"1ea45747-c756-4447-b140-e6bc10188ec3","Type":"ContainerStarted","Data":"488d16663693ff36bf08ba56f9af112e7989574bba046f316154e3a2b8bf79b6"} Nov 25 15:15:40 crc kubenswrapper[4806]: I1125 15:15:40.515333 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-bqhxc" event={"ID":"a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1","Type":"ContainerStarted","Data":"26856fbbbb17a66486678883159fe82fc8417d94000dd929bd71bdf008e1a237"} Nov 25 15:15:40 crc kubenswrapper[4806]: I1125 15:15:40.546507 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-fcs94" podStartSLOduration=32.546489929 podStartE2EDuration="32.546489929s" podCreationTimestamp="2025-11-25 15:15:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:15:40.541796505 +0000 UTC m=+1373.193938916" watchObservedRunningTime="2025-11-25 15:15:40.546489929 +0000 UTC m=+1373.198632330" Nov 25 15:15:40 crc kubenswrapper[4806]: I1125 15:15:40.631228 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-77qk4"] Nov 25 15:15:41 crc kubenswrapper[4806]: I1125 15:15:41.532632 4806 generic.go:334] "Generic (PLEG): container finished" podID="19d636cf-e82d-48c3-82db-321f0505c5ab" containerID="b6285e5a61bdcaedfa8ec8b43346f5f463743a8cd5712fd9d7ac713250c01c7e" exitCode=0 Nov 25 15:15:41 crc kubenswrapper[4806]: I1125 15:15:41.532964 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-77qk4" event={"ID":"19d636cf-e82d-48c3-82db-321f0505c5ab","Type":"ContainerDied","Data":"b6285e5a61bdcaedfa8ec8b43346f5f463743a8cd5712fd9d7ac713250c01c7e"} Nov 25 15:15:41 crc 
kubenswrapper[4806]: I1125 15:15:41.533183 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-77qk4" event={"ID":"19d636cf-e82d-48c3-82db-321f0505c5ab","Type":"ContainerStarted","Data":"7727d66158f7a1e4d508b8419b502480662dfa8a4ad48f66a33aa4363a466ade"} Nov 25 15:15:41 crc kubenswrapper[4806]: I1125 15:15:41.538600 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bf71fa97-68bf-4b00-9072-da0445c8154b","Type":"ContainerStarted","Data":"6d2356a1e0452f7067c4abdc726be46b70eecf1c0c9429680163ed32225a78db"} Nov 25 15:15:41 crc kubenswrapper[4806]: I1125 15:15:41.545999 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7lfx4" event={"ID":"a2e7e600-c1a4-4bda-910b-c11fe9411cc9","Type":"ContainerStarted","Data":"bfce09d698f1f48b17a93b00e987a4e0e12f30f045ee8310782611fa29bbfac3"} Nov 25 15:15:41 crc kubenswrapper[4806]: I1125 15:15:41.551586 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f1b5c22d-b872-4857-b36c-5441ed9dfc9a","Type":"ContainerStarted","Data":"6bc22bdc8714fe00d1e4b0adedfff908e33bdf440de871cfe7e9e5d59d0fbf12"} Nov 25 15:15:41 crc kubenswrapper[4806]: I1125 15:15:41.556357 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"731faa0b-4d3c-4336-913d-e98fd4066184","Type":"ContainerStarted","Data":"d85baef3894d5f22f8358c8e7c7e6b9c324710db0ccc8ec06687f4324d8984e9"} Nov 25 15:15:41 crc kubenswrapper[4806]: I1125 15:15:41.564427 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-klt6q" event={"ID":"f2488169-196d-4613-aa80-ab2e7a49bfa9","Type":"ContainerStarted","Data":"1b8587ca085823ad4e934da0be772c6b21961e0f5d192551cd2c84f53e600fdc"} Nov 25 15:15:41 crc kubenswrapper[4806]: I1125 15:15:41.598332 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-7lfx4" podStartSLOduration=4.09356766 podStartE2EDuration="48.59827122s" podCreationTimestamp="2025-11-25 15:14:53 +0000 UTC" firstStartedPulling="2025-11-25 15:14:55.47065945 +0000 UTC m=+1328.122801871" lastFinishedPulling="2025-11-25 15:15:39.97536302 +0000 UTC m=+1372.627505431" observedRunningTime="2025-11-25 15:15:41.577849688 +0000 UTC m=+1374.229992119" watchObservedRunningTime="2025-11-25 15:15:41.59827122 +0000 UTC m=+1374.250413651" Nov 25 15:15:41 crc kubenswrapper[4806]: I1125 15:15:41.606647 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-klt6q" podStartSLOduration=16.606610177 podStartE2EDuration="16.606610177s" podCreationTimestamp="2025-11-25 15:15:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:15:41.605075094 +0000 UTC m=+1374.257217525" watchObservedRunningTime="2025-11-25 15:15:41.606610177 +0000 UTC m=+1374.258752598" Nov 25 15:15:42 crc kubenswrapper[4806]: I1125 15:15:42.577057 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"731faa0b-4d3c-4336-913d-e98fd4066184","Type":"ContainerStarted","Data":"86d73c4b7c6494308bce7e9cf1b963d86d0ee2f1c80bf5773bc7059ab1df230c"} Nov 25 15:15:42 crc kubenswrapper[4806]: I1125 15:15:42.577163 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" 
podUID="731faa0b-4d3c-4336-913d-e98fd4066184" containerName="glance-log" containerID="cri-o://d85baef3894d5f22f8358c8e7c7e6b9c324710db0ccc8ec06687f4324d8984e9" gracePeriod=30 Nov 25 15:15:42 crc kubenswrapper[4806]: I1125 15:15:42.577273 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="731faa0b-4d3c-4336-913d-e98fd4066184" containerName="glance-httpd" containerID="cri-o://86d73c4b7c6494308bce7e9cf1b963d86d0ee2f1c80bf5773bc7059ab1df230c" gracePeriod=30 Nov 25 15:15:42 crc kubenswrapper[4806]: I1125 15:15:42.582251 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bf71fa97-68bf-4b00-9072-da0445c8154b","Type":"ContainerStarted","Data":"34b9de7dd4109125c7adc345a8c724f2c93cdc60f4a70092d7b2b509c761412a"} Nov 25 15:15:42 crc kubenswrapper[4806]: I1125 15:15:42.582341 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-klt6q" Nov 25 15:15:42 crc kubenswrapper[4806]: I1125 15:15:42.582412 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="bf71fa97-68bf-4b00-9072-da0445c8154b" containerName="glance-log" containerID="cri-o://6d2356a1e0452f7067c4abdc726be46b70eecf1c0c9429680163ed32225a78db" gracePeriod=30 Nov 25 15:15:42 crc kubenswrapper[4806]: I1125 15:15:42.582499 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="bf71fa97-68bf-4b00-9072-da0445c8154b" containerName="glance-httpd" containerID="cri-o://34b9de7dd4109125c7adc345a8c724f2c93cdc60f4a70092d7b2b509c761412a" gracePeriod=30 Nov 25 15:15:42 crc kubenswrapper[4806]: I1125 15:15:42.603952 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=17.603928646 podStartE2EDuration="17.603928646s" podCreationTimestamp="2025-11-25 15:15:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:15:42.597924155 +0000 UTC m=+1375.250066566" watchObservedRunningTime="2025-11-25 15:15:42.603928646 +0000 UTC m=+1375.256071057" Nov 25 15:15:42 crc kubenswrapper[4806]: I1125 15:15:42.644810 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=17.64478742 podStartE2EDuration="17.64478742s" podCreationTimestamp="2025-11-25 15:15:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:15:42.643869783 +0000 UTC m=+1375.296012204" watchObservedRunningTime="2025-11-25 15:15:42.64478742 +0000 UTC m=+1375.296929851" Nov 25 15:15:43 crc kubenswrapper[4806]: I1125 15:15:43.598588 4806 generic.go:334] "Generic (PLEG): container finished" podID="731faa0b-4d3c-4336-913d-e98fd4066184" containerID="86d73c4b7c6494308bce7e9cf1b963d86d0ee2f1c80bf5773bc7059ab1df230c" exitCode=0 Nov 25 15:15:43 crc kubenswrapper[4806]: I1125 15:15:43.598891 4806 generic.go:334] "Generic (PLEG): container finished" podID="731faa0b-4d3c-4336-913d-e98fd4066184" containerID="d85baef3894d5f22f8358c8e7c7e6b9c324710db0ccc8ec06687f4324d8984e9" exitCode=143 Nov 25 15:15:43 crc kubenswrapper[4806]: I1125 15:15:43.598944 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"731faa0b-4d3c-4336-913d-e98fd4066184","Type":"ContainerDied","Data":"86d73c4b7c6494308bce7e9cf1b963d86d0ee2f1c80bf5773bc7059ab1df230c"} Nov 25 15:15:43 crc kubenswrapper[4806]: I1125 15:15:43.598970 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"731faa0b-4d3c-4336-913d-e98fd4066184","Type":"ContainerDied","Data":"d85baef3894d5f22f8358c8e7c7e6b9c324710db0ccc8ec06687f4324d8984e9"} Nov 25 15:15:43 crc kubenswrapper[4806]: I1125 15:15:43.603024 4806 generic.go:334] "Generic (PLEG): container finished" podID="bf71fa97-68bf-4b00-9072-da0445c8154b" containerID="34b9de7dd4109125c7adc345a8c724f2c93cdc60f4a70092d7b2b509c761412a" exitCode=0 Nov 25 15:15:43 crc kubenswrapper[4806]: I1125 15:15:43.603052 4806 generic.go:334] "Generic (PLEG): container finished" podID="bf71fa97-68bf-4b00-9072-da0445c8154b" containerID="6d2356a1e0452f7067c4abdc726be46b70eecf1c0c9429680163ed32225a78db" exitCode=143 Nov 25 15:15:43 crc kubenswrapper[4806]: I1125 15:15:43.603989 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bf71fa97-68bf-4b00-9072-da0445c8154b","Type":"ContainerDied","Data":"34b9de7dd4109125c7adc345a8c724f2c93cdc60f4a70092d7b2b509c761412a"} Nov 25 15:15:43 crc kubenswrapper[4806]: I1125 15:15:43.604066 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bf71fa97-68bf-4b00-9072-da0445c8154b","Type":"ContainerDied","Data":"6d2356a1e0452f7067c4abdc726be46b70eecf1c0c9429680163ed32225a78db"} Nov 25 15:15:44 crc kubenswrapper[4806]: I1125 15:15:44.615990 4806 generic.go:334] "Generic (PLEG): container finished" podID="1ea45747-c756-4447-b140-e6bc10188ec3" containerID="488d16663693ff36bf08ba56f9af112e7989574bba046f316154e3a2b8bf79b6" exitCode=0 Nov 25 15:15:44 crc kubenswrapper[4806]: I1125 15:15:44.616113 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fcs94" event={"ID":"1ea45747-c756-4447-b140-e6bc10188ec3","Type":"ContainerDied","Data":"488d16663693ff36bf08ba56f9af112e7989574bba046f316154e3a2b8bf79b6"} Nov 25 15:15:44 crc kubenswrapper[4806]: I1125 15:15:44.618712 4806 generic.go:334] "Generic (PLEG): container finished" podID="a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1" containerID="26856fbbbb17a66486678883159fe82fc8417d94000dd929bd71bdf008e1a237" exitCode=0 Nov 25 15:15:44 crc kubenswrapper[4806]: I1125 15:15:44.618746 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-bqhxc" event={"ID":"a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1","Type":"ContainerDied","Data":"26856fbbbb17a66486678883159fe82fc8417d94000dd929bd71bdf008e1a237"} Nov 25 15:15:45 crc kubenswrapper[4806]: I1125 15:15:45.635674 4806 generic.go:334] "Generic (PLEG): container finished" podID="08c00715-2142-4aef-ae81-16ce4c5cba4d" containerID="2868621162c88a865d5cebb0a7e16b006a8fa6ffff07a11570251357df8e94f2" exitCode=0 Nov 25 15:15:45 crc kubenswrapper[4806]: I1125 15:15:45.635899 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-n7cnj" event={"ID":"08c00715-2142-4aef-ae81-16ce4c5cba4d","Type":"ContainerDied","Data":"2868621162c88a865d5cebb0a7e16b006a8fa6ffff07a11570251357df8e94f2"} Nov 25 15:15:45 crc kubenswrapper[4806]: I1125 15:15:45.956519 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-klt6q" Nov 25 15:15:46 crc 
kubenswrapper[4806]: I1125 15:15:46.022920 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-zjmcx"] Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.023263 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58dd9ff6bc-zjmcx" podUID="7463281f-ab54-4849-861d-045b2a1a848c" containerName="dnsmasq-dns" containerID="cri-o://8392c973e8d18f1468177a6b9ac997214763d271e41a0a5fd0175e9e18464d06" gracePeriod=10 Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.037125 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.061592 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.128826 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/731faa0b-4d3c-4336-913d-e98fd4066184-config-data\") pod \"731faa0b-4d3c-4336-913d-e98fd4066184\" (UID: \"731faa0b-4d3c-4336-913d-e98fd4066184\") " Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.128942 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/731faa0b-4d3c-4336-913d-e98fd4066184-logs\") pod \"731faa0b-4d3c-4336-913d-e98fd4066184\" (UID: \"731faa0b-4d3c-4336-913d-e98fd4066184\") " Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.129016 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/731faa0b-4d3c-4336-913d-e98fd4066184-combined-ca-bundle\") pod \"731faa0b-4d3c-4336-913d-e98fd4066184\" (UID: \"731faa0b-4d3c-4336-913d-e98fd4066184\") " Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.129070 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/731faa0b-4d3c-4336-913d-e98fd4066184-scripts\") pod \"731faa0b-4d3c-4336-913d-e98fd4066184\" (UID: \"731faa0b-4d3c-4336-913d-e98fd4066184\") " Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.129131 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8b7sx\" (UniqueName: \"kubernetes.io/projected/731faa0b-4d3c-4336-913d-e98fd4066184-kube-api-access-8b7sx\") pod \"731faa0b-4d3c-4336-913d-e98fd4066184\" (UID: \"731faa0b-4d3c-4336-913d-e98fd4066184\") " Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.129258 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ebe71c6-c22e-4d9d-a3bf-252bde5cd36d\") pod \"731faa0b-4d3c-4336-913d-e98fd4066184\" (UID: \"731faa0b-4d3c-4336-913d-e98fd4066184\") " Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.129358 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/731faa0b-4d3c-4336-913d-e98fd4066184-httpd-run\") pod \"731faa0b-4d3c-4336-913d-e98fd4066184\" (UID: \"731faa0b-4d3c-4336-913d-e98fd4066184\") " Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.132510 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/731faa0b-4d3c-4336-913d-e98fd4066184-httpd-run" 
(OuterVolumeSpecName: "httpd-run") pod "731faa0b-4d3c-4336-913d-e98fd4066184" (UID: "731faa0b-4d3c-4336-913d-e98fd4066184"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.133515 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/731faa0b-4d3c-4336-913d-e98fd4066184-logs" (OuterVolumeSpecName: "logs") pod "731faa0b-4d3c-4336-913d-e98fd4066184" (UID: "731faa0b-4d3c-4336-913d-e98fd4066184"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.136810 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-fcs94" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.149077 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-bqhxc" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.155697 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/731faa0b-4d3c-4336-913d-e98fd4066184-scripts" (OuterVolumeSpecName: "scripts") pod "731faa0b-4d3c-4336-913d-e98fd4066184" (UID: "731faa0b-4d3c-4336-913d-e98fd4066184"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.166993 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/731faa0b-4d3c-4336-913d-e98fd4066184-kube-api-access-8b7sx" (OuterVolumeSpecName: "kube-api-access-8b7sx") pod "731faa0b-4d3c-4336-913d-e98fd4066184" (UID: "731faa0b-4d3c-4336-913d-e98fd4066184"). InnerVolumeSpecName "kube-api-access-8b7sx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.213712 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ebe71c6-c22e-4d9d-a3bf-252bde5cd36d" (OuterVolumeSpecName: "glance") pod "731faa0b-4d3c-4336-913d-e98fd4066184" (UID: "731faa0b-4d3c-4336-913d-e98fd4066184"). InnerVolumeSpecName "pvc-4ebe71c6-c22e-4d9d-a3bf-252bde5cd36d". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.232959 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bf71fa97-68bf-4b00-9072-da0445c8154b-httpd-run\") pod \"bf71fa97-68bf-4b00-9072-da0445c8154b\" (UID: \"bf71fa97-68bf-4b00-9072-da0445c8154b\") " Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.233394 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gst6c\" (UniqueName: \"kubernetes.io/projected/1ea45747-c756-4447-b140-e6bc10188ec3-kube-api-access-gst6c\") pod \"1ea45747-c756-4447-b140-e6bc10188ec3\" (UID: \"1ea45747-c756-4447-b140-e6bc10188ec3\") " Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.233439 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1ea45747-c756-4447-b140-e6bc10188ec3-credential-keys\") pod \"1ea45747-c756-4447-b140-e6bc10188ec3\" (UID: \"1ea45747-c756-4447-b140-e6bc10188ec3\") " Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.233471 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1ea45747-c756-4447-b140-e6bc10188ec3-fernet-keys\") pod \"1ea45747-c756-4447-b140-e6bc10188ec3\" (UID: \"1ea45747-c756-4447-b140-e6bc10188ec3\") " Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.233520 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ea45747-c756-4447-b140-e6bc10188ec3-scripts\") pod \"1ea45747-c756-4447-b140-e6bc10188ec3\" (UID: \"1ea45747-c756-4447-b140-e6bc10188ec3\") " Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.233544 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea45747-c756-4447-b140-e6bc10188ec3-combined-ca-bundle\") pod \"1ea45747-c756-4447-b140-e6bc10188ec3\" (UID: \"1ea45747-c756-4447-b140-e6bc10188ec3\") " Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.233598 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf71fa97-68bf-4b00-9072-da0445c8154b-config-data\") pod \"bf71fa97-68bf-4b00-9072-da0445c8154b\" (UID: \"bf71fa97-68bf-4b00-9072-da0445c8154b\") " Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.233619 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf71fa97-68bf-4b00-9072-da0445c8154b-scripts\") pod \"bf71fa97-68bf-4b00-9072-da0445c8154b\" (UID: \"bf71fa97-68bf-4b00-9072-da0445c8154b\") " Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.233715 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97d90b05-5a54-40f1-981b-562ae2bfc154\") pod \"bf71fa97-68bf-4b00-9072-da0445c8154b\" (UID: \"bf71fa97-68bf-4b00-9072-da0445c8154b\") " Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.233733 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf71fa97-68bf-4b00-9072-da0445c8154b-combined-ca-bundle\") pod \"bf71fa97-68bf-4b00-9072-da0445c8154b\" (UID: 
\"bf71fa97-68bf-4b00-9072-da0445c8154b\") " Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.233822 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ea45747-c756-4447-b140-e6bc10188ec3-config-data\") pod \"1ea45747-c756-4447-b140-e6bc10188ec3\" (UID: \"1ea45747-c756-4447-b140-e6bc10188ec3\") " Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.233846 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr99n\" (UniqueName: \"kubernetes.io/projected/bf71fa97-68bf-4b00-9072-da0445c8154b-kube-api-access-hr99n\") pod \"bf71fa97-68bf-4b00-9072-da0445c8154b\" (UID: \"bf71fa97-68bf-4b00-9072-da0445c8154b\") " Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.234061 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/731faa0b-4d3c-4336-913d-e98fd4066184-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "731faa0b-4d3c-4336-913d-e98fd4066184" (UID: "731faa0b-4d3c-4336-913d-e98fd4066184"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.234118 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf71fa97-68bf-4b00-9072-da0445c8154b-logs\") pod \"bf71fa97-68bf-4b00-9072-da0445c8154b\" (UID: \"bf71fa97-68bf-4b00-9072-da0445c8154b\") " Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.234694 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf71fa97-68bf-4b00-9072-da0445c8154b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "bf71fa97-68bf-4b00-9072-da0445c8154b" (UID: "bf71fa97-68bf-4b00-9072-da0445c8154b"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.234841 4806 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-4ebe71c6-c22e-4d9d-a3bf-252bde5cd36d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ebe71c6-c22e-4d9d-a3bf-252bde5cd36d\") on node \"crc\" " Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.234919 4806 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/731faa0b-4d3c-4336-913d-e98fd4066184-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.234981 4806 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/731faa0b-4d3c-4336-913d-e98fd4066184-logs\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.235116 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/731faa0b-4d3c-4336-913d-e98fd4066184-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.235173 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/731faa0b-4d3c-4336-913d-e98fd4066184-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.235224 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8b7sx\" (UniqueName: \"kubernetes.io/projected/731faa0b-4d3c-4336-913d-e98fd4066184-kube-api-access-8b7sx\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.238397 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf71fa97-68bf-4b00-9072-da0445c8154b-scripts" (OuterVolumeSpecName: "scripts") pod "bf71fa97-68bf-4b00-9072-da0445c8154b" (UID: "bf71fa97-68bf-4b00-9072-da0445c8154b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.238662 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf71fa97-68bf-4b00-9072-da0445c8154b-logs" (OuterVolumeSpecName: "logs") pod "bf71fa97-68bf-4b00-9072-da0445c8154b" (UID: "bf71fa97-68bf-4b00-9072-da0445c8154b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.241652 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf71fa97-68bf-4b00-9072-da0445c8154b-kube-api-access-hr99n" (OuterVolumeSpecName: "kube-api-access-hr99n") pod "bf71fa97-68bf-4b00-9072-da0445c8154b" (UID: "bf71fa97-68bf-4b00-9072-da0445c8154b"). InnerVolumeSpecName "kube-api-access-hr99n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.245012 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ea45747-c756-4447-b140-e6bc10188ec3-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "1ea45747-c756-4447-b140-e6bc10188ec3" (UID: "1ea45747-c756-4447-b140-e6bc10188ec3"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.256407 4806 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.269008 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ea45747-c756-4447-b140-e6bc10188ec3-kube-api-access-gst6c" (OuterVolumeSpecName: "kube-api-access-gst6c") pod "1ea45747-c756-4447-b140-e6bc10188ec3" (UID: "1ea45747-c756-4447-b140-e6bc10188ec3"). InnerVolumeSpecName "kube-api-access-gst6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.273338 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97d90b05-5a54-40f1-981b-562ae2bfc154" (OuterVolumeSpecName: "glance") pod "bf71fa97-68bf-4b00-9072-da0445c8154b" (UID: "bf71fa97-68bf-4b00-9072-da0445c8154b"). InnerVolumeSpecName "pvc-97d90b05-5a54-40f1-981b-562ae2bfc154". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.299849 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ea45747-c756-4447-b140-e6bc10188ec3-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "1ea45747-c756-4447-b140-e6bc10188ec3" (UID: "1ea45747-c756-4447-b140-e6bc10188ec3"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.301014 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ea45747-c756-4447-b140-e6bc10188ec3-scripts" (OuterVolumeSpecName: "scripts") pod "1ea45747-c756-4447-b140-e6bc10188ec3" (UID: "1ea45747-c756-4447-b140-e6bc10188ec3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.309704 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/731faa0b-4d3c-4336-913d-e98fd4066184-config-data" (OuterVolumeSpecName: "config-data") pod "731faa0b-4d3c-4336-913d-e98fd4066184" (UID: "731faa0b-4d3c-4336-913d-e98fd4066184"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.343274 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9x7js\" (UniqueName: \"kubernetes.io/projected/a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1-kube-api-access-9x7js\") pod \"a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1\" (UID: \"a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1\") " Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.343621 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1-logs\") pod \"a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1\" (UID: \"a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1\") " Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.343745 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1-scripts\") pod \"a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1\" (UID: \"a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1\") " Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.343924 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1-combined-ca-bundle\") pod \"a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1\" (UID: \"a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1\") " Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.344090 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1-config-data\") pod \"a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1\" (UID: \"a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1\") " Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.344452 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1-logs" (OuterVolumeSpecName: "logs") pod "a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1" (UID: "a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.346723 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/731faa0b-4d3c-4336-913d-e98fd4066184-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.346927 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ea45747-c756-4447-b140-e6bc10188ec3-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.347028 4806 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1-logs\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.347118 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf71fa97-68bf-4b00-9072-da0445c8154b-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.347659 4806 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-97d90b05-5a54-40f1-981b-562ae2bfc154\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97d90b05-5a54-40f1-981b-562ae2bfc154\") on node \"crc\" " Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.347734 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hr99n\" (UniqueName: \"kubernetes.io/projected/bf71fa97-68bf-4b00-9072-da0445c8154b-kube-api-access-hr99n\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.347790 4806 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf71fa97-68bf-4b00-9072-da0445c8154b-logs\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.347849 4806 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bf71fa97-68bf-4b00-9072-da0445c8154b-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.347905 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gst6c\" (UniqueName: \"kubernetes.io/projected/1ea45747-c756-4447-b140-e6bc10188ec3-kube-api-access-gst6c\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.347963 4806 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1ea45747-c756-4447-b140-e6bc10188ec3-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.348052 4806 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1ea45747-c756-4447-b140-e6bc10188ec3-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.346899 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ea45747-c756-4447-b140-e6bc10188ec3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1ea45747-c756-4447-b140-e6bc10188ec3" (UID: "1ea45747-c756-4447-b140-e6bc10188ec3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.347805 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1-scripts" (OuterVolumeSpecName: "scripts") pod "a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1" (UID: "a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.349803 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1-kube-api-access-9x7js" (OuterVolumeSpecName: "kube-api-access-9x7js") pod "a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1" (UID: "a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1"). InnerVolumeSpecName "kube-api-access-9x7js". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.360817 4806 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.360944 4806 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-4ebe71c6-c22e-4d9d-a3bf-252bde5cd36d" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ebe71c6-c22e-4d9d-a3bf-252bde5cd36d") on node "crc" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.365101 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ea45747-c756-4447-b140-e6bc10188ec3-config-data" (OuterVolumeSpecName: "config-data") pod "1ea45747-c756-4447-b140-e6bc10188ec3" (UID: "1ea45747-c756-4447-b140-e6bc10188ec3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.371240 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf71fa97-68bf-4b00-9072-da0445c8154b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf71fa97-68bf-4b00-9072-da0445c8154b" (UID: "bf71fa97-68bf-4b00-9072-da0445c8154b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.378436 4806 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.378604 4806 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-97d90b05-5a54-40f1-981b-562ae2bfc154" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97d90b05-5a54-40f1-981b-562ae2bfc154") on node "crc" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.378638 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf71fa97-68bf-4b00-9072-da0445c8154b-config-data" (OuterVolumeSpecName: "config-data") pod "bf71fa97-68bf-4b00-9072-da0445c8154b" (UID: "bf71fa97-68bf-4b00-9072-da0445c8154b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.388394 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1-config-data" (OuterVolumeSpecName: "config-data") pod "a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1" (UID: "a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.411618 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1" (UID: "a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.450536 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9x7js\" (UniqueName: \"kubernetes.io/projected/a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1-kube-api-access-9x7js\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.450578 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ea45747-c756-4447-b140-e6bc10188ec3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.450593 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf71fa97-68bf-4b00-9072-da0445c8154b-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.450693 4806 reconciler_common.go:293] "Volume detached for volume \"pvc-97d90b05-5a54-40f1-981b-562ae2bfc154\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97d90b05-5a54-40f1-981b-562ae2bfc154\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.450711 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf71fa97-68bf-4b00-9072-da0445c8154b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.450722 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.450733 4806 reconciler_common.go:293] "Volume detached for volume \"pvc-4ebe71c6-c22e-4d9d-a3bf-252bde5cd36d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ebe71c6-c22e-4d9d-a3bf-252bde5cd36d\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.450745 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ea45747-c756-4447-b140-e6bc10188ec3-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.450756 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.450768 4806 reconciler_common.go:293] "Volume detached for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.553543 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-zjmcx" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.654125 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7463281f-ab54-4849-861d-045b2a1a848c-ovsdbserver-nb\") pod \"7463281f-ab54-4849-861d-045b2a1a848c\" (UID: \"7463281f-ab54-4849-861d-045b2a1a848c\") " Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.654268 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7463281f-ab54-4849-861d-045b2a1a848c-dns-svc\") pod \"7463281f-ab54-4849-861d-045b2a1a848c\" (UID: \"7463281f-ab54-4849-861d-045b2a1a848c\") " Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.654407 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhcr5\" (UniqueName: \"kubernetes.io/projected/7463281f-ab54-4849-861d-045b2a1a848c-kube-api-access-rhcr5\") pod \"7463281f-ab54-4849-861d-045b2a1a848c\" (UID: \"7463281f-ab54-4849-861d-045b2a1a848c\") " Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.654485 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7463281f-ab54-4849-861d-045b2a1a848c-ovsdbserver-sb\") pod \"7463281f-ab54-4849-861d-045b2a1a848c\" (UID: \"7463281f-ab54-4849-861d-045b2a1a848c\") " Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.654509 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7463281f-ab54-4849-861d-045b2a1a848c-config\") pod \"7463281f-ab54-4849-861d-045b2a1a848c\" (UID: \"7463281f-ab54-4849-861d-045b2a1a848c\") " Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.654603 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7463281f-ab54-4849-861d-045b2a1a848c-dns-swift-storage-0\") pod \"7463281f-ab54-4849-861d-045b2a1a848c\" (UID: \"7463281f-ab54-4849-861d-045b2a1a848c\") " Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.662045 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7463281f-ab54-4849-861d-045b2a1a848c-kube-api-access-rhcr5" (OuterVolumeSpecName: "kube-api-access-rhcr5") pod "7463281f-ab54-4849-861d-045b2a1a848c" (UID: "7463281f-ab54-4849-861d-045b2a1a848c"). InnerVolumeSpecName "kube-api-access-rhcr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.664408 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"bf71fa97-68bf-4b00-9072-da0445c8154b","Type":"ContainerDied","Data":"43efd7a8bda47c914855f96cdcdcd32f0ccfa8926d83e7a29da59e36cd05751a"} Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.664477 4806 scope.go:117] "RemoveContainer" containerID="34b9de7dd4109125c7adc345a8c724f2c93cdc60f4a70092d7b2b509c761412a" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.664645 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.674794 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fcs94" event={"ID":"1ea45747-c756-4447-b140-e6bc10188ec3","Type":"ContainerDied","Data":"1138faf56e19a39e67f489507496efc23083b69c6124b7c016a117a0b416c70c"} Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.674827 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1138faf56e19a39e67f489507496efc23083b69c6124b7c016a117a0b416c70c" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.674896 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-fcs94" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.682675 4806 generic.go:334] "Generic (PLEG): container finished" podID="7463281f-ab54-4849-861d-045b2a1a848c" containerID="8392c973e8d18f1468177a6b9ac997214763d271e41a0a5fd0175e9e18464d06" exitCode=0 Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.682930 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-zjmcx" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.682866 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-zjmcx" event={"ID":"7463281f-ab54-4849-861d-045b2a1a848c","Type":"ContainerDied","Data":"8392c973e8d18f1468177a6b9ac997214763d271e41a0a5fd0175e9e18464d06"} Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.683015 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-zjmcx" event={"ID":"7463281f-ab54-4849-861d-045b2a1a848c","Type":"ContainerDied","Data":"84bae419bdf8c6d52d5d6f280eb48330a0161f2a9fb11ef5129985634fee7ae3"} Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.689782 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f1b5c22d-b872-4857-b36c-5441ed9dfc9a","Type":"ContainerStarted","Data":"c926f6fe5e0e6cc9b7baef017a97ed469036f928d0dd588ee8fd9c61cc2e06b3"} Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.699978 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-bqhxc" event={"ID":"a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1","Type":"ContainerDied","Data":"5a098e6d1406f661be9ed5dd7dbbeaae28df11c6f61cc2c74926594649e6f460"} Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.700847 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a098e6d1406f661be9ed5dd7dbbeaae28df11c6f61cc2c74926594649e6f460" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.699983 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-bqhxc" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.712821 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.714124 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"731faa0b-4d3c-4336-913d-e98fd4066184","Type":"ContainerDied","Data":"f3e4c7c73d07351cda4fe06f67dce1019f028b3752374b2d0da1af2e52ffa6e1"} Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.717229 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7463281f-ab54-4849-861d-045b2a1a848c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7463281f-ab54-4849-861d-045b2a1a848c" (UID: "7463281f-ab54-4849-861d-045b2a1a848c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.758283 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhcr5\" (UniqueName: \"kubernetes.io/projected/7463281f-ab54-4849-861d-045b2a1a848c-kube-api-access-rhcr5\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.758382 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7463281f-ab54-4849-861d-045b2a1a848c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.765632 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7463281f-ab54-4849-861d-045b2a1a848c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7463281f-ab54-4849-861d-045b2a1a848c" (UID: "7463281f-ab54-4849-861d-045b2a1a848c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.768884 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7463281f-ab54-4849-861d-045b2a1a848c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7463281f-ab54-4849-861d-045b2a1a848c" (UID: "7463281f-ab54-4849-861d-045b2a1a848c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.768960 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.781352 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.791628 4806 scope.go:117] "RemoveContainer" containerID="6d2356a1e0452f7067c4abdc726be46b70eecf1c0c9429680163ed32225a78db" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.799447 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 15:15:46 crc kubenswrapper[4806]: E1125 15:15:46.799989 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf71fa97-68bf-4b00-9072-da0445c8154b" containerName="glance-log" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.800011 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf71fa97-68bf-4b00-9072-da0445c8154b" containerName="glance-log" Nov 25 15:15:46 crc kubenswrapper[4806]: E1125 15:15:46.800029 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7463281f-ab54-4849-861d-045b2a1a848c" containerName="init" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.800039 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="7463281f-ab54-4849-861d-045b2a1a848c" containerName="init" Nov 25 15:15:46 crc kubenswrapper[4806]: E1125 15:15:46.800061 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="731faa0b-4d3c-4336-913d-e98fd4066184" containerName="glance-httpd" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.800068 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="731faa0b-4d3c-4336-913d-e98fd4066184" containerName="glance-httpd" Nov 25 15:15:46 crc kubenswrapper[4806]: E1125 15:15:46.800087 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1" containerName="placement-db-sync" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.800095 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1" containerName="placement-db-sync" Nov 25 15:15:46 crc kubenswrapper[4806]: E1125 15:15:46.800107 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="731faa0b-4d3c-4336-913d-e98fd4066184" containerName="glance-log" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.800116 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="731faa0b-4d3c-4336-913d-e98fd4066184" containerName="glance-log" Nov 25 15:15:46 crc kubenswrapper[4806]: E1125 15:15:46.800144 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf71fa97-68bf-4b00-9072-da0445c8154b" containerName="glance-httpd" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.800152 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf71fa97-68bf-4b00-9072-da0445c8154b" containerName="glance-httpd" Nov 25 15:15:46 crc kubenswrapper[4806]: E1125 15:15:46.800166 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7463281f-ab54-4849-861d-045b2a1a848c" containerName="dnsmasq-dns" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.800175 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="7463281f-ab54-4849-861d-045b2a1a848c" containerName="dnsmasq-dns" Nov 25 15:15:46 crc kubenswrapper[4806]: E1125 15:15:46.800203 4806 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="1ea45747-c756-4447-b140-e6bc10188ec3" containerName="keystone-bootstrap" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.800211 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ea45747-c756-4447-b140-e6bc10188ec3" containerName="keystone-bootstrap" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.800425 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf71fa97-68bf-4b00-9072-da0445c8154b" containerName="glance-log" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.800443 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="7463281f-ab54-4849-861d-045b2a1a848c" containerName="dnsmasq-dns" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.800462 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="731faa0b-4d3c-4336-913d-e98fd4066184" containerName="glance-httpd" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.800479 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ea45747-c756-4447-b140-e6bc10188ec3" containerName="keystone-bootstrap" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.800495 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1" containerName="placement-db-sync" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.800508 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="731faa0b-4d3c-4336-913d-e98fd4066184" containerName="glance-log" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.800525 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf71fa97-68bf-4b00-9072-da0445c8154b" containerName="glance-httpd" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.801659 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.805285 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-s7t8r" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.805471 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.805604 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.818791 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.822474 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.832486 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7463281f-ab54-4849-861d-045b2a1a848c-config" (OuterVolumeSpecName: "config") pod "7463281f-ab54-4849-861d-045b2a1a848c" (UID: "7463281f-ab54-4849-861d-045b2a1a848c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.832979 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7463281f-ab54-4849-861d-045b2a1a848c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7463281f-ab54-4849-861d-045b2a1a848c" (UID: "7463281f-ab54-4849-861d-045b2a1a848c"). 
InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.833805 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.847390 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.857699 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.860585 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.868496 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.870720 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.874503 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.875258 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7463281f-ab54-4849-861d-045b2a1a848c-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.875285 4806 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7463281f-ab54-4849-861d-045b2a1a848c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.875295 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7463281f-ab54-4849-861d-045b2a1a848c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.875303 4806 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7463281f-ab54-4849-861d-045b2a1a848c-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.888446 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6c84b48b46-vlp89"] Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.890060 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6c84b48b46-vlp89" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.894192 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-8486684b84-snnmc"] Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.895643 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-8486684b84-snnmc" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.896670 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.896859 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.897002 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.897128 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.897246 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-8vrnm" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.902188 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-nmg8l" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.902610 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.902681 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.902798 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.902823 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.902997 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.935985 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6c84b48b46-vlp89"] Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.950936 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-8486684b84-snnmc"] Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.965354 4806 scope.go:117] "RemoveContainer" containerID="8392c973e8d18f1468177a6b9ac997214763d271e41a0a5fd0175e9e18464d06" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.976836 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-97d90b05-5a54-40f1-981b-562ae2bfc154\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97d90b05-5a54-40f1-981b-562ae2bfc154\") pod \"glance-default-external-api-0\" (UID: \"359539be-7a7d-48d3-8738-83765f897fa4\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.976870 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/359539be-7a7d-48d3-8738-83765f897fa4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"359539be-7a7d-48d3-8738-83765f897fa4\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.976895 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/359539be-7a7d-48d3-8738-83765f897fa4-config-data\") pod 
\"glance-default-external-api-0\" (UID: \"359539be-7a7d-48d3-8738-83765f897fa4\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.976914 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/359539be-7a7d-48d3-8738-83765f897fa4-logs\") pod \"glance-default-external-api-0\" (UID: \"359539be-7a7d-48d3-8738-83765f897fa4\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.976935 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a56466e-77fd-43df-b5a6-234d90b66334-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2a56466e-77fd-43df-b5a6-234d90b66334\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.976957 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4ebe71c6-c22e-4d9d-a3bf-252bde5cd36d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ebe71c6-c22e-4d9d-a3bf-252bde5cd36d\") pod \"glance-default-internal-api-0\" (UID: \"2a56466e-77fd-43df-b5a6-234d90b66334\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.977240 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/359539be-7a7d-48d3-8738-83765f897fa4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"359539be-7a7d-48d3-8738-83765f897fa4\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.977294 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5-internal-tls-certs\") pod \"keystone-8486684b84-snnmc\" (UID: \"73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5\") " pod="openstack/keystone-8486684b84-snnmc" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.977349 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrxwd\" (UniqueName: \"kubernetes.io/projected/2a56466e-77fd-43df-b5a6-234d90b66334-kube-api-access-lrxwd\") pod \"glance-default-internal-api-0\" (UID: \"2a56466e-77fd-43df-b5a6-234d90b66334\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.977424 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkfw4\" (UniqueName: \"kubernetes.io/projected/73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5-kube-api-access-mkfw4\") pod \"keystone-8486684b84-snnmc\" (UID: \"73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5\") " pod="openstack/keystone-8486684b84-snnmc" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.977461 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fac79279-6dad-4f14-8e06-4d705d8f552d-scripts\") pod \"placement-6c84b48b46-vlp89\" (UID: \"fac79279-6dad-4f14-8e06-4d705d8f552d\") " pod="openstack/placement-6c84b48b46-vlp89" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.977486 4806 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5-fernet-keys\") pod \"keystone-8486684b84-snnmc\" (UID: \"73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5\") " pod="openstack/keystone-8486684b84-snnmc" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.977582 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a56466e-77fd-43df-b5a6-234d90b66334-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2a56466e-77fd-43df-b5a6-234d90b66334\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.977609 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5-public-tls-certs\") pod \"keystone-8486684b84-snnmc\" (UID: \"73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5\") " pod="openstack/keystone-8486684b84-snnmc" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.977628 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fac79279-6dad-4f14-8e06-4d705d8f552d-config-data\") pod \"placement-6c84b48b46-vlp89\" (UID: \"fac79279-6dad-4f14-8e06-4d705d8f552d\") " pod="openstack/placement-6c84b48b46-vlp89" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.977653 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5-combined-ca-bundle\") pod \"keystone-8486684b84-snnmc\" (UID: \"73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5\") " pod="openstack/keystone-8486684b84-snnmc" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.977684 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fac79279-6dad-4f14-8e06-4d705d8f552d-combined-ca-bundle\") pod \"placement-6c84b48b46-vlp89\" (UID: \"fac79279-6dad-4f14-8e06-4d705d8f552d\") " pod="openstack/placement-6c84b48b46-vlp89" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.977725 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a56466e-77fd-43df-b5a6-234d90b66334-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2a56466e-77fd-43df-b5a6-234d90b66334\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.977742 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a56466e-77fd-43df-b5a6-234d90b66334-logs\") pod \"glance-default-internal-api-0\" (UID: \"2a56466e-77fd-43df-b5a6-234d90b66334\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.977781 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5-scripts\") pod \"keystone-8486684b84-snnmc\" (UID: \"73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5\") " pod="openstack/keystone-8486684b84-snnmc" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.977803 4806 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fac79279-6dad-4f14-8e06-4d705d8f552d-public-tls-certs\") pod \"placement-6c84b48b46-vlp89\" (UID: \"fac79279-6dad-4f14-8e06-4d705d8f552d\") " pod="openstack/placement-6c84b48b46-vlp89" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.977843 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fac79279-6dad-4f14-8e06-4d705d8f552d-logs\") pod \"placement-6c84b48b46-vlp89\" (UID: \"fac79279-6dad-4f14-8e06-4d705d8f552d\") " pod="openstack/placement-6c84b48b46-vlp89" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.977906 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/359539be-7a7d-48d3-8738-83765f897fa4-scripts\") pod \"glance-default-external-api-0\" (UID: \"359539be-7a7d-48d3-8738-83765f897fa4\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.977931 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fac79279-6dad-4f14-8e06-4d705d8f552d-internal-tls-certs\") pod \"placement-6c84b48b46-vlp89\" (UID: \"fac79279-6dad-4f14-8e06-4d705d8f552d\") " pod="openstack/placement-6c84b48b46-vlp89" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.977960 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5-credential-keys\") pod \"keystone-8486684b84-snnmc\" (UID: \"73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5\") " pod="openstack/keystone-8486684b84-snnmc" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.977992 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a56466e-77fd-43df-b5a6-234d90b66334-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2a56466e-77fd-43df-b5a6-234d90b66334\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.978166 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2a56466e-77fd-43df-b5a6-234d90b66334-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2a56466e-77fd-43df-b5a6-234d90b66334\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.978336 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/359539be-7a7d-48d3-8738-83765f897fa4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"359539be-7a7d-48d3-8738-83765f897fa4\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.978399 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n42kt\" (UniqueName: \"kubernetes.io/projected/359539be-7a7d-48d3-8738-83765f897fa4-kube-api-access-n42kt\") pod \"glance-default-external-api-0\" (UID: \"359539be-7a7d-48d3-8738-83765f897fa4\") " 
pod="openstack/glance-default-external-api-0" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.979327 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5-config-data\") pod \"keystone-8486684b84-snnmc\" (UID: \"73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5\") " pod="openstack/keystone-8486684b84-snnmc" Nov 25 15:15:46 crc kubenswrapper[4806]: I1125 15:15:46.983328 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt4pj\" (UniqueName: \"kubernetes.io/projected/fac79279-6dad-4f14-8e06-4d705d8f552d-kube-api-access-rt4pj\") pod \"placement-6c84b48b46-vlp89\" (UID: \"fac79279-6dad-4f14-8e06-4d705d8f552d\") " pod="openstack/placement-6c84b48b46-vlp89" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.026228 4806 scope.go:117] "RemoveContainer" containerID="45973edd1134e290169ffc8244fd5ef4a5d10ffa8983b04ced0174cd9c2ebfae" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.046131 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-zjmcx"] Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.054926 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-zjmcx"] Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.080585 4806 scope.go:117] "RemoveContainer" containerID="8392c973e8d18f1468177a6b9ac997214763d271e41a0a5fd0175e9e18464d06" Nov 25 15:15:47 crc kubenswrapper[4806]: E1125 15:15:47.081073 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8392c973e8d18f1468177a6b9ac997214763d271e41a0a5fd0175e9e18464d06\": container with ID starting with 8392c973e8d18f1468177a6b9ac997214763d271e41a0a5fd0175e9e18464d06 not found: ID does not exist" containerID="8392c973e8d18f1468177a6b9ac997214763d271e41a0a5fd0175e9e18464d06" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.081101 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8392c973e8d18f1468177a6b9ac997214763d271e41a0a5fd0175e9e18464d06"} err="failed to get container status \"8392c973e8d18f1468177a6b9ac997214763d271e41a0a5fd0175e9e18464d06\": rpc error: code = NotFound desc = could not find container \"8392c973e8d18f1468177a6b9ac997214763d271e41a0a5fd0175e9e18464d06\": container with ID starting with 8392c973e8d18f1468177a6b9ac997214763d271e41a0a5fd0175e9e18464d06 not found: ID does not exist" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.081122 4806 scope.go:117] "RemoveContainer" containerID="45973edd1134e290169ffc8244fd5ef4a5d10ffa8983b04ced0174cd9c2ebfae" Nov 25 15:15:47 crc kubenswrapper[4806]: E1125 15:15:47.081583 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45973edd1134e290169ffc8244fd5ef4a5d10ffa8983b04ced0174cd9c2ebfae\": container with ID starting with 45973edd1134e290169ffc8244fd5ef4a5d10ffa8983b04ced0174cd9c2ebfae not found: ID does not exist" containerID="45973edd1134e290169ffc8244fd5ef4a5d10ffa8983b04ced0174cd9c2ebfae" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.081606 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45973edd1134e290169ffc8244fd5ef4a5d10ffa8983b04ced0174cd9c2ebfae"} err="failed to get container status 
\"45973edd1134e290169ffc8244fd5ef4a5d10ffa8983b04ced0174cd9c2ebfae\": rpc error: code = NotFound desc = could not find container \"45973edd1134e290169ffc8244fd5ef4a5d10ffa8983b04ced0174cd9c2ebfae\": container with ID starting with 45973edd1134e290169ffc8244fd5ef4a5d10ffa8983b04ced0174cd9c2ebfae not found: ID does not exist" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.081619 4806 scope.go:117] "RemoveContainer" containerID="86d73c4b7c6494308bce7e9cf1b963d86d0ee2f1c80bf5773bc7059ab1df230c" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.088395 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2a56466e-77fd-43df-b5a6-234d90b66334-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2a56466e-77fd-43df-b5a6-234d90b66334\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.088544 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/359539be-7a7d-48d3-8738-83765f897fa4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"359539be-7a7d-48d3-8738-83765f897fa4\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.088589 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n42kt\" (UniqueName: \"kubernetes.io/projected/359539be-7a7d-48d3-8738-83765f897fa4-kube-api-access-n42kt\") pod \"glance-default-external-api-0\" (UID: \"359539be-7a7d-48d3-8738-83765f897fa4\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.088661 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5-config-data\") pod \"keystone-8486684b84-snnmc\" (UID: \"73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5\") " pod="openstack/keystone-8486684b84-snnmc" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.088719 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rt4pj\" (UniqueName: \"kubernetes.io/projected/fac79279-6dad-4f14-8e06-4d705d8f552d-kube-api-access-rt4pj\") pod \"placement-6c84b48b46-vlp89\" (UID: \"fac79279-6dad-4f14-8e06-4d705d8f552d\") " pod="openstack/placement-6c84b48b46-vlp89" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.088762 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-97d90b05-5a54-40f1-981b-562ae2bfc154\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97d90b05-5a54-40f1-981b-562ae2bfc154\") pod \"glance-default-external-api-0\" (UID: \"359539be-7a7d-48d3-8738-83765f897fa4\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.088821 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/359539be-7a7d-48d3-8738-83765f897fa4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"359539be-7a7d-48d3-8738-83765f897fa4\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.088905 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/359539be-7a7d-48d3-8738-83765f897fa4-config-data\") pod 
\"glance-default-external-api-0\" (UID: \"359539be-7a7d-48d3-8738-83765f897fa4\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.089021 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/359539be-7a7d-48d3-8738-83765f897fa4-logs\") pod \"glance-default-external-api-0\" (UID: \"359539be-7a7d-48d3-8738-83765f897fa4\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.089114 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a56466e-77fd-43df-b5a6-234d90b66334-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2a56466e-77fd-43df-b5a6-234d90b66334\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.089184 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4ebe71c6-c22e-4d9d-a3bf-252bde5cd36d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ebe71c6-c22e-4d9d-a3bf-252bde5cd36d\") pod \"glance-default-internal-api-0\" (UID: \"2a56466e-77fd-43df-b5a6-234d90b66334\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.089245 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/359539be-7a7d-48d3-8738-83765f897fa4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"359539be-7a7d-48d3-8738-83765f897fa4\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.089291 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5-internal-tls-certs\") pod \"keystone-8486684b84-snnmc\" (UID: \"73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5\") " pod="openstack/keystone-8486684b84-snnmc" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.089368 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrxwd\" (UniqueName: \"kubernetes.io/projected/2a56466e-77fd-43df-b5a6-234d90b66334-kube-api-access-lrxwd\") pod \"glance-default-internal-api-0\" (UID: \"2a56466e-77fd-43df-b5a6-234d90b66334\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.089468 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkfw4\" (UniqueName: \"kubernetes.io/projected/73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5-kube-api-access-mkfw4\") pod \"keystone-8486684b84-snnmc\" (UID: \"73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5\") " pod="openstack/keystone-8486684b84-snnmc" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.089530 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fac79279-6dad-4f14-8e06-4d705d8f552d-scripts\") pod \"placement-6c84b48b46-vlp89\" (UID: \"fac79279-6dad-4f14-8e06-4d705d8f552d\") " pod="openstack/placement-6c84b48b46-vlp89" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.089557 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5-fernet-keys\") pod 
\"keystone-8486684b84-snnmc\" (UID: \"73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5\") " pod="openstack/keystone-8486684b84-snnmc" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.089710 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a56466e-77fd-43df-b5a6-234d90b66334-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2a56466e-77fd-43df-b5a6-234d90b66334\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.089768 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5-public-tls-certs\") pod \"keystone-8486684b84-snnmc\" (UID: \"73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5\") " pod="openstack/keystone-8486684b84-snnmc" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.089800 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fac79279-6dad-4f14-8e06-4d705d8f552d-config-data\") pod \"placement-6c84b48b46-vlp89\" (UID: \"fac79279-6dad-4f14-8e06-4d705d8f552d\") " pod="openstack/placement-6c84b48b46-vlp89" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.089853 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5-combined-ca-bundle\") pod \"keystone-8486684b84-snnmc\" (UID: \"73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5\") " pod="openstack/keystone-8486684b84-snnmc" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.089869 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2a56466e-77fd-43df-b5a6-234d90b66334-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2a56466e-77fd-43df-b5a6-234d90b66334\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.089890 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fac79279-6dad-4f14-8e06-4d705d8f552d-combined-ca-bundle\") pod \"placement-6c84b48b46-vlp89\" (UID: \"fac79279-6dad-4f14-8e06-4d705d8f552d\") " pod="openstack/placement-6c84b48b46-vlp89" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.089948 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a56466e-77fd-43df-b5a6-234d90b66334-logs\") pod \"glance-default-internal-api-0\" (UID: \"2a56466e-77fd-43df-b5a6-234d90b66334\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.089973 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a56466e-77fd-43df-b5a6-234d90b66334-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2a56466e-77fd-43df-b5a6-234d90b66334\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.090028 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5-scripts\") pod \"keystone-8486684b84-snnmc\" (UID: \"73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5\") " pod="openstack/keystone-8486684b84-snnmc" Nov 25 15:15:47 crc 
kubenswrapper[4806]: I1125 15:15:47.090053 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fac79279-6dad-4f14-8e06-4d705d8f552d-public-tls-certs\") pod \"placement-6c84b48b46-vlp89\" (UID: \"fac79279-6dad-4f14-8e06-4d705d8f552d\") " pod="openstack/placement-6c84b48b46-vlp89" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.090103 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fac79279-6dad-4f14-8e06-4d705d8f552d-logs\") pod \"placement-6c84b48b46-vlp89\" (UID: \"fac79279-6dad-4f14-8e06-4d705d8f552d\") " pod="openstack/placement-6c84b48b46-vlp89" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.090163 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/359539be-7a7d-48d3-8738-83765f897fa4-scripts\") pod \"glance-default-external-api-0\" (UID: \"359539be-7a7d-48d3-8738-83765f897fa4\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.090180 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fac79279-6dad-4f14-8e06-4d705d8f552d-internal-tls-certs\") pod \"placement-6c84b48b46-vlp89\" (UID: \"fac79279-6dad-4f14-8e06-4d705d8f552d\") " pod="openstack/placement-6c84b48b46-vlp89" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.090203 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5-credential-keys\") pod \"keystone-8486684b84-snnmc\" (UID: \"73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5\") " pod="openstack/keystone-8486684b84-snnmc" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.090234 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a56466e-77fd-43df-b5a6-234d90b66334-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2a56466e-77fd-43df-b5a6-234d90b66334\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.092487 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/359539be-7a7d-48d3-8738-83765f897fa4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"359539be-7a7d-48d3-8738-83765f897fa4\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.095626 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/359539be-7a7d-48d3-8738-83765f897fa4-logs\") pod \"glance-default-external-api-0\" (UID: \"359539be-7a7d-48d3-8738-83765f897fa4\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.095681 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5-config-data\") pod \"keystone-8486684b84-snnmc\" (UID: \"73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5\") " pod="openstack/keystone-8486684b84-snnmc" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.099918 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/fac79279-6dad-4f14-8e06-4d705d8f552d-logs\") pod \"placement-6c84b48b46-vlp89\" (UID: \"fac79279-6dad-4f14-8e06-4d705d8f552d\") " pod="openstack/placement-6c84b48b46-vlp89" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.100873 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fac79279-6dad-4f14-8e06-4d705d8f552d-combined-ca-bundle\") pod \"placement-6c84b48b46-vlp89\" (UID: \"fac79279-6dad-4f14-8e06-4d705d8f552d\") " pod="openstack/placement-6c84b48b46-vlp89" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.101582 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a56466e-77fd-43df-b5a6-234d90b66334-logs\") pod \"glance-default-internal-api-0\" (UID: \"2a56466e-77fd-43df-b5a6-234d90b66334\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.101946 4806 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.101972 4806 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-97d90b05-5a54-40f1-981b-562ae2bfc154\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97d90b05-5a54-40f1-981b-562ae2bfc154\") pod \"glance-default-external-api-0\" (UID: \"359539be-7a7d-48d3-8738-83765f897fa4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b0d2c8bd947cd04e33b263736a5e66dc40906178a29bfc8a7e651131070b0df8/globalmount\"" pod="openstack/glance-default-external-api-0" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.110105 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fac79279-6dad-4f14-8e06-4d705d8f552d-internal-tls-certs\") pod \"placement-6c84b48b46-vlp89\" (UID: \"fac79279-6dad-4f14-8e06-4d705d8f552d\") " pod="openstack/placement-6c84b48b46-vlp89" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.110457 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt4pj\" (UniqueName: \"kubernetes.io/projected/fac79279-6dad-4f14-8e06-4d705d8f552d-kube-api-access-rt4pj\") pod \"placement-6c84b48b46-vlp89\" (UID: \"fac79279-6dad-4f14-8e06-4d705d8f552d\") " pod="openstack/placement-6c84b48b46-vlp89" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.110980 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fac79279-6dad-4f14-8e06-4d705d8f552d-config-data\") pod \"placement-6c84b48b46-vlp89\" (UID: \"fac79279-6dad-4f14-8e06-4d705d8f552d\") " pod="openstack/placement-6c84b48b46-vlp89" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.111469 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5-credential-keys\") pod \"keystone-8486684b84-snnmc\" (UID: \"73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5\") " pod="openstack/keystone-8486684b84-snnmc" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.111735 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5-public-tls-certs\") pod \"keystone-8486684b84-snnmc\" (UID: 
\"73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5\") " pod="openstack/keystone-8486684b84-snnmc" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.112126 4806 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.112243 4806 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4ebe71c6-c22e-4d9d-a3bf-252bde5cd36d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ebe71c6-c22e-4d9d-a3bf-252bde5cd36d\") pod \"glance-default-internal-api-0\" (UID: \"2a56466e-77fd-43df-b5a6-234d90b66334\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8638b1ae13d11aa578ec8268990588ab56d879a16e582695b5a3249a11d12f4b/globalmount\"" pod="openstack/glance-default-internal-api-0" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.114635 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5-scripts\") pod \"keystone-8486684b84-snnmc\" (UID: \"73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5\") " pod="openstack/keystone-8486684b84-snnmc" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.115123 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5-fernet-keys\") pod \"keystone-8486684b84-snnmc\" (UID: \"73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5\") " pod="openstack/keystone-8486684b84-snnmc" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.117349 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a56466e-77fd-43df-b5a6-234d90b66334-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2a56466e-77fd-43df-b5a6-234d90b66334\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.119483 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/359539be-7a7d-48d3-8738-83765f897fa4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"359539be-7a7d-48d3-8738-83765f897fa4\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.120195 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/359539be-7a7d-48d3-8738-83765f897fa4-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"359539be-7a7d-48d3-8738-83765f897fa4\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.120205 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5-combined-ca-bundle\") pod \"keystone-8486684b84-snnmc\" (UID: \"73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5\") " pod="openstack/keystone-8486684b84-snnmc" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.121281 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fac79279-6dad-4f14-8e06-4d705d8f552d-public-tls-certs\") pod \"placement-6c84b48b46-vlp89\" (UID: \"fac79279-6dad-4f14-8e06-4d705d8f552d\") " pod="openstack/placement-6c84b48b46-vlp89" Nov 25 15:15:47 crc kubenswrapper[4806]: 
I1125 15:15:47.121346 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/359539be-7a7d-48d3-8738-83765f897fa4-config-data\") pod \"glance-default-external-api-0\" (UID: \"359539be-7a7d-48d3-8738-83765f897fa4\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.133775 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n42kt\" (UniqueName: \"kubernetes.io/projected/359539be-7a7d-48d3-8738-83765f897fa4-kube-api-access-n42kt\") pod \"glance-default-external-api-0\" (UID: \"359539be-7a7d-48d3-8738-83765f897fa4\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.134151 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fac79279-6dad-4f14-8e06-4d705d8f552d-scripts\") pod \"placement-6c84b48b46-vlp89\" (UID: \"fac79279-6dad-4f14-8e06-4d705d8f552d\") " pod="openstack/placement-6c84b48b46-vlp89" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.134747 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5-internal-tls-certs\") pod \"keystone-8486684b84-snnmc\" (UID: \"73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5\") " pod="openstack/keystone-8486684b84-snnmc" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.135246 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/359539be-7a7d-48d3-8738-83765f897fa4-scripts\") pod \"glance-default-external-api-0\" (UID: \"359539be-7a7d-48d3-8738-83765f897fa4\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.138171 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a56466e-77fd-43df-b5a6-234d90b66334-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2a56466e-77fd-43df-b5a6-234d90b66334\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.138808 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a56466e-77fd-43df-b5a6-234d90b66334-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2a56466e-77fd-43df-b5a6-234d90b66334\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.143147 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkfw4\" (UniqueName: \"kubernetes.io/projected/73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5-kube-api-access-mkfw4\") pod \"keystone-8486684b84-snnmc\" (UID: \"73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5\") " pod="openstack/keystone-8486684b84-snnmc" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.147650 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a56466e-77fd-43df-b5a6-234d90b66334-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2a56466e-77fd-43df-b5a6-234d90b66334\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.169780 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrxwd\" (UniqueName: 
\"kubernetes.io/projected/2a56466e-77fd-43df-b5a6-234d90b66334-kube-api-access-lrxwd\") pod \"glance-default-internal-api-0\" (UID: \"2a56466e-77fd-43df-b5a6-234d90b66334\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.182192 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-97d90b05-5a54-40f1-981b-562ae2bfc154\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97d90b05-5a54-40f1-981b-562ae2bfc154\") pod \"glance-default-external-api-0\" (UID: \"359539be-7a7d-48d3-8738-83765f897fa4\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.234303 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4ebe71c6-c22e-4d9d-a3bf-252bde5cd36d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ebe71c6-c22e-4d9d-a3bf-252bde5cd36d\") pod \"glance-default-internal-api-0\" (UID: \"2a56466e-77fd-43df-b5a6-234d90b66334\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.250632 4806 scope.go:117] "RemoveContainer" containerID="d85baef3894d5f22f8358c8e7c7e6b9c324710db0ccc8ec06687f4324d8984e9" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.251054 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.278035 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.279537 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-n7cnj" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.298949 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6c84b48b46-vlp89" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.321656 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-8486684b84-snnmc" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.397426 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08c00715-2142-4aef-ae81-16ce4c5cba4d-combined-ca-bundle\") pod \"08c00715-2142-4aef-ae81-16ce4c5cba4d\" (UID: \"08c00715-2142-4aef-ae81-16ce4c5cba4d\") " Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.397550 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/08c00715-2142-4aef-ae81-16ce4c5cba4d-db-sync-config-data\") pod \"08c00715-2142-4aef-ae81-16ce4c5cba4d\" (UID: \"08c00715-2142-4aef-ae81-16ce4c5cba4d\") " Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.397641 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmg27\" (UniqueName: \"kubernetes.io/projected/08c00715-2142-4aef-ae81-16ce4c5cba4d-kube-api-access-nmg27\") pod \"08c00715-2142-4aef-ae81-16ce4c5cba4d\" (UID: \"08c00715-2142-4aef-ae81-16ce4c5cba4d\") " Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.406375 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08c00715-2142-4aef-ae81-16ce4c5cba4d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "08c00715-2142-4aef-ae81-16ce4c5cba4d" (UID: "08c00715-2142-4aef-ae81-16ce4c5cba4d"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.406460 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08c00715-2142-4aef-ae81-16ce4c5cba4d-kube-api-access-nmg27" (OuterVolumeSpecName: "kube-api-access-nmg27") pod "08c00715-2142-4aef-ae81-16ce4c5cba4d" (UID: "08c00715-2142-4aef-ae81-16ce4c5cba4d"). InnerVolumeSpecName "kube-api-access-nmg27". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.437421 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08c00715-2142-4aef-ae81-16ce4c5cba4d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08c00715-2142-4aef-ae81-16ce4c5cba4d" (UID: "08c00715-2142-4aef-ae81-16ce4c5cba4d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.500235 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08c00715-2142-4aef-ae81-16ce4c5cba4d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.500619 4806 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/08c00715-2142-4aef-ae81-16ce4c5cba4d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.500651 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmg27\" (UniqueName: \"kubernetes.io/projected/08c00715-2142-4aef-ae81-16ce4c5cba4d-kube-api-access-nmg27\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.727241 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-n7cnj" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.727859 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-n7cnj" event={"ID":"08c00715-2142-4aef-ae81-16ce4c5cba4d","Type":"ContainerDied","Data":"7926d9558a9cb1051bd34810b8ec00767fe819ba02188c8fe90e280733436516"} Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.727885 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7926d9558a9cb1051bd34810b8ec00767fe819ba02188c8fe90e280733436516" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.896389 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6c84b48b46-vlp89"] Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.952681 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-fc7bb5d48-xzkml"] Nov 25 15:15:47 crc kubenswrapper[4806]: E1125 15:15:47.954091 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08c00715-2142-4aef-ae81-16ce4c5cba4d" containerName="barbican-db-sync" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.954353 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="08c00715-2142-4aef-ae81-16ce4c5cba4d" containerName="barbican-db-sync" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.954632 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="08c00715-2142-4aef-ae81-16ce4c5cba4d" containerName="barbican-db-sync" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.955733 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-fc7bb5d48-xzkml" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.971452 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-trp2w" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.971772 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 25 15:15:47 crc kubenswrapper[4806]: I1125 15:15:47.971973 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.001137 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-66468c84c9-dpswk"] Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.004805 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-66468c84c9-dpswk" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.010097 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.026506 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/322cf975-d195-44f0-b652-909080e6c2f2-config-data\") pod \"barbican-keystone-listener-fc7bb5d48-xzkml\" (UID: \"322cf975-d195-44f0-b652-909080e6c2f2\") " pod="openstack/barbican-keystone-listener-fc7bb5d48-xzkml" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.026601 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/322cf975-d195-44f0-b652-909080e6c2f2-config-data-custom\") pod \"barbican-keystone-listener-fc7bb5d48-xzkml\" (UID: \"322cf975-d195-44f0-b652-909080e6c2f2\") " pod="openstack/barbican-keystone-listener-fc7bb5d48-xzkml" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.026652 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zln5h\" (UniqueName: \"kubernetes.io/projected/322cf975-d195-44f0-b652-909080e6c2f2-kube-api-access-zln5h\") pod \"barbican-keystone-listener-fc7bb5d48-xzkml\" (UID: \"322cf975-d195-44f0-b652-909080e6c2f2\") " pod="openstack/barbican-keystone-listener-fc7bb5d48-xzkml" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.026686 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9cc24510-0ee6-451a-ae1e-6c057d860972-config-data-custom\") pod \"barbican-worker-66468c84c9-dpswk\" (UID: \"9cc24510-0ee6-451a-ae1e-6c057d860972\") " pod="openstack/barbican-worker-66468c84c9-dpswk" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.026760 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9cc24510-0ee6-451a-ae1e-6c057d860972-logs\") pod \"barbican-worker-66468c84c9-dpswk\" (UID: \"9cc24510-0ee6-451a-ae1e-6c057d860972\") " pod="openstack/barbican-worker-66468c84c9-dpswk" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.026815 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cc24510-0ee6-451a-ae1e-6c057d860972-combined-ca-bundle\") pod \"barbican-worker-66468c84c9-dpswk\" (UID: \"9cc24510-0ee6-451a-ae1e-6c057d860972\") " pod="openstack/barbican-worker-66468c84c9-dpswk" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.026893 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cc24510-0ee6-451a-ae1e-6c057d860972-config-data\") pod \"barbican-worker-66468c84c9-dpswk\" (UID: \"9cc24510-0ee6-451a-ae1e-6c057d860972\") " pod="openstack/barbican-worker-66468c84c9-dpswk" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.027002 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7274r\" (UniqueName: \"kubernetes.io/projected/9cc24510-0ee6-451a-ae1e-6c057d860972-kube-api-access-7274r\") pod \"barbican-worker-66468c84c9-dpswk\" (UID: 
\"9cc24510-0ee6-451a-ae1e-6c057d860972\") " pod="openstack/barbican-worker-66468c84c9-dpswk" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.027017 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/322cf975-d195-44f0-b652-909080e6c2f2-logs\") pod \"barbican-keystone-listener-fc7bb5d48-xzkml\" (UID: \"322cf975-d195-44f0-b652-909080e6c2f2\") " pod="openstack/barbican-keystone-listener-fc7bb5d48-xzkml" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.027090 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/322cf975-d195-44f0-b652-909080e6c2f2-combined-ca-bundle\") pod \"barbican-keystone-listener-fc7bb5d48-xzkml\" (UID: \"322cf975-d195-44f0-b652-909080e6c2f2\") " pod="openstack/barbican-keystone-listener-fc7bb5d48-xzkml" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.068814 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-fc7bb5d48-xzkml"] Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.128821 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cc24510-0ee6-451a-ae1e-6c057d860972-combined-ca-bundle\") pod \"barbican-worker-66468c84c9-dpswk\" (UID: \"9cc24510-0ee6-451a-ae1e-6c057d860972\") " pod="openstack/barbican-worker-66468c84c9-dpswk" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.128875 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cc24510-0ee6-451a-ae1e-6c057d860972-config-data\") pod \"barbican-worker-66468c84c9-dpswk\" (UID: \"9cc24510-0ee6-451a-ae1e-6c057d860972\") " pod="openstack/barbican-worker-66468c84c9-dpswk" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.128942 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7274r\" (UniqueName: \"kubernetes.io/projected/9cc24510-0ee6-451a-ae1e-6c057d860972-kube-api-access-7274r\") pod \"barbican-worker-66468c84c9-dpswk\" (UID: \"9cc24510-0ee6-451a-ae1e-6c057d860972\") " pod="openstack/barbican-worker-66468c84c9-dpswk" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.128959 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/322cf975-d195-44f0-b652-909080e6c2f2-logs\") pod \"barbican-keystone-listener-fc7bb5d48-xzkml\" (UID: \"322cf975-d195-44f0-b652-909080e6c2f2\") " pod="openstack/barbican-keystone-listener-fc7bb5d48-xzkml" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.129011 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/322cf975-d195-44f0-b652-909080e6c2f2-combined-ca-bundle\") pod \"barbican-keystone-listener-fc7bb5d48-xzkml\" (UID: \"322cf975-d195-44f0-b652-909080e6c2f2\") " pod="openstack/barbican-keystone-listener-fc7bb5d48-xzkml" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.129063 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/322cf975-d195-44f0-b652-909080e6c2f2-config-data\") pod \"barbican-keystone-listener-fc7bb5d48-xzkml\" (UID: \"322cf975-d195-44f0-b652-909080e6c2f2\") " 
pod="openstack/barbican-keystone-listener-fc7bb5d48-xzkml" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.129090 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/322cf975-d195-44f0-b652-909080e6c2f2-config-data-custom\") pod \"barbican-keystone-listener-fc7bb5d48-xzkml\" (UID: \"322cf975-d195-44f0-b652-909080e6c2f2\") " pod="openstack/barbican-keystone-listener-fc7bb5d48-xzkml" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.129114 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zln5h\" (UniqueName: \"kubernetes.io/projected/322cf975-d195-44f0-b652-909080e6c2f2-kube-api-access-zln5h\") pod \"barbican-keystone-listener-fc7bb5d48-xzkml\" (UID: \"322cf975-d195-44f0-b652-909080e6c2f2\") " pod="openstack/barbican-keystone-listener-fc7bb5d48-xzkml" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.129131 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9cc24510-0ee6-451a-ae1e-6c057d860972-config-data-custom\") pod \"barbican-worker-66468c84c9-dpswk\" (UID: \"9cc24510-0ee6-451a-ae1e-6c057d860972\") " pod="openstack/barbican-worker-66468c84c9-dpswk" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.129183 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9cc24510-0ee6-451a-ae1e-6c057d860972-logs\") pod \"barbican-worker-66468c84c9-dpswk\" (UID: \"9cc24510-0ee6-451a-ae1e-6c057d860972\") " pod="openstack/barbican-worker-66468c84c9-dpswk" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.129692 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9cc24510-0ee6-451a-ae1e-6c057d860972-logs\") pod \"barbican-worker-66468c84c9-dpswk\" (UID: \"9cc24510-0ee6-451a-ae1e-6c057d860972\") " pod="openstack/barbican-worker-66468c84c9-dpswk" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.145448 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cc24510-0ee6-451a-ae1e-6c057d860972-combined-ca-bundle\") pod \"barbican-worker-66468c84c9-dpswk\" (UID: \"9cc24510-0ee6-451a-ae1e-6c057d860972\") " pod="openstack/barbican-worker-66468c84c9-dpswk" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.148053 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/322cf975-d195-44f0-b652-909080e6c2f2-combined-ca-bundle\") pod \"barbican-keystone-listener-fc7bb5d48-xzkml\" (UID: \"322cf975-d195-44f0-b652-909080e6c2f2\") " pod="openstack/barbican-keystone-listener-fc7bb5d48-xzkml" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.149288 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.149498 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.151456 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/322cf975-d195-44f0-b652-909080e6c2f2-logs\") pod \"barbican-keystone-listener-fc7bb5d48-xzkml\" (UID: \"322cf975-d195-44f0-b652-909080e6c2f2\") " 
pod="openstack/barbican-keystone-listener-fc7bb5d48-xzkml" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.152037 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9cc24510-0ee6-451a-ae1e-6c057d860972-config-data-custom\") pod \"barbican-worker-66468c84c9-dpswk\" (UID: \"9cc24510-0ee6-451a-ae1e-6c057d860972\") " pod="openstack/barbican-worker-66468c84c9-dpswk" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.157035 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/322cf975-d195-44f0-b652-909080e6c2f2-config-data-custom\") pod \"barbican-keystone-listener-fc7bb5d48-xzkml\" (UID: \"322cf975-d195-44f0-b652-909080e6c2f2\") " pod="openstack/barbican-keystone-listener-fc7bb5d48-xzkml" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.166110 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="731faa0b-4d3c-4336-913d-e98fd4066184" path="/var/lib/kubelet/pods/731faa0b-4d3c-4336-913d-e98fd4066184/volumes" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.167147 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7463281f-ab54-4849-861d-045b2a1a848c" path="/var/lib/kubelet/pods/7463281f-ab54-4849-861d-045b2a1a848c/volumes" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.168391 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf71fa97-68bf-4b00-9072-da0445c8154b" path="/var/lib/kubelet/pods/bf71fa97-68bf-4b00-9072-da0445c8154b/volumes" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.169298 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-66468c84c9-dpswk"] Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.169339 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.171060 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cc24510-0ee6-451a-ae1e-6c057d860972-config-data\") pod \"barbican-worker-66468c84c9-dpswk\" (UID: \"9cc24510-0ee6-451a-ae1e-6c057d860972\") " pod="openstack/barbican-worker-66468c84c9-dpswk" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.172800 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zln5h\" (UniqueName: \"kubernetes.io/projected/322cf975-d195-44f0-b652-909080e6c2f2-kube-api-access-zln5h\") pod \"barbican-keystone-listener-fc7bb5d48-xzkml\" (UID: \"322cf975-d195-44f0-b652-909080e6c2f2\") " pod="openstack/barbican-keystone-listener-fc7bb5d48-xzkml" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.174865 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/322cf975-d195-44f0-b652-909080e6c2f2-config-data\") pod \"barbican-keystone-listener-fc7bb5d48-xzkml\" (UID: \"322cf975-d195-44f0-b652-909080e6c2f2\") " pod="openstack/barbican-keystone-listener-fc7bb5d48-xzkml" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.189048 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7274r\" (UniqueName: \"kubernetes.io/projected/9cc24510-0ee6-451a-ae1e-6c057d860972-kube-api-access-7274r\") pod \"barbican-worker-66468c84c9-dpswk\" (UID: \"9cc24510-0ee6-451a-ae1e-6c057d860972\") " 
pod="openstack/barbican-worker-66468c84c9-dpswk" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.189136 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-8486684b84-snnmc"] Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.229034 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-kn4bd"] Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.230777 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-kn4bd" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.236698 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-trp2w" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.245817 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-fc7bb5d48-xzkml" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.259379 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-66468c84c9-dpswk" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.280126 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-kn4bd"] Nov 25 15:15:48 crc kubenswrapper[4806]: W1125 15:15:48.319785 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a56466e_77fd_43df_b5a6_234d90b66334.slice/crio-cc8c784811b31d841420dbef79f03539a9a3aa70948395363a5e6518654c0fa6 WatchSource:0}: Error finding container cc8c784811b31d841420dbef79f03539a9a3aa70948395363a5e6518654c0fa6: Status 404 returned error can't find the container with id cc8c784811b31d841420dbef79f03539a9a3aa70948395363a5e6518654c0fa6 Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.358455 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00377b85-158d-4a45-8a3c-a65220b87590-ovsdbserver-nb\") pod \"dnsmasq-dns-586bdc5f9-kn4bd\" (UID: \"00377b85-158d-4a45-8a3c-a65220b87590\") " pod="openstack/dnsmasq-dns-586bdc5f9-kn4bd" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.358777 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00377b85-158d-4a45-8a3c-a65220b87590-ovsdbserver-sb\") pod \"dnsmasq-dns-586bdc5f9-kn4bd\" (UID: \"00377b85-158d-4a45-8a3c-a65220b87590\") " pod="openstack/dnsmasq-dns-586bdc5f9-kn4bd" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.358889 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00377b85-158d-4a45-8a3c-a65220b87590-dns-svc\") pod \"dnsmasq-dns-586bdc5f9-kn4bd\" (UID: \"00377b85-158d-4a45-8a3c-a65220b87590\") " pod="openstack/dnsmasq-dns-586bdc5f9-kn4bd" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.358998 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00377b85-158d-4a45-8a3c-a65220b87590-config\") pod \"dnsmasq-dns-586bdc5f9-kn4bd\" (UID: \"00377b85-158d-4a45-8a3c-a65220b87590\") " pod="openstack/dnsmasq-dns-586bdc5f9-kn4bd" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.359070 4806 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/00377b85-158d-4a45-8a3c-a65220b87590-dns-swift-storage-0\") pod \"dnsmasq-dns-586bdc5f9-kn4bd\" (UID: \"00377b85-158d-4a45-8a3c-a65220b87590\") " pod="openstack/dnsmasq-dns-586bdc5f9-kn4bd" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.359204 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mnwb\" (UniqueName: \"kubernetes.io/projected/00377b85-158d-4a45-8a3c-a65220b87590-kube-api-access-5mnwb\") pod \"dnsmasq-dns-586bdc5f9-kn4bd\" (UID: \"00377b85-158d-4a45-8a3c-a65220b87590\") " pod="openstack/dnsmasq-dns-586bdc5f9-kn4bd" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.368380 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.398357 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-bc4cd6f78-4rzjr"] Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.400216 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-bc4cd6f78-4rzjr" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.412718 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.419218 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-bc4cd6f78-4rzjr"] Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.469402 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9-logs\") pod \"barbican-api-bc4cd6f78-4rzjr\" (UID: \"f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9\") " pod="openstack/barbican-api-bc4cd6f78-4rzjr" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.469471 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9-combined-ca-bundle\") pod \"barbican-api-bc4cd6f78-4rzjr\" (UID: \"f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9\") " pod="openstack/barbican-api-bc4cd6f78-4rzjr" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.469564 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mnwb\" (UniqueName: \"kubernetes.io/projected/00377b85-158d-4a45-8a3c-a65220b87590-kube-api-access-5mnwb\") pod \"dnsmasq-dns-586bdc5f9-kn4bd\" (UID: \"00377b85-158d-4a45-8a3c-a65220b87590\") " pod="openstack/dnsmasq-dns-586bdc5f9-kn4bd" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.469652 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9-config-data\") pod \"barbican-api-bc4cd6f78-4rzjr\" (UID: \"f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9\") " pod="openstack/barbican-api-bc4cd6f78-4rzjr" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.469739 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00377b85-158d-4a45-8a3c-a65220b87590-ovsdbserver-nb\") pod \"dnsmasq-dns-586bdc5f9-kn4bd\" (UID: \"00377b85-158d-4a45-8a3c-a65220b87590\") " 
pod="openstack/dnsmasq-dns-586bdc5f9-kn4bd" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.469792 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00377b85-158d-4a45-8a3c-a65220b87590-ovsdbserver-sb\") pod \"dnsmasq-dns-586bdc5f9-kn4bd\" (UID: \"00377b85-158d-4a45-8a3c-a65220b87590\") " pod="openstack/dnsmasq-dns-586bdc5f9-kn4bd" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.469812 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00377b85-158d-4a45-8a3c-a65220b87590-dns-svc\") pod \"dnsmasq-dns-586bdc5f9-kn4bd\" (UID: \"00377b85-158d-4a45-8a3c-a65220b87590\") " pod="openstack/dnsmasq-dns-586bdc5f9-kn4bd" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.469864 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9-config-data-custom\") pod \"barbican-api-bc4cd6f78-4rzjr\" (UID: \"f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9\") " pod="openstack/barbican-api-bc4cd6f78-4rzjr" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.469892 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00377b85-158d-4a45-8a3c-a65220b87590-config\") pod \"dnsmasq-dns-586bdc5f9-kn4bd\" (UID: \"00377b85-158d-4a45-8a3c-a65220b87590\") " pod="openstack/dnsmasq-dns-586bdc5f9-kn4bd" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.469932 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/00377b85-158d-4a45-8a3c-a65220b87590-dns-swift-storage-0\") pod \"dnsmasq-dns-586bdc5f9-kn4bd\" (UID: \"00377b85-158d-4a45-8a3c-a65220b87590\") " pod="openstack/dnsmasq-dns-586bdc5f9-kn4bd" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.469950 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws9vm\" (UniqueName: \"kubernetes.io/projected/f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9-kube-api-access-ws9vm\") pod \"barbican-api-bc4cd6f78-4rzjr\" (UID: \"f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9\") " pod="openstack/barbican-api-bc4cd6f78-4rzjr" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.472858 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00377b85-158d-4a45-8a3c-a65220b87590-ovsdbserver-nb\") pod \"dnsmasq-dns-586bdc5f9-kn4bd\" (UID: \"00377b85-158d-4a45-8a3c-a65220b87590\") " pod="openstack/dnsmasq-dns-586bdc5f9-kn4bd" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.473299 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00377b85-158d-4a45-8a3c-a65220b87590-ovsdbserver-sb\") pod \"dnsmasq-dns-586bdc5f9-kn4bd\" (UID: \"00377b85-158d-4a45-8a3c-a65220b87590\") " pod="openstack/dnsmasq-dns-586bdc5f9-kn4bd" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.473651 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00377b85-158d-4a45-8a3c-a65220b87590-config\") pod \"dnsmasq-dns-586bdc5f9-kn4bd\" (UID: \"00377b85-158d-4a45-8a3c-a65220b87590\") " pod="openstack/dnsmasq-dns-586bdc5f9-kn4bd" Nov 25 15:15:48 crc 
kubenswrapper[4806]: I1125 15:15:48.473981 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00377b85-158d-4a45-8a3c-a65220b87590-dns-svc\") pod \"dnsmasq-dns-586bdc5f9-kn4bd\" (UID: \"00377b85-158d-4a45-8a3c-a65220b87590\") " pod="openstack/dnsmasq-dns-586bdc5f9-kn4bd" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.474409 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/00377b85-158d-4a45-8a3c-a65220b87590-dns-swift-storage-0\") pod \"dnsmasq-dns-586bdc5f9-kn4bd\" (UID: \"00377b85-158d-4a45-8a3c-a65220b87590\") " pod="openstack/dnsmasq-dns-586bdc5f9-kn4bd" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.505287 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mnwb\" (UniqueName: \"kubernetes.io/projected/00377b85-158d-4a45-8a3c-a65220b87590-kube-api-access-5mnwb\") pod \"dnsmasq-dns-586bdc5f9-kn4bd\" (UID: \"00377b85-158d-4a45-8a3c-a65220b87590\") " pod="openstack/dnsmasq-dns-586bdc5f9-kn4bd" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.574010 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9-config-data-custom\") pod \"barbican-api-bc4cd6f78-4rzjr\" (UID: \"f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9\") " pod="openstack/barbican-api-bc4cd6f78-4rzjr" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.574090 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws9vm\" (UniqueName: \"kubernetes.io/projected/f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9-kube-api-access-ws9vm\") pod \"barbican-api-bc4cd6f78-4rzjr\" (UID: \"f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9\") " pod="openstack/barbican-api-bc4cd6f78-4rzjr" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.574120 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9-logs\") pod \"barbican-api-bc4cd6f78-4rzjr\" (UID: \"f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9\") " pod="openstack/barbican-api-bc4cd6f78-4rzjr" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.574161 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9-combined-ca-bundle\") pod \"barbican-api-bc4cd6f78-4rzjr\" (UID: \"f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9\") " pod="openstack/barbican-api-bc4cd6f78-4rzjr" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.574235 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9-config-data\") pod \"barbican-api-bc4cd6f78-4rzjr\" (UID: \"f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9\") " pod="openstack/barbican-api-bc4cd6f78-4rzjr" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.588408 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9-logs\") pod \"barbican-api-bc4cd6f78-4rzjr\" (UID: \"f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9\") " pod="openstack/barbican-api-bc4cd6f78-4rzjr" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.599657 4806 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-ws9vm\" (UniqueName: \"kubernetes.io/projected/f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9-kube-api-access-ws9vm\") pod \"barbican-api-bc4cd6f78-4rzjr\" (UID: \"f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9\") " pod="openstack/barbican-api-bc4cd6f78-4rzjr" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.631445 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9-config-data\") pod \"barbican-api-bc4cd6f78-4rzjr\" (UID: \"f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9\") " pod="openstack/barbican-api-bc4cd6f78-4rzjr" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.635044 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-kn4bd" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.636209 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9-config-data-custom\") pod \"barbican-api-bc4cd6f78-4rzjr\" (UID: \"f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9\") " pod="openstack/barbican-api-bc4cd6f78-4rzjr" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.639671 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9-combined-ca-bundle\") pod \"barbican-api-bc4cd6f78-4rzjr\" (UID: \"f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9\") " pod="openstack/barbican-api-bc4cd6f78-4rzjr" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.750942 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-bc4cd6f78-4rzjr" Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.773687 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c84b48b46-vlp89" event={"ID":"fac79279-6dad-4f14-8e06-4d705d8f552d","Type":"ContainerStarted","Data":"2a0b02b48f858e3553ed00f7d61a945ba75026bab8718238bf81d390dba30bf1"} Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.803603 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"359539be-7a7d-48d3-8738-83765f897fa4","Type":"ContainerStarted","Data":"6b00c17877626d4d35056df13dad56d31c74d1317e1457240700ddf84cc0ac2c"} Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.810038 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2a56466e-77fd-43df-b5a6-234d90b66334","Type":"ContainerStarted","Data":"cc8c784811b31d841420dbef79f03539a9a3aa70948395363a5e6518654c0fa6"} Nov 25 15:15:48 crc kubenswrapper[4806]: I1125 15:15:48.815681 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8486684b84-snnmc" event={"ID":"73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5","Type":"ContainerStarted","Data":"cf2f3b52fcc00f37120423bae9c5bf0498b89d4e2e8c2d50d18a4a098065b52e"} Nov 25 15:15:49 crc kubenswrapper[4806]: I1125 15:15:49.185572 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-fc7bb5d48-xzkml"] Nov 25 15:15:49 crc kubenswrapper[4806]: I1125 15:15:49.207871 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-66468c84c9-dpswk"] Nov 25 15:15:49 crc kubenswrapper[4806]: I1125 15:15:49.512530 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-kn4bd"] Nov 25 
15:15:49 crc kubenswrapper[4806]: I1125 15:15:49.529730 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-bc4cd6f78-4rzjr"] Nov 25 15:15:49 crc kubenswrapper[4806]: W1125 15:15:49.532393 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3d0aed9_5cf7_4eb1_9df2_1c2b42a526e9.slice/crio-ec5dfd9c4d49d9530649880e20ff151d2bf49e7d5955fdda1672585faed0d66d WatchSource:0}: Error finding container ec5dfd9c4d49d9530649880e20ff151d2bf49e7d5955fdda1672585faed0d66d: Status 404 returned error can't find the container with id ec5dfd9c4d49d9530649880e20ff151d2bf49e7d5955fdda1672585faed0d66d Nov 25 15:15:49 crc kubenswrapper[4806]: W1125 15:15:49.536733 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00377b85_158d_4a45_8a3c_a65220b87590.slice/crio-5d0a24980fe9fee0fd4a5255119306da8a36c297c5170cc8aaa93738a907b6c9 WatchSource:0}: Error finding container 5d0a24980fe9fee0fd4a5255119306da8a36c297c5170cc8aaa93738a907b6c9: Status 404 returned error can't find the container with id 5d0a24980fe9fee0fd4a5255119306da8a36c297c5170cc8aaa93738a907b6c9 Nov 25 15:15:49 crc kubenswrapper[4806]: I1125 15:15:49.836441 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-kn4bd" event={"ID":"00377b85-158d-4a45-8a3c-a65220b87590","Type":"ContainerStarted","Data":"5d0a24980fe9fee0fd4a5255119306da8a36c297c5170cc8aaa93738a907b6c9"} Nov 25 15:15:49 crc kubenswrapper[4806]: I1125 15:15:49.838473 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-fc7bb5d48-xzkml" event={"ID":"322cf975-d195-44f0-b652-909080e6c2f2","Type":"ContainerStarted","Data":"4048bef4cc1b0c87f5fa12673a19ba39828a68586d286b17b2da9c1d21c8ecad"} Nov 25 15:15:49 crc kubenswrapper[4806]: I1125 15:15:49.841127 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-bc4cd6f78-4rzjr" event={"ID":"f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9","Type":"ContainerStarted","Data":"ec5dfd9c4d49d9530649880e20ff151d2bf49e7d5955fdda1672585faed0d66d"} Nov 25 15:15:49 crc kubenswrapper[4806]: I1125 15:15:49.842943 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8486684b84-snnmc" event={"ID":"73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5","Type":"ContainerStarted","Data":"cb2a9f0121923ae230c64b2f36332e5460719ae8806c88fce550c0c6d3b8b676"} Nov 25 15:15:49 crc kubenswrapper[4806]: I1125 15:15:49.846466 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c84b48b46-vlp89" event={"ID":"fac79279-6dad-4f14-8e06-4d705d8f552d","Type":"ContainerStarted","Data":"4b7b8b1dbaef3ca477bc6b15461c389a9f27ccfbd3b0d7185e1f9dce3060ed22"} Nov 25 15:15:49 crc kubenswrapper[4806]: I1125 15:15:49.852828 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-66468c84c9-dpswk" event={"ID":"9cc24510-0ee6-451a-ae1e-6c057d860972","Type":"ContainerStarted","Data":"85784db0020e2e64d6d4d321cd73ebd5e55b521c09c2f8db7415110386967109"} Nov 25 15:15:51 crc kubenswrapper[4806]: I1125 15:15:51.074683 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5b5fbf57f8-jxhqp"] Nov 25 15:15:51 crc kubenswrapper[4806]: I1125 15:15:51.076976 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5b5fbf57f8-jxhqp" Nov 25 15:15:51 crc kubenswrapper[4806]: I1125 15:15:51.079676 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 25 15:15:51 crc kubenswrapper[4806]: I1125 15:15:51.086431 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 25 15:15:51 crc kubenswrapper[4806]: I1125 15:15:51.102129 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5b5fbf57f8-jxhqp"] Nov 25 15:15:51 crc kubenswrapper[4806]: I1125 15:15:51.273169 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81-config-data-custom\") pod \"barbican-api-5b5fbf57f8-jxhqp\" (UID: \"cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81\") " pod="openstack/barbican-api-5b5fbf57f8-jxhqp" Nov 25 15:15:51 crc kubenswrapper[4806]: I1125 15:15:51.273228 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8cjk\" (UniqueName: \"kubernetes.io/projected/cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81-kube-api-access-w8cjk\") pod \"barbican-api-5b5fbf57f8-jxhqp\" (UID: \"cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81\") " pod="openstack/barbican-api-5b5fbf57f8-jxhqp" Nov 25 15:15:51 crc kubenswrapper[4806]: I1125 15:15:51.273247 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81-logs\") pod \"barbican-api-5b5fbf57f8-jxhqp\" (UID: \"cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81\") " pod="openstack/barbican-api-5b5fbf57f8-jxhqp" Nov 25 15:15:51 crc kubenswrapper[4806]: I1125 15:15:51.273783 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81-public-tls-certs\") pod \"barbican-api-5b5fbf57f8-jxhqp\" (UID: \"cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81\") " pod="openstack/barbican-api-5b5fbf57f8-jxhqp" Nov 25 15:15:51 crc kubenswrapper[4806]: I1125 15:15:51.273822 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81-internal-tls-certs\") pod \"barbican-api-5b5fbf57f8-jxhqp\" (UID: \"cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81\") " pod="openstack/barbican-api-5b5fbf57f8-jxhqp" Nov 25 15:15:51 crc kubenswrapper[4806]: I1125 15:15:51.273952 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81-config-data\") pod \"barbican-api-5b5fbf57f8-jxhqp\" (UID: \"cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81\") " pod="openstack/barbican-api-5b5fbf57f8-jxhqp" Nov 25 15:15:51 crc kubenswrapper[4806]: I1125 15:15:51.273971 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81-combined-ca-bundle\") pod \"barbican-api-5b5fbf57f8-jxhqp\" (UID: \"cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81\") " pod="openstack/barbican-api-5b5fbf57f8-jxhqp" Nov 25 15:15:51 crc kubenswrapper[4806]: I1125 15:15:51.375803 4806 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81-config-data\") pod \"barbican-api-5b5fbf57f8-jxhqp\" (UID: \"cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81\") " pod="openstack/barbican-api-5b5fbf57f8-jxhqp" Nov 25 15:15:51 crc kubenswrapper[4806]: I1125 15:15:51.375848 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81-combined-ca-bundle\") pod \"barbican-api-5b5fbf57f8-jxhqp\" (UID: \"cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81\") " pod="openstack/barbican-api-5b5fbf57f8-jxhqp" Nov 25 15:15:51 crc kubenswrapper[4806]: I1125 15:15:51.375891 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81-config-data-custom\") pod \"barbican-api-5b5fbf57f8-jxhqp\" (UID: \"cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81\") " pod="openstack/barbican-api-5b5fbf57f8-jxhqp" Nov 25 15:15:51 crc kubenswrapper[4806]: I1125 15:15:51.375914 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8cjk\" (UniqueName: \"kubernetes.io/projected/cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81-kube-api-access-w8cjk\") pod \"barbican-api-5b5fbf57f8-jxhqp\" (UID: \"cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81\") " pod="openstack/barbican-api-5b5fbf57f8-jxhqp" Nov 25 15:15:51 crc kubenswrapper[4806]: I1125 15:15:51.377093 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81-logs\") pod \"barbican-api-5b5fbf57f8-jxhqp\" (UID: \"cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81\") " pod="openstack/barbican-api-5b5fbf57f8-jxhqp" Nov 25 15:15:51 crc kubenswrapper[4806]: I1125 15:15:51.377282 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81-public-tls-certs\") pod \"barbican-api-5b5fbf57f8-jxhqp\" (UID: \"cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81\") " pod="openstack/barbican-api-5b5fbf57f8-jxhqp" Nov 25 15:15:51 crc kubenswrapper[4806]: I1125 15:15:51.377302 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81-internal-tls-certs\") pod \"barbican-api-5b5fbf57f8-jxhqp\" (UID: \"cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81\") " pod="openstack/barbican-api-5b5fbf57f8-jxhqp" Nov 25 15:15:51 crc kubenswrapper[4806]: I1125 15:15:51.381723 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81-logs\") pod \"barbican-api-5b5fbf57f8-jxhqp\" (UID: \"cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81\") " pod="openstack/barbican-api-5b5fbf57f8-jxhqp" Nov 25 15:15:51 crc kubenswrapper[4806]: I1125 15:15:51.382238 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81-config-data-custom\") pod \"barbican-api-5b5fbf57f8-jxhqp\" (UID: \"cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81\") " pod="openstack/barbican-api-5b5fbf57f8-jxhqp" Nov 25 15:15:51 crc kubenswrapper[4806]: I1125 15:15:51.382655 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81-combined-ca-bundle\") pod \"barbican-api-5b5fbf57f8-jxhqp\" (UID: \"cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81\") " pod="openstack/barbican-api-5b5fbf57f8-jxhqp" Nov 25 15:15:51 crc kubenswrapper[4806]: I1125 15:15:51.382781 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81-config-data\") pod \"barbican-api-5b5fbf57f8-jxhqp\" (UID: \"cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81\") " pod="openstack/barbican-api-5b5fbf57f8-jxhqp" Nov 25 15:15:51 crc kubenswrapper[4806]: I1125 15:15:51.383070 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81-internal-tls-certs\") pod \"barbican-api-5b5fbf57f8-jxhqp\" (UID: \"cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81\") " pod="openstack/barbican-api-5b5fbf57f8-jxhqp" Nov 25 15:15:51 crc kubenswrapper[4806]: I1125 15:15:51.386875 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81-public-tls-certs\") pod \"barbican-api-5b5fbf57f8-jxhqp\" (UID: \"cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81\") " pod="openstack/barbican-api-5b5fbf57f8-jxhqp" Nov 25 15:15:51 crc kubenswrapper[4806]: I1125 15:15:51.399458 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8cjk\" (UniqueName: \"kubernetes.io/projected/cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81-kube-api-access-w8cjk\") pod \"barbican-api-5b5fbf57f8-jxhqp\" (UID: \"cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81\") " pod="openstack/barbican-api-5b5fbf57f8-jxhqp" Nov 25 15:15:51 crc kubenswrapper[4806]: I1125 15:15:51.698743 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5b5fbf57f8-jxhqp" Nov 25 15:15:51 crc kubenswrapper[4806]: I1125 15:15:51.901158 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"359539be-7a7d-48d3-8738-83765f897fa4","Type":"ContainerStarted","Data":"e98a613094a0823be37da0b1e6741b26dddee757216a105b12e0ee17f23a1186"} Nov 25 15:15:51 crc kubenswrapper[4806]: I1125 15:15:51.903540 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2a56466e-77fd-43df-b5a6-234d90b66334","Type":"ContainerStarted","Data":"18234c61d10b2a578b0e7f73ce15bc055485de86ee76ee24627bafff6d25fa84"} Nov 25 15:15:51 crc kubenswrapper[4806]: I1125 15:15:51.908610 4806 generic.go:334] "Generic (PLEG): container finished" podID="00377b85-158d-4a45-8a3c-a65220b87590" containerID="a9668425e75d3667cdfb237e388c1d254051e3b4cdc3e4b2f126b14da05b8476" exitCode=0 Nov 25 15:15:51 crc kubenswrapper[4806]: I1125 15:15:51.908715 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-kn4bd" event={"ID":"00377b85-158d-4a45-8a3c-a65220b87590","Type":"ContainerDied","Data":"a9668425e75d3667cdfb237e388c1d254051e3b4cdc3e4b2f126b14da05b8476"} Nov 25 15:15:51 crc kubenswrapper[4806]: I1125 15:15:51.910718 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-bc4cd6f78-4rzjr" event={"ID":"f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9","Type":"ContainerStarted","Data":"b1b70a597ba84d4e4a1ca0c891dfc9390ed5a69ef5642c86be36e3ce9f73ad6d"} Nov 25 15:15:51 crc kubenswrapper[4806]: I1125 15:15:51.915300 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6c84b48b46-vlp89" event={"ID":"fac79279-6dad-4f14-8e06-4d705d8f552d","Type":"ContainerStarted","Data":"31b613f382afa1649a1df91f1a30da7dc39d6c8d205b43acdc6bb596041f3c31"} Nov 25 15:15:51 crc kubenswrapper[4806]: I1125 15:15:51.916080 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-8486684b84-snnmc" Nov 25 15:15:51 crc kubenswrapper[4806]: I1125 15:15:51.918007 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6c84b48b46-vlp89" Nov 25 15:15:51 crc kubenswrapper[4806]: I1125 15:15:51.918146 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6c84b48b46-vlp89" Nov 25 15:15:51 crc kubenswrapper[4806]: I1125 15:15:51.961406 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6c84b48b46-vlp89" podStartSLOduration=5.961381696 podStartE2EDuration="5.961381696s" podCreationTimestamp="2025-11-25 15:15:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:15:51.957000072 +0000 UTC m=+1384.609142513" watchObservedRunningTime="2025-11-25 15:15:51.961381696 +0000 UTC m=+1384.613524107" Nov 25 15:15:51 crc kubenswrapper[4806]: I1125 15:15:51.981325 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-8486684b84-snnmc" podStartSLOduration=5.981288353 podStartE2EDuration="5.981288353s" podCreationTimestamp="2025-11-25 15:15:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:15:51.979133842 +0000 UTC m=+1384.631276273" watchObservedRunningTime="2025-11-25 15:15:51.981288353 +0000 UTC 
m=+1384.633430764" Nov 25 15:15:54 crc kubenswrapper[4806]: I1125 15:15:54.949224 4806 generic.go:334] "Generic (PLEG): container finished" podID="a2e7e600-c1a4-4bda-910b-c11fe9411cc9" containerID="bfce09d698f1f48b17a93b00e987a4e0e12f30f045ee8310782611fa29bbfac3" exitCode=0 Nov 25 15:15:54 crc kubenswrapper[4806]: I1125 15:15:54.949347 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7lfx4" event={"ID":"a2e7e600-c1a4-4bda-910b-c11fe9411cc9","Type":"ContainerDied","Data":"bfce09d698f1f48b17a93b00e987a4e0e12f30f045ee8310782611fa29bbfac3"} Nov 25 15:16:06 crc kubenswrapper[4806]: I1125 15:16:06.302468 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7lfx4" Nov 25 15:16:06 crc kubenswrapper[4806]: I1125 15:16:06.416770 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2h2c\" (UniqueName: \"kubernetes.io/projected/a2e7e600-c1a4-4bda-910b-c11fe9411cc9-kube-api-access-z2h2c\") pod \"a2e7e600-c1a4-4bda-910b-c11fe9411cc9\" (UID: \"a2e7e600-c1a4-4bda-910b-c11fe9411cc9\") " Nov 25 15:16:06 crc kubenswrapper[4806]: I1125 15:16:06.417012 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2e7e600-c1a4-4bda-910b-c11fe9411cc9-scripts\") pod \"a2e7e600-c1a4-4bda-910b-c11fe9411cc9\" (UID: \"a2e7e600-c1a4-4bda-910b-c11fe9411cc9\") " Nov 25 15:16:06 crc kubenswrapper[4806]: I1125 15:16:06.417119 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2e7e600-c1a4-4bda-910b-c11fe9411cc9-config-data\") pod \"a2e7e600-c1a4-4bda-910b-c11fe9411cc9\" (UID: \"a2e7e600-c1a4-4bda-910b-c11fe9411cc9\") " Nov 25 15:16:06 crc kubenswrapper[4806]: I1125 15:16:06.417168 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a2e7e600-c1a4-4bda-910b-c11fe9411cc9-db-sync-config-data\") pod \"a2e7e600-c1a4-4bda-910b-c11fe9411cc9\" (UID: \"a2e7e600-c1a4-4bda-910b-c11fe9411cc9\") " Nov 25 15:16:06 crc kubenswrapper[4806]: I1125 15:16:06.417199 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2e7e600-c1a4-4bda-910b-c11fe9411cc9-combined-ca-bundle\") pod \"a2e7e600-c1a4-4bda-910b-c11fe9411cc9\" (UID: \"a2e7e600-c1a4-4bda-910b-c11fe9411cc9\") " Nov 25 15:16:06 crc kubenswrapper[4806]: I1125 15:16:06.417220 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a2e7e600-c1a4-4bda-910b-c11fe9411cc9-etc-machine-id\") pod \"a2e7e600-c1a4-4bda-910b-c11fe9411cc9\" (UID: \"a2e7e600-c1a4-4bda-910b-c11fe9411cc9\") " Nov 25 15:16:06 crc kubenswrapper[4806]: I1125 15:16:06.417682 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2e7e600-c1a4-4bda-910b-c11fe9411cc9-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a2e7e600-c1a4-4bda-910b-c11fe9411cc9" (UID: "a2e7e600-c1a4-4bda-910b-c11fe9411cc9"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:16:06 crc kubenswrapper[4806]: I1125 15:16:06.423544 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2e7e600-c1a4-4bda-910b-c11fe9411cc9-scripts" (OuterVolumeSpecName: "scripts") pod "a2e7e600-c1a4-4bda-910b-c11fe9411cc9" (UID: "a2e7e600-c1a4-4bda-910b-c11fe9411cc9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:06 crc kubenswrapper[4806]: I1125 15:16:06.424144 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2e7e600-c1a4-4bda-910b-c11fe9411cc9-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a2e7e600-c1a4-4bda-910b-c11fe9411cc9" (UID: "a2e7e600-c1a4-4bda-910b-c11fe9411cc9"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:06 crc kubenswrapper[4806]: I1125 15:16:06.442061 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2e7e600-c1a4-4bda-910b-c11fe9411cc9-kube-api-access-z2h2c" (OuterVolumeSpecName: "kube-api-access-z2h2c") pod "a2e7e600-c1a4-4bda-910b-c11fe9411cc9" (UID: "a2e7e600-c1a4-4bda-910b-c11fe9411cc9"). InnerVolumeSpecName "kube-api-access-z2h2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:16:06 crc kubenswrapper[4806]: I1125 15:16:06.486723 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2e7e600-c1a4-4bda-910b-c11fe9411cc9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2e7e600-c1a4-4bda-910b-c11fe9411cc9" (UID: "a2e7e600-c1a4-4bda-910b-c11fe9411cc9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:06 crc kubenswrapper[4806]: I1125 15:16:06.492026 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2e7e600-c1a4-4bda-910b-c11fe9411cc9-config-data" (OuterVolumeSpecName: "config-data") pod "a2e7e600-c1a4-4bda-910b-c11fe9411cc9" (UID: "a2e7e600-c1a4-4bda-910b-c11fe9411cc9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:06 crc kubenswrapper[4806]: I1125 15:16:06.519845 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2e7e600-c1a4-4bda-910b-c11fe9411cc9-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:06 crc kubenswrapper[4806]: I1125 15:16:06.519883 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2e7e600-c1a4-4bda-910b-c11fe9411cc9-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:06 crc kubenswrapper[4806]: I1125 15:16:06.519896 4806 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a2e7e600-c1a4-4bda-910b-c11fe9411cc9-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:06 crc kubenswrapper[4806]: I1125 15:16:06.519909 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2e7e600-c1a4-4bda-910b-c11fe9411cc9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:06 crc kubenswrapper[4806]: I1125 15:16:06.519920 4806 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a2e7e600-c1a4-4bda-910b-c11fe9411cc9-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:06 crc kubenswrapper[4806]: I1125 15:16:06.519930 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2h2c\" (UniqueName: \"kubernetes.io/projected/a2e7e600-c1a4-4bda-910b-c11fe9411cc9-kube-api-access-z2h2c\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:06 crc kubenswrapper[4806]: E1125 15:16:06.859930 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Nov 25 15:16:06 crc kubenswrapper[4806]: E1125 15:16:06.860556 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gwkpf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(f1b5c22d-b872-4857-b36c-5441ed9dfc9a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 15:16:06 crc kubenswrapper[4806]: E1125 15:16:06.862228 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="f1b5c22d-b872-4857-b36c-5441ed9dfc9a" Nov 25 15:16:07 crc kubenswrapper[4806]: I1125 15:16:07.095385 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7lfx4" event={"ID":"a2e7e600-c1a4-4bda-910b-c11fe9411cc9","Type":"ContainerDied","Data":"83130513a7ececfea63da7746ce67fe88ee9c313b8642698d7ff2e80a6e98ac4"} Nov 25 15:16:07 crc kubenswrapper[4806]: I1125 15:16:07.095675 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83130513a7ececfea63da7746ce67fe88ee9c313b8642698d7ff2e80a6e98ac4" Nov 25 15:16:07 crc kubenswrapper[4806]: I1125 15:16:07.095500 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-7lfx4" Nov 25 15:16:07 crc kubenswrapper[4806]: I1125 15:16:07.098026 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-kn4bd" event={"ID":"00377b85-158d-4a45-8a3c-a65220b87590","Type":"ContainerStarted","Data":"cd26640e2709edf969f55a21b9b6794245b218a6809128dbfa435acb81419963"} Nov 25 15:16:07 crc kubenswrapper[4806]: I1125 15:16:07.098145 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f1b5c22d-b872-4857-b36c-5441ed9dfc9a" containerName="ceilometer-notification-agent" containerID="cri-o://6bc22bdc8714fe00d1e4b0adedfff908e33bdf440de871cfe7e9e5d59d0fbf12" gracePeriod=30 Nov 25 15:16:07 crc kubenswrapper[4806]: I1125 15:16:07.098168 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f1b5c22d-b872-4857-b36c-5441ed9dfc9a" containerName="sg-core" containerID="cri-o://c926f6fe5e0e6cc9b7baef017a97ed469036f928d0dd588ee8fd9c61cc2e06b3" gracePeriod=30 Nov 25 15:16:07 crc kubenswrapper[4806]: I1125 15:16:07.279224 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5b5fbf57f8-jxhqp"] Nov 25 15:16:07 crc kubenswrapper[4806]: E1125 15:16:07.464549 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified" Nov 25 15:16:07 crc kubenswrapper[4806]: E1125 15:16:07.466241 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-keystone-listener-log,Image:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,Command:[/usr/bin/dumb-init],Args:[--single-child -- /usr/bin/tail -n+1 -F /var/log/barbican/barbican-keystone-listener.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5f8hd6h688h687h596h65ch556h5cbh699h7chc9h55h68h544hdfhbfh654hbbhcbh54ch586h74h64fh584h5b8h65fh7h68fh649hddh668h65cq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/barbican,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zln5h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-keystone-listener-fc7bb5d48-xzkml_openstack(322cf975-d195-44f0-b652-909080e6c2f2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 15:16:07 crc kubenswrapper[4806]: E1125 
15:16:07.507284 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"barbican-keystone-listener-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"barbican-keystone-listener\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified\\\"\"]" pod="openstack/barbican-keystone-listener-fc7bb5d48-xzkml" podUID="322cf975-d195-44f0-b652-909080e6c2f2" Nov 25 15:16:07 crc kubenswrapper[4806]: W1125 15:16:07.530164 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcfd9535d_9d9c_4c54_b4eb_ba393eaf2d81.slice/crio-b6798e69dd97b18548acf3324a768a48c0a02b257c106d4c9da10756dfb23d01 WatchSource:0}: Error finding container b6798e69dd97b18548acf3324a768a48c0a02b257c106d4c9da10756dfb23d01: Status 404 returned error can't find the container with id b6798e69dd97b18548acf3324a768a48c0a02b257c106d4c9da10756dfb23d01 Nov 25 15:16:07 crc kubenswrapper[4806]: I1125 15:16:07.673535 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 15:16:07 crc kubenswrapper[4806]: E1125 15:16:07.674004 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2e7e600-c1a4-4bda-910b-c11fe9411cc9" containerName="cinder-db-sync" Nov 25 15:16:07 crc kubenswrapper[4806]: I1125 15:16:07.674024 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2e7e600-c1a4-4bda-910b-c11fe9411cc9" containerName="cinder-db-sync" Nov 25 15:16:07 crc kubenswrapper[4806]: I1125 15:16:07.674249 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2e7e600-c1a4-4bda-910b-c11fe9411cc9" containerName="cinder-db-sync" Nov 25 15:16:07 crc kubenswrapper[4806]: I1125 15:16:07.675305 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 15:16:07 crc kubenswrapper[4806]: I1125 15:16:07.680098 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 25 15:16:07 crc kubenswrapper[4806]: I1125 15:16:07.680449 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 25 15:16:07 crc kubenswrapper[4806]: I1125 15:16:07.680628 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-bqsxx" Nov 25 15:16:07 crc kubenswrapper[4806]: I1125 15:16:07.680813 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 25 15:16:07 crc kubenswrapper[4806]: I1125 15:16:07.695509 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 15:16:07 crc kubenswrapper[4806]: I1125 15:16:07.783372 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-kn4bd"] Nov 25 15:16:07 crc kubenswrapper[4806]: I1125 15:16:07.832748 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-795f4db4bc-9vs9k"] Nov 25 15:16:07 crc kubenswrapper[4806]: I1125 15:16:07.834978 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-795f4db4bc-9vs9k" Nov 25 15:16:07 crc kubenswrapper[4806]: I1125 15:16:07.859007 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-795f4db4bc-9vs9k"] Nov 25 15:16:07 crc kubenswrapper[4806]: I1125 15:16:07.873571 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1a35d44-1052-4c49-8bc7-c0cb3b038efd-config-data\") pod \"cinder-scheduler-0\" (UID: \"f1a35d44-1052-4c49-8bc7-c0cb3b038efd\") " pod="openstack/cinder-scheduler-0" Nov 25 15:16:07 crc kubenswrapper[4806]: I1125 15:16:07.873858 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1a35d44-1052-4c49-8bc7-c0cb3b038efd-scripts\") pod \"cinder-scheduler-0\" (UID: \"f1a35d44-1052-4c49-8bc7-c0cb3b038efd\") " pod="openstack/cinder-scheduler-0" Nov 25 15:16:07 crc kubenswrapper[4806]: I1125 15:16:07.874021 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f1a35d44-1052-4c49-8bc7-c0cb3b038efd-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f1a35d44-1052-4c49-8bc7-c0cb3b038efd\") " pod="openstack/cinder-scheduler-0" Nov 25 15:16:07 crc kubenswrapper[4806]: I1125 15:16:07.874178 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1a35d44-1052-4c49-8bc7-c0cb3b038efd-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f1a35d44-1052-4c49-8bc7-c0cb3b038efd\") " pod="openstack/cinder-scheduler-0" Nov 25 15:16:07 crc kubenswrapper[4806]: I1125 15:16:07.874699 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f1a35d44-1052-4c49-8bc7-c0cb3b038efd-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f1a35d44-1052-4c49-8bc7-c0cb3b038efd\") " pod="openstack/cinder-scheduler-0" Nov 25 15:16:07 crc kubenswrapper[4806]: I1125 15:16:07.874890 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cgdz\" (UniqueName: \"kubernetes.io/projected/f1a35d44-1052-4c49-8bc7-c0cb3b038efd-kube-api-access-8cgdz\") pod \"cinder-scheduler-0\" (UID: \"f1a35d44-1052-4c49-8bc7-c0cb3b038efd\") " pod="openstack/cinder-scheduler-0" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.006076 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsdw6\" (UniqueName: \"kubernetes.io/projected/28955e10-67f1-4268-b7e2-e7851398b376-kube-api-access-jsdw6\") pod \"dnsmasq-dns-795f4db4bc-9vs9k\" (UID: \"28955e10-67f1-4268-b7e2-e7851398b376\") " pod="openstack/dnsmasq-dns-795f4db4bc-9vs9k" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.006788 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f1a35d44-1052-4c49-8bc7-c0cb3b038efd-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f1a35d44-1052-4c49-8bc7-c0cb3b038efd\") " pod="openstack/cinder-scheduler-0" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.006830 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/28955e10-67f1-4268-b7e2-e7851398b376-dns-svc\") pod \"dnsmasq-dns-795f4db4bc-9vs9k\" (UID: \"28955e10-67f1-4268-b7e2-e7851398b376\") " pod="openstack/dnsmasq-dns-795f4db4bc-9vs9k" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.006852 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28955e10-67f1-4268-b7e2-e7851398b376-ovsdbserver-sb\") pod \"dnsmasq-dns-795f4db4bc-9vs9k\" (UID: \"28955e10-67f1-4268-b7e2-e7851398b376\") " pod="openstack/dnsmasq-dns-795f4db4bc-9vs9k" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.006903 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1a35d44-1052-4c49-8bc7-c0cb3b038efd-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f1a35d44-1052-4c49-8bc7-c0cb3b038efd\") " pod="openstack/cinder-scheduler-0" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.008918 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f1a35d44-1052-4c49-8bc7-c0cb3b038efd-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f1a35d44-1052-4c49-8bc7-c0cb3b038efd\") " pod="openstack/cinder-scheduler-0" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.012547 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cgdz\" (UniqueName: \"kubernetes.io/projected/f1a35d44-1052-4c49-8bc7-c0cb3b038efd-kube-api-access-8cgdz\") pod \"cinder-scheduler-0\" (UID: \"f1a35d44-1052-4c49-8bc7-c0cb3b038efd\") " pod="openstack/cinder-scheduler-0" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.012629 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28955e10-67f1-4268-b7e2-e7851398b376-ovsdbserver-nb\") pod \"dnsmasq-dns-795f4db4bc-9vs9k\" (UID: \"28955e10-67f1-4268-b7e2-e7851398b376\") " pod="openstack/dnsmasq-dns-795f4db4bc-9vs9k" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.012698 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28955e10-67f1-4268-b7e2-e7851398b376-config\") pod \"dnsmasq-dns-795f4db4bc-9vs9k\" (UID: \"28955e10-67f1-4268-b7e2-e7851398b376\") " pod="openstack/dnsmasq-dns-795f4db4bc-9vs9k" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.012838 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1a35d44-1052-4c49-8bc7-c0cb3b038efd-config-data\") pod \"cinder-scheduler-0\" (UID: \"f1a35d44-1052-4c49-8bc7-c0cb3b038efd\") " pod="openstack/cinder-scheduler-0" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.012880 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/28955e10-67f1-4268-b7e2-e7851398b376-dns-swift-storage-0\") pod \"dnsmasq-dns-795f4db4bc-9vs9k\" (UID: \"28955e10-67f1-4268-b7e2-e7851398b376\") " pod="openstack/dnsmasq-dns-795f4db4bc-9vs9k" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.012928 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f1a35d44-1052-4c49-8bc7-c0cb3b038efd-scripts\") pod \"cinder-scheduler-0\" (UID: \"f1a35d44-1052-4c49-8bc7-c0cb3b038efd\") " pod="openstack/cinder-scheduler-0" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.022753 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.022815 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f1a35d44-1052-4c49-8bc7-c0cb3b038efd-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f1a35d44-1052-4c49-8bc7-c0cb3b038efd\") " pod="openstack/cinder-scheduler-0" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.027931 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.030366 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.038747 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1a35d44-1052-4c49-8bc7-c0cb3b038efd-scripts\") pod \"cinder-scheduler-0\" (UID: \"f1a35d44-1052-4c49-8bc7-c0cb3b038efd\") " pod="openstack/cinder-scheduler-0" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.051541 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cgdz\" (UniqueName: \"kubernetes.io/projected/f1a35d44-1052-4c49-8bc7-c0cb3b038efd-kube-api-access-8cgdz\") pod \"cinder-scheduler-0\" (UID: \"f1a35d44-1052-4c49-8bc7-c0cb3b038efd\") " pod="openstack/cinder-scheduler-0" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.052428 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1a35d44-1052-4c49-8bc7-c0cb3b038efd-config-data\") pod \"cinder-scheduler-0\" (UID: \"f1a35d44-1052-4c49-8bc7-c0cb3b038efd\") " pod="openstack/cinder-scheduler-0" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.052878 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1a35d44-1052-4c49-8bc7-c0cb3b038efd-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f1a35d44-1052-4c49-8bc7-c0cb3b038efd\") " pod="openstack/cinder-scheduler-0" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.055901 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f1a35d44-1052-4c49-8bc7-c0cb3b038efd-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f1a35d44-1052-4c49-8bc7-c0cb3b038efd\") " pod="openstack/cinder-scheduler-0" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.137344 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90d43da5-3940-4611-a464-7347afad3a44-scripts\") pod \"cinder-api-0\" (UID: \"90d43da5-3940-4611-a464-7347afad3a44\") " pod="openstack/cinder-api-0" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.137394 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90d43da5-3940-4611-a464-7347afad3a44-logs\") pod \"cinder-api-0\" (UID: \"90d43da5-3940-4611-a464-7347afad3a44\") " pod="openstack/cinder-api-0" Nov 25 
15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.137457 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28955e10-67f1-4268-b7e2-e7851398b376-ovsdbserver-nb\") pod \"dnsmasq-dns-795f4db4bc-9vs9k\" (UID: \"28955e10-67f1-4268-b7e2-e7851398b376\") " pod="openstack/dnsmasq-dns-795f4db4bc-9vs9k" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.137507 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28955e10-67f1-4268-b7e2-e7851398b376-config\") pod \"dnsmasq-dns-795f4db4bc-9vs9k\" (UID: \"28955e10-67f1-4268-b7e2-e7851398b376\") " pod="openstack/dnsmasq-dns-795f4db4bc-9vs9k" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.137527 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/90d43da5-3940-4611-a464-7347afad3a44-config-data-custom\") pod \"cinder-api-0\" (UID: \"90d43da5-3940-4611-a464-7347afad3a44\") " pod="openstack/cinder-api-0" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.137586 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/28955e10-67f1-4268-b7e2-e7851398b376-dns-swift-storage-0\") pod \"dnsmasq-dns-795f4db4bc-9vs9k\" (UID: \"28955e10-67f1-4268-b7e2-e7851398b376\") " pod="openstack/dnsmasq-dns-795f4db4bc-9vs9k" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.137616 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkvln\" (UniqueName: \"kubernetes.io/projected/90d43da5-3940-4611-a464-7347afad3a44-kube-api-access-jkvln\") pod \"cinder-api-0\" (UID: \"90d43da5-3940-4611-a464-7347afad3a44\") " pod="openstack/cinder-api-0" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.137646 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsdw6\" (UniqueName: \"kubernetes.io/projected/28955e10-67f1-4268-b7e2-e7851398b376-kube-api-access-jsdw6\") pod \"dnsmasq-dns-795f4db4bc-9vs9k\" (UID: \"28955e10-67f1-4268-b7e2-e7851398b376\") " pod="openstack/dnsmasq-dns-795f4db4bc-9vs9k" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.137669 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90d43da5-3940-4611-a464-7347afad3a44-config-data\") pod \"cinder-api-0\" (UID: \"90d43da5-3940-4611-a464-7347afad3a44\") " pod="openstack/cinder-api-0" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.137694 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28955e10-67f1-4268-b7e2-e7851398b376-dns-svc\") pod \"dnsmasq-dns-795f4db4bc-9vs9k\" (UID: \"28955e10-67f1-4268-b7e2-e7851398b376\") " pod="openstack/dnsmasq-dns-795f4db4bc-9vs9k" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.137713 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90d43da5-3940-4611-a464-7347afad3a44-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"90d43da5-3940-4611-a464-7347afad3a44\") " pod="openstack/cinder-api-0" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.137733 4806 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28955e10-67f1-4268-b7e2-e7851398b376-ovsdbserver-sb\") pod \"dnsmasq-dns-795f4db4bc-9vs9k\" (UID: \"28955e10-67f1-4268-b7e2-e7851398b376\") " pod="openstack/dnsmasq-dns-795f4db4bc-9vs9k" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.137766 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/90d43da5-3940-4611-a464-7347afad3a44-etc-machine-id\") pod \"cinder-api-0\" (UID: \"90d43da5-3940-4611-a464-7347afad3a44\") " pod="openstack/cinder-api-0" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.138728 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28955e10-67f1-4268-b7e2-e7851398b376-ovsdbserver-nb\") pod \"dnsmasq-dns-795f4db4bc-9vs9k\" (UID: \"28955e10-67f1-4268-b7e2-e7851398b376\") " pod="openstack/dnsmasq-dns-795f4db4bc-9vs9k" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.139258 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28955e10-67f1-4268-b7e2-e7851398b376-config\") pod \"dnsmasq-dns-795f4db4bc-9vs9k\" (UID: \"28955e10-67f1-4268-b7e2-e7851398b376\") " pod="openstack/dnsmasq-dns-795f4db4bc-9vs9k" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.139842 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/28955e10-67f1-4268-b7e2-e7851398b376-dns-swift-storage-0\") pod \"dnsmasq-dns-795f4db4bc-9vs9k\" (UID: \"28955e10-67f1-4268-b7e2-e7851398b376\") " pod="openstack/dnsmasq-dns-795f4db4bc-9vs9k" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.143236 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28955e10-67f1-4268-b7e2-e7851398b376-ovsdbserver-sb\") pod \"dnsmasq-dns-795f4db4bc-9vs9k\" (UID: \"28955e10-67f1-4268-b7e2-e7851398b376\") " pod="openstack/dnsmasq-dns-795f4db4bc-9vs9k" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.143684 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28955e10-67f1-4268-b7e2-e7851398b376-dns-svc\") pod \"dnsmasq-dns-795f4db4bc-9vs9k\" (UID: \"28955e10-67f1-4268-b7e2-e7851398b376\") " pod="openstack/dnsmasq-dns-795f4db4bc-9vs9k" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.157577 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.157610 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2a56466e-77fd-43df-b5a6-234d90b66334","Type":"ContainerStarted","Data":"8cf283dc14763b552a799ea513e1f4146ba5c46d2643284e97c7bca12f49f737"} Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.158429 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-bc4cd6f78-4rzjr" event={"ID":"f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9","Type":"ContainerStarted","Data":"630a12fc3addd6852a2a6a136c9247c411b8f766277d959736458ba87a3d8f2d"} Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.158484 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-bc4cd6f78-4rzjr" Nov 25 15:16:08 crc kubenswrapper[4806]: 
I1125 15:16:08.159222 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-bc4cd6f78-4rzjr" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.182622 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsdw6\" (UniqueName: \"kubernetes.io/projected/28955e10-67f1-4268-b7e2-e7851398b376-kube-api-access-jsdw6\") pod \"dnsmasq-dns-795f4db4bc-9vs9k\" (UID: \"28955e10-67f1-4268-b7e2-e7851398b376\") " pod="openstack/dnsmasq-dns-795f4db4bc-9vs9k" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.185343 4806 generic.go:334] "Generic (PLEG): container finished" podID="f1b5c22d-b872-4857-b36c-5441ed9dfc9a" containerID="c926f6fe5e0e6cc9b7baef017a97ed469036f928d0dd588ee8fd9c61cc2e06b3" exitCode=2 Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.185463 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f1b5c22d-b872-4857-b36c-5441ed9dfc9a","Type":"ContainerDied","Data":"c926f6fe5e0e6cc9b7baef017a97ed469036f928d0dd588ee8fd9c61cc2e06b3"} Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.210950 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b5fbf57f8-jxhqp" event={"ID":"cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81","Type":"ContainerStarted","Data":"b6798e69dd97b18548acf3324a768a48c0a02b257c106d4c9da10756dfb23d01"} Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.217711 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"359539be-7a7d-48d3-8738-83765f897fa4","Type":"ContainerStarted","Data":"4d36056d5a652030f0de6da870de00f5050e9b3e3e536651a9e06fe84ed3ce6f"} Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.217938 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-586bdc5f9-kn4bd" Nov 25 15:16:08 crc kubenswrapper[4806]: E1125 15:16:08.224592 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"barbican-keystone-listener-log\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified\\\"\", failed to \"StartContainer\" for \"barbican-keystone-listener\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified\\\"\"]" pod="openstack/barbican-keystone-listener-fc7bb5d48-xzkml" podUID="322cf975-d195-44f0-b652-909080e6c2f2" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.225440 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=22.225412322 podStartE2EDuration="22.225412322s" podCreationTimestamp="2025-11-25 15:15:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:16:08.204849227 +0000 UTC m=+1400.856991638" watchObservedRunningTime="2025-11-25 15:16:08.225412322 +0000 UTC m=+1400.877554733" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.235218 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-bc4cd6f78-4rzjr" podStartSLOduration=20.235201141 podStartE2EDuration="20.235201141s" podCreationTimestamp="2025-11-25 15:15:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-25 15:16:08.22498155 +0000 UTC m=+1400.877123961" watchObservedRunningTime="2025-11-25 15:16:08.235201141 +0000 UTC m=+1400.887343552" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.240644 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/90d43da5-3940-4611-a464-7347afad3a44-etc-machine-id\") pod \"cinder-api-0\" (UID: \"90d43da5-3940-4611-a464-7347afad3a44\") " pod="openstack/cinder-api-0" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.240839 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90d43da5-3940-4611-a464-7347afad3a44-scripts\") pod \"cinder-api-0\" (UID: \"90d43da5-3940-4611-a464-7347afad3a44\") " pod="openstack/cinder-api-0" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.240861 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90d43da5-3940-4611-a464-7347afad3a44-logs\") pod \"cinder-api-0\" (UID: \"90d43da5-3940-4611-a464-7347afad3a44\") " pod="openstack/cinder-api-0" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.240897 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/90d43da5-3940-4611-a464-7347afad3a44-config-data-custom\") pod \"cinder-api-0\" (UID: \"90d43da5-3940-4611-a464-7347afad3a44\") " pod="openstack/cinder-api-0" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.241029 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkvln\" (UniqueName: \"kubernetes.io/projected/90d43da5-3940-4611-a464-7347afad3a44-kube-api-access-jkvln\") pod \"cinder-api-0\" (UID: \"90d43da5-3940-4611-a464-7347afad3a44\") " pod="openstack/cinder-api-0" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.241678 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/90d43da5-3940-4611-a464-7347afad3a44-etc-machine-id\") pod \"cinder-api-0\" (UID: \"90d43da5-3940-4611-a464-7347afad3a44\") " pod="openstack/cinder-api-0" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.243071 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90d43da5-3940-4611-a464-7347afad3a44-logs\") pod \"cinder-api-0\" (UID: \"90d43da5-3940-4611-a464-7347afad3a44\") " pod="openstack/cinder-api-0" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.243350 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90d43da5-3940-4611-a464-7347afad3a44-config-data\") pod \"cinder-api-0\" (UID: \"90d43da5-3940-4611-a464-7347afad3a44\") " pod="openstack/cinder-api-0" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.243658 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90d43da5-3940-4611-a464-7347afad3a44-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"90d43da5-3940-4611-a464-7347afad3a44\") " pod="openstack/cinder-api-0" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.251701 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90d43da5-3940-4611-a464-7347afad3a44-scripts\") pod 
\"cinder-api-0\" (UID: \"90d43da5-3940-4611-a464-7347afad3a44\") " pod="openstack/cinder-api-0" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.251800 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/90d43da5-3940-4611-a464-7347afad3a44-config-data-custom\") pod \"cinder-api-0\" (UID: \"90d43da5-3940-4611-a464-7347afad3a44\") " pod="openstack/cinder-api-0" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.253134 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-586bdc5f9-kn4bd" podStartSLOduration=20.253117451 podStartE2EDuration="20.253117451s" podCreationTimestamp="2025-11-25 15:15:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:16:08.252079912 +0000 UTC m=+1400.904222313" watchObservedRunningTime="2025-11-25 15:16:08.253117451 +0000 UTC m=+1400.905259862" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.253342 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90d43da5-3940-4611-a464-7347afad3a44-config-data\") pod \"cinder-api-0\" (UID: \"90d43da5-3940-4611-a464-7347afad3a44\") " pod="openstack/cinder-api-0" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.253799 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90d43da5-3940-4611-a464-7347afad3a44-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"90d43da5-3940-4611-a464-7347afad3a44\") " pod="openstack/cinder-api-0" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.264058 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkvln\" (UniqueName: \"kubernetes.io/projected/90d43da5-3940-4611-a464-7347afad3a44-kube-api-access-jkvln\") pod \"cinder-api-0\" (UID: \"90d43da5-3940-4611-a464-7347afad3a44\") " pod="openstack/cinder-api-0" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.303207 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.324290 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=22.324269918 podStartE2EDuration="22.324269918s" podCreationTimestamp="2025-11-25 15:15:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:16:08.294653225 +0000 UTC m=+1400.946795636" watchObservedRunningTime="2025-11-25 15:16:08.324269918 +0000 UTC m=+1400.976412329" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.468969 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-795f4db4bc-9vs9k" Nov 25 15:16:08 crc kubenswrapper[4806]: I1125 15:16:08.471230 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 25 15:16:08 crc kubenswrapper[4806]: E1125 15:16:08.900126 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified" Nov 25 15:16:08 crc kubenswrapper[4806]: E1125 15:16:08.900246 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-worker-log,Image:quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified,Command:[/usr/bin/dumb-init],Args:[--single-child -- /usr/bin/tail -n+1 -F /var/log/barbican/barbican-worker.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n589h66h97h64bh549h647h9ch64fh5d9h67bh555h5f6h585h577h556h586hdbh5f6hb8hc5h646h5fh55dh9bh565h55h65bh9dh89h68fh5d7h5dq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/barbican,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7274r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-worker-66468c84c9-dpswk_openstack(9cc24510-0ee6-451a-ae1e-6c057d860972): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 15:16:08 crc kubenswrapper[4806]: E1125 15:16:08.908779 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"barbican-worker-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"barbican-worker\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified\\\"\"]" pod="openstack/barbican-worker-66468c84c9-dpswk" podUID="9cc24510-0ee6-451a-ae1e-6c057d860972" Nov 25 15:16:09 crc kubenswrapper[4806]: I1125 15:16:09.262444 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-586bdc5f9-kn4bd" podUID="00377b85-158d-4a45-8a3c-a65220b87590" containerName="dnsmasq-dns" containerID="cri-o://cd26640e2709edf969f55a21b9b6794245b218a6809128dbfa435acb81419963" gracePeriod=10 Nov 25 15:16:09 crc kubenswrapper[4806]: E1125 15:16:09.266513 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"barbican-worker-log\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified\\\"\", failed to \"StartContainer\" for \"barbican-worker\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified\\\"\"]" pod="openstack/barbican-worker-66468c84c9-dpswk" podUID="9cc24510-0ee6-451a-ae1e-6c057d860972" Nov 25 15:16:09 crc kubenswrapper[4806]: I1125 15:16:09.614393 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 15:16:09 crc kubenswrapper[4806]: I1125 15:16:09.775363 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 25 15:16:09 crc kubenswrapper[4806]: I1125 15:16:09.820080 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-795f4db4bc-9vs9k"] Nov 25 15:16:09 crc kubenswrapper[4806]: I1125 15:16:09.979740 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-kn4bd" Nov 25 15:16:10 crc kubenswrapper[4806]: I1125 15:16:10.101927 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/00377b85-158d-4a45-8a3c-a65220b87590-dns-swift-storage-0\") pod \"00377b85-158d-4a45-8a3c-a65220b87590\" (UID: \"00377b85-158d-4a45-8a3c-a65220b87590\") " Nov 25 15:16:10 crc kubenswrapper[4806]: I1125 15:16:10.102087 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mnwb\" (UniqueName: \"kubernetes.io/projected/00377b85-158d-4a45-8a3c-a65220b87590-kube-api-access-5mnwb\") pod \"00377b85-158d-4a45-8a3c-a65220b87590\" (UID: \"00377b85-158d-4a45-8a3c-a65220b87590\") " Nov 25 15:16:10 crc kubenswrapper[4806]: I1125 15:16:10.102369 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00377b85-158d-4a45-8a3c-a65220b87590-ovsdbserver-nb\") pod \"00377b85-158d-4a45-8a3c-a65220b87590\" (UID: \"00377b85-158d-4a45-8a3c-a65220b87590\") " Nov 25 15:16:10 crc kubenswrapper[4806]: I1125 15:16:10.102786 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00377b85-158d-4a45-8a3c-a65220b87590-ovsdbserver-sb\") pod \"00377b85-158d-4a45-8a3c-a65220b87590\" (UID: \"00377b85-158d-4a45-8a3c-a65220b87590\") " Nov 25 15:16:10 crc kubenswrapper[4806]: I1125 15:16:10.103348 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00377b85-158d-4a45-8a3c-a65220b87590-config\") pod \"00377b85-158d-4a45-8a3c-a65220b87590\" (UID: \"00377b85-158d-4a45-8a3c-a65220b87590\") " Nov 25 15:16:10 crc kubenswrapper[4806]: I1125 15:16:10.103476 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00377b85-158d-4a45-8a3c-a65220b87590-dns-svc\") pod \"00377b85-158d-4a45-8a3c-a65220b87590\" (UID: \"00377b85-158d-4a45-8a3c-a65220b87590\") " Nov 25 15:16:10 crc kubenswrapper[4806]: I1125 15:16:10.108709 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00377b85-158d-4a45-8a3c-a65220b87590-kube-api-access-5mnwb" (OuterVolumeSpecName: "kube-api-access-5mnwb") pod "00377b85-158d-4a45-8a3c-a65220b87590" (UID: "00377b85-158d-4a45-8a3c-a65220b87590"). InnerVolumeSpecName "kube-api-access-5mnwb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:16:10 crc kubenswrapper[4806]: I1125 15:16:10.210479 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mnwb\" (UniqueName: \"kubernetes.io/projected/00377b85-158d-4a45-8a3c-a65220b87590-kube-api-access-5mnwb\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:10 crc kubenswrapper[4806]: I1125 15:16:10.232488 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00377b85-158d-4a45-8a3c-a65220b87590-config" (OuterVolumeSpecName: "config") pod "00377b85-158d-4a45-8a3c-a65220b87590" (UID: "00377b85-158d-4a45-8a3c-a65220b87590"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:16:10 crc kubenswrapper[4806]: I1125 15:16:10.285889 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00377b85-158d-4a45-8a3c-a65220b87590-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "00377b85-158d-4a45-8a3c-a65220b87590" (UID: "00377b85-158d-4a45-8a3c-a65220b87590"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:16:10 crc kubenswrapper[4806]: I1125 15:16:10.318328 4806 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/00377b85-158d-4a45-8a3c-a65220b87590-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:10 crc kubenswrapper[4806]: I1125 15:16:10.318392 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00377b85-158d-4a45-8a3c-a65220b87590-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:10 crc kubenswrapper[4806]: I1125 15:16:10.324210 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 25 15:16:10 crc kubenswrapper[4806]: I1125 15:16:10.340624 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795f4db4bc-9vs9k" event={"ID":"28955e10-67f1-4268-b7e2-e7851398b376","Type":"ContainerStarted","Data":"507fe005c099989eecddda0a42f8e04e12984c267526b9b625b2dde07b33d251"} Nov 25 15:16:10 crc kubenswrapper[4806]: I1125 15:16:10.344968 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00377b85-158d-4a45-8a3c-a65220b87590-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "00377b85-158d-4a45-8a3c-a65220b87590" (UID: "00377b85-158d-4a45-8a3c-a65220b87590"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:16:10 crc kubenswrapper[4806]: I1125 15:16:10.347049 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f1a35d44-1052-4c49-8bc7-c0cb3b038efd","Type":"ContainerStarted","Data":"88009ee120d1188a07ecef63fe7d727e1f0121abdd214f21b63e20a1d597fa53"} Nov 25 15:16:10 crc kubenswrapper[4806]: I1125 15:16:10.374220 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"90d43da5-3940-4611-a464-7347afad3a44","Type":"ContainerStarted","Data":"5f67dba3330cb32c963e8d25d264467737c137b5662706ea0733cea589092e5e"} Nov 25 15:16:10 crc kubenswrapper[4806]: I1125 15:16:10.384182 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00377b85-158d-4a45-8a3c-a65220b87590-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "00377b85-158d-4a45-8a3c-a65220b87590" (UID: "00377b85-158d-4a45-8a3c-a65220b87590"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:16:10 crc kubenswrapper[4806]: I1125 15:16:10.398708 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b5fbf57f8-jxhqp" event={"ID":"cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81","Type":"ContainerStarted","Data":"fd7fe4b9646a4a535d728e244992f08dc08175c5aeca295ac08ea3aeb2a790c3"} Nov 25 15:16:10 crc kubenswrapper[4806]: I1125 15:16:10.403808 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00377b85-158d-4a45-8a3c-a65220b87590-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "00377b85-158d-4a45-8a3c-a65220b87590" (UID: "00377b85-158d-4a45-8a3c-a65220b87590"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:16:10 crc kubenswrapper[4806]: I1125 15:16:10.420895 4806 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/00377b85-158d-4a45-8a3c-a65220b87590-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:10 crc kubenswrapper[4806]: I1125 15:16:10.420948 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/00377b85-158d-4a45-8a3c-a65220b87590-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:10 crc kubenswrapper[4806]: I1125 15:16:10.420965 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/00377b85-158d-4a45-8a3c-a65220b87590-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:10 crc kubenswrapper[4806]: I1125 15:16:10.441440 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-drlb4" event={"ID":"c2503ad9-21ed-44c9-ae5a-25307c751865","Type":"ContainerStarted","Data":"5398fc780dd3f6e0342d1fa9cf2d3a259707ea0309bf1888b0e68c8e77508657"} Nov 25 15:16:10 crc kubenswrapper[4806]: I1125 15:16:10.452429 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-77qk4" event={"ID":"19d636cf-e82d-48c3-82db-321f0505c5ab","Type":"ContainerStarted","Data":"036d0fe09399596343085adc219f015175ed3e03c14f3699593091f9a38e0f68"} Nov 25 15:16:10 crc kubenswrapper[4806]: I1125 15:16:10.471917 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-db-sync-drlb4" podStartSLOduration=3.653836283 podStartE2EDuration="1m17.471874304s" podCreationTimestamp="2025-11-25 15:14:53 +0000 UTC" firstStartedPulling="2025-11-25 15:14:55.218879208 +0000 UTC m=+1327.871021619" lastFinishedPulling="2025-11-25 15:16:09.036917229 +0000 UTC m=+1401.689059640" observedRunningTime="2025-11-25 15:16:10.4647158 +0000 UTC m=+1403.116858211" watchObservedRunningTime="2025-11-25 15:16:10.471874304 +0000 UTC m=+1403.124016705" Nov 25 15:16:10 crc kubenswrapper[4806]: I1125 15:16:10.498689 4806 generic.go:334] "Generic (PLEG): container finished" podID="00377b85-158d-4a45-8a3c-a65220b87590" containerID="cd26640e2709edf969f55a21b9b6794245b218a6809128dbfa435acb81419963" exitCode=0 Nov 25 15:16:10 crc kubenswrapper[4806]: I1125 15:16:10.498802 4806 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 15:16:10 crc kubenswrapper[4806]: I1125 15:16:10.499657 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-586bdc5f9-kn4bd" Nov 25 15:16:10 crc kubenswrapper[4806]: I1125 15:16:10.501435 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-kn4bd" event={"ID":"00377b85-158d-4a45-8a3c-a65220b87590","Type":"ContainerDied","Data":"cd26640e2709edf969f55a21b9b6794245b218a6809128dbfa435acb81419963"} Nov 25 15:16:10 crc kubenswrapper[4806]: I1125 15:16:10.501504 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586bdc5f9-kn4bd" event={"ID":"00377b85-158d-4a45-8a3c-a65220b87590","Type":"ContainerDied","Data":"5d0a24980fe9fee0fd4a5255119306da8a36c297c5170cc8aaa93738a907b6c9"} Nov 25 15:16:10 crc kubenswrapper[4806]: I1125 15:16:10.501525 4806 scope.go:117] "RemoveContainer" containerID="cd26640e2709edf969f55a21b9b6794245b218a6809128dbfa435acb81419963" Nov 25 15:16:10 crc kubenswrapper[4806]: I1125 15:16:10.571523 4806 scope.go:117] "RemoveContainer" containerID="a9668425e75d3667cdfb237e388c1d254051e3b4cdc3e4b2f126b14da05b8476" Nov 25 15:16:10 crc kubenswrapper[4806]: I1125 15:16:10.641776 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-kn4bd"] Nov 25 15:16:10 crc kubenswrapper[4806]: I1125 15:16:10.651199 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-586bdc5f9-kn4bd"] Nov 25 15:16:10 crc kubenswrapper[4806]: I1125 15:16:10.723496 4806 scope.go:117] "RemoveContainer" containerID="cd26640e2709edf969f55a21b9b6794245b218a6809128dbfa435acb81419963" Nov 25 15:16:10 crc kubenswrapper[4806]: E1125 15:16:10.724238 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd26640e2709edf969f55a21b9b6794245b218a6809128dbfa435acb81419963\": container with ID starting with cd26640e2709edf969f55a21b9b6794245b218a6809128dbfa435acb81419963 not found: ID does not exist" containerID="cd26640e2709edf969f55a21b9b6794245b218a6809128dbfa435acb81419963" Nov 25 15:16:10 crc kubenswrapper[4806]: I1125 15:16:10.724281 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd26640e2709edf969f55a21b9b6794245b218a6809128dbfa435acb81419963"} err="failed to get container status \"cd26640e2709edf969f55a21b9b6794245b218a6809128dbfa435acb81419963\": rpc error: code = NotFound desc = could not find container \"cd26640e2709edf969f55a21b9b6794245b218a6809128dbfa435acb81419963\": container with ID starting with cd26640e2709edf969f55a21b9b6794245b218a6809128dbfa435acb81419963 not found: ID does not exist" Nov 25 15:16:10 crc kubenswrapper[4806]: I1125 15:16:10.724333 4806 scope.go:117] "RemoveContainer" containerID="a9668425e75d3667cdfb237e388c1d254051e3b4cdc3e4b2f126b14da05b8476" Nov 25 15:16:10 crc kubenswrapper[4806]: E1125 15:16:10.724768 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9668425e75d3667cdfb237e388c1d254051e3b4cdc3e4b2f126b14da05b8476\": container with ID starting with a9668425e75d3667cdfb237e388c1d254051e3b4cdc3e4b2f126b14da05b8476 not found: ID does not exist" containerID="a9668425e75d3667cdfb237e388c1d254051e3b4cdc3e4b2f126b14da05b8476" Nov 25 15:16:10 crc kubenswrapper[4806]: I1125 15:16:10.724827 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9668425e75d3667cdfb237e388c1d254051e3b4cdc3e4b2f126b14da05b8476"} err="failed to get container status 
\"a9668425e75d3667cdfb237e388c1d254051e3b4cdc3e4b2f126b14da05b8476\": rpc error: code = NotFound desc = could not find container \"a9668425e75d3667cdfb237e388c1d254051e3b4cdc3e4b2f126b14da05b8476\": container with ID starting with a9668425e75d3667cdfb237e388c1d254051e3b4cdc3e4b2f126b14da05b8476 not found: ID does not exist" Nov 25 15:16:11 crc kubenswrapper[4806]: I1125 15:16:11.388231 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:16:11 crc kubenswrapper[4806]: I1125 15:16:11.555260 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1b5c22d-b872-4857-b36c-5441ed9dfc9a-config-data\") pod \"f1b5c22d-b872-4857-b36c-5441ed9dfc9a\" (UID: \"f1b5c22d-b872-4857-b36c-5441ed9dfc9a\") " Nov 25 15:16:11 crc kubenswrapper[4806]: I1125 15:16:11.555378 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1b5c22d-b872-4857-b36c-5441ed9dfc9a-combined-ca-bundle\") pod \"f1b5c22d-b872-4857-b36c-5441ed9dfc9a\" (UID: \"f1b5c22d-b872-4857-b36c-5441ed9dfc9a\") " Nov 25 15:16:11 crc kubenswrapper[4806]: I1125 15:16:11.555418 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f1b5c22d-b872-4857-b36c-5441ed9dfc9a-sg-core-conf-yaml\") pod \"f1b5c22d-b872-4857-b36c-5441ed9dfc9a\" (UID: \"f1b5c22d-b872-4857-b36c-5441ed9dfc9a\") " Nov 25 15:16:11 crc kubenswrapper[4806]: I1125 15:16:11.555438 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1b5c22d-b872-4857-b36c-5441ed9dfc9a-log-httpd\") pod \"f1b5c22d-b872-4857-b36c-5441ed9dfc9a\" (UID: \"f1b5c22d-b872-4857-b36c-5441ed9dfc9a\") " Nov 25 15:16:11 crc kubenswrapper[4806]: I1125 15:16:11.555551 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1b5c22d-b872-4857-b36c-5441ed9dfc9a-scripts\") pod \"f1b5c22d-b872-4857-b36c-5441ed9dfc9a\" (UID: \"f1b5c22d-b872-4857-b36c-5441ed9dfc9a\") " Nov 25 15:16:11 crc kubenswrapper[4806]: I1125 15:16:11.555575 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwkpf\" (UniqueName: \"kubernetes.io/projected/f1b5c22d-b872-4857-b36c-5441ed9dfc9a-kube-api-access-gwkpf\") pod \"f1b5c22d-b872-4857-b36c-5441ed9dfc9a\" (UID: \"f1b5c22d-b872-4857-b36c-5441ed9dfc9a\") " Nov 25 15:16:11 crc kubenswrapper[4806]: I1125 15:16:11.555658 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1b5c22d-b872-4857-b36c-5441ed9dfc9a-run-httpd\") pod \"f1b5c22d-b872-4857-b36c-5441ed9dfc9a\" (UID: \"f1b5c22d-b872-4857-b36c-5441ed9dfc9a\") " Nov 25 15:16:11 crc kubenswrapper[4806]: I1125 15:16:11.556491 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1b5c22d-b872-4857-b36c-5441ed9dfc9a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f1b5c22d-b872-4857-b36c-5441ed9dfc9a" (UID: "f1b5c22d-b872-4857-b36c-5441ed9dfc9a"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:16:11 crc kubenswrapper[4806]: I1125 15:16:11.556722 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1b5c22d-b872-4857-b36c-5441ed9dfc9a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f1b5c22d-b872-4857-b36c-5441ed9dfc9a" (UID: "f1b5c22d-b872-4857-b36c-5441ed9dfc9a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:16:11 crc kubenswrapper[4806]: I1125 15:16:11.560363 4806 generic.go:334] "Generic (PLEG): container finished" podID="28955e10-67f1-4268-b7e2-e7851398b376" containerID="f5f05655b9c7f0f9914024f50ae178f561dfc97bd107f899e0b6eb76e725f492" exitCode=0 Nov 25 15:16:11 crc kubenswrapper[4806]: I1125 15:16:11.560698 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795f4db4bc-9vs9k" event={"ID":"28955e10-67f1-4268-b7e2-e7851398b376","Type":"ContainerDied","Data":"f5f05655b9c7f0f9914024f50ae178f561dfc97bd107f899e0b6eb76e725f492"} Nov 25 15:16:11 crc kubenswrapper[4806]: I1125 15:16:11.605038 4806 generic.go:334] "Generic (PLEG): container finished" podID="f1b5c22d-b872-4857-b36c-5441ed9dfc9a" containerID="6bc22bdc8714fe00d1e4b0adedfff908e33bdf440de871cfe7e9e5d59d0fbf12" exitCode=0 Nov 25 15:16:11 crc kubenswrapper[4806]: I1125 15:16:11.605132 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f1b5c22d-b872-4857-b36c-5441ed9dfc9a","Type":"ContainerDied","Data":"6bc22bdc8714fe00d1e4b0adedfff908e33bdf440de871cfe7e9e5d59d0fbf12"} Nov 25 15:16:11 crc kubenswrapper[4806]: I1125 15:16:11.605166 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f1b5c22d-b872-4857-b36c-5441ed9dfc9a","Type":"ContainerDied","Data":"45738b562eba55c1fd17715c5bccb9dae6c74b8c79d040ee3674498a4ae18e94"} Nov 25 15:16:11 crc kubenswrapper[4806]: I1125 15:16:11.605185 4806 scope.go:117] "RemoveContainer" containerID="c926f6fe5e0e6cc9b7baef017a97ed469036f928d0dd588ee8fd9c61cc2e06b3" Nov 25 15:16:11 crc kubenswrapper[4806]: I1125 15:16:11.605330 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:16:11 crc kubenswrapper[4806]: I1125 15:16:11.605948 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1b5c22d-b872-4857-b36c-5441ed9dfc9a-scripts" (OuterVolumeSpecName: "scripts") pod "f1b5c22d-b872-4857-b36c-5441ed9dfc9a" (UID: "f1b5c22d-b872-4857-b36c-5441ed9dfc9a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:11 crc kubenswrapper[4806]: I1125 15:16:11.606114 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1b5c22d-b872-4857-b36c-5441ed9dfc9a-kube-api-access-gwkpf" (OuterVolumeSpecName: "kube-api-access-gwkpf") pod "f1b5c22d-b872-4857-b36c-5441ed9dfc9a" (UID: "f1b5c22d-b872-4857-b36c-5441ed9dfc9a"). InnerVolumeSpecName "kube-api-access-gwkpf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:16:11 crc kubenswrapper[4806]: I1125 15:16:11.655883 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b5fbf57f8-jxhqp" event={"ID":"cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81","Type":"ContainerStarted","Data":"7a13107acaefcb28384f3b4fb1bd61da6e7d3a517d23ee2a1856c57ee7e5dc1d"} Nov 25 15:16:11 crc kubenswrapper[4806]: I1125 15:16:11.656398 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5b5fbf57f8-jxhqp" Nov 25 15:16:11 crc kubenswrapper[4806]: I1125 15:16:11.656484 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5b5fbf57f8-jxhqp" Nov 25 15:16:11 crc kubenswrapper[4806]: I1125 15:16:11.658283 4806 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1b5c22d-b872-4857-b36c-5441ed9dfc9a-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:11 crc kubenswrapper[4806]: I1125 15:16:11.658331 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1b5c22d-b872-4857-b36c-5441ed9dfc9a-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:11 crc kubenswrapper[4806]: I1125 15:16:11.658340 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwkpf\" (UniqueName: \"kubernetes.io/projected/f1b5c22d-b872-4857-b36c-5441ed9dfc9a-kube-api-access-gwkpf\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:11 crc kubenswrapper[4806]: I1125 15:16:11.658351 4806 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f1b5c22d-b872-4857-b36c-5441ed9dfc9a-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:11 crc kubenswrapper[4806]: I1125 15:16:11.692722 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1b5c22d-b872-4857-b36c-5441ed9dfc9a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f1b5c22d-b872-4857-b36c-5441ed9dfc9a" (UID: "f1b5c22d-b872-4857-b36c-5441ed9dfc9a"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:11 crc kubenswrapper[4806]: I1125 15:16:11.727311 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5b5fbf57f8-jxhqp" podStartSLOduration=20.727292484 podStartE2EDuration="20.727292484s" podCreationTimestamp="2025-11-25 15:15:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:16:11.714704466 +0000 UTC m=+1404.366846877" watchObservedRunningTime="2025-11-25 15:16:11.727292484 +0000 UTC m=+1404.379434915" Nov 25 15:16:11 crc kubenswrapper[4806]: I1125 15:16:11.730195 4806 generic.go:334] "Generic (PLEG): container finished" podID="19d636cf-e82d-48c3-82db-321f0505c5ab" containerID="036d0fe09399596343085adc219f015175ed3e03c14f3699593091f9a38e0f68" exitCode=0 Nov 25 15:16:11 crc kubenswrapper[4806]: I1125 15:16:11.730282 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-77qk4" event={"ID":"19d636cf-e82d-48c3-82db-321f0505c5ab","Type":"ContainerDied","Data":"036d0fe09399596343085adc219f015175ed3e03c14f3699593091f9a38e0f68"} Nov 25 15:16:11 crc kubenswrapper[4806]: I1125 15:16:11.730413 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1b5c22d-b872-4857-b36c-5441ed9dfc9a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f1b5c22d-b872-4857-b36c-5441ed9dfc9a" (UID: "f1b5c22d-b872-4857-b36c-5441ed9dfc9a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:11 crc kubenswrapper[4806]: I1125 15:16:11.768915 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1b5c22d-b872-4857-b36c-5441ed9dfc9a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:11 crc kubenswrapper[4806]: I1125 15:16:11.768973 4806 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f1b5c22d-b872-4857-b36c-5441ed9dfc9a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:11 crc kubenswrapper[4806]: I1125 15:16:11.776963 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1b5c22d-b872-4857-b36c-5441ed9dfc9a-config-data" (OuterVolumeSpecName: "config-data") pod "f1b5c22d-b872-4857-b36c-5441ed9dfc9a" (UID: "f1b5c22d-b872-4857-b36c-5441ed9dfc9a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:11 crc kubenswrapper[4806]: I1125 15:16:11.871264 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1b5c22d-b872-4857-b36c-5441ed9dfc9a-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:11 crc kubenswrapper[4806]: I1125 15:16:11.887163 4806 scope.go:117] "RemoveContainer" containerID="6bc22bdc8714fe00d1e4b0adedfff908e33bdf440de871cfe7e9e5d59d0fbf12" Nov 25 15:16:11 crc kubenswrapper[4806]: I1125 15:16:11.921426 4806 scope.go:117] "RemoveContainer" containerID="c926f6fe5e0e6cc9b7baef017a97ed469036f928d0dd588ee8fd9c61cc2e06b3" Nov 25 15:16:11 crc kubenswrapper[4806]: E1125 15:16:11.922251 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c926f6fe5e0e6cc9b7baef017a97ed469036f928d0dd588ee8fd9c61cc2e06b3\": container with ID starting with c926f6fe5e0e6cc9b7baef017a97ed469036f928d0dd588ee8fd9c61cc2e06b3 not found: ID does not exist" containerID="c926f6fe5e0e6cc9b7baef017a97ed469036f928d0dd588ee8fd9c61cc2e06b3" Nov 25 15:16:11 crc kubenswrapper[4806]: I1125 15:16:11.922292 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c926f6fe5e0e6cc9b7baef017a97ed469036f928d0dd588ee8fd9c61cc2e06b3"} err="failed to get container status \"c926f6fe5e0e6cc9b7baef017a97ed469036f928d0dd588ee8fd9c61cc2e06b3\": rpc error: code = NotFound desc = could not find container \"c926f6fe5e0e6cc9b7baef017a97ed469036f928d0dd588ee8fd9c61cc2e06b3\": container with ID starting with c926f6fe5e0e6cc9b7baef017a97ed469036f928d0dd588ee8fd9c61cc2e06b3 not found: ID does not exist" Nov 25 15:16:11 crc kubenswrapper[4806]: I1125 15:16:11.922373 4806 scope.go:117] "RemoveContainer" containerID="6bc22bdc8714fe00d1e4b0adedfff908e33bdf440de871cfe7e9e5d59d0fbf12" Nov 25 15:16:11 crc kubenswrapper[4806]: E1125 15:16:11.923120 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bc22bdc8714fe00d1e4b0adedfff908e33bdf440de871cfe7e9e5d59d0fbf12\": container with ID starting with 6bc22bdc8714fe00d1e4b0adedfff908e33bdf440de871cfe7e9e5d59d0fbf12 not found: ID does not exist" containerID="6bc22bdc8714fe00d1e4b0adedfff908e33bdf440de871cfe7e9e5d59d0fbf12" Nov 25 15:16:11 crc kubenswrapper[4806]: I1125 15:16:11.923144 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bc22bdc8714fe00d1e4b0adedfff908e33bdf440de871cfe7e9e5d59d0fbf12"} err="failed to get container status \"6bc22bdc8714fe00d1e4b0adedfff908e33bdf440de871cfe7e9e5d59d0fbf12\": rpc error: code = NotFound desc = could not find container \"6bc22bdc8714fe00d1e4b0adedfff908e33bdf440de871cfe7e9e5d59d0fbf12\": container with ID starting with 6bc22bdc8714fe00d1e4b0adedfff908e33bdf440de871cfe7e9e5d59d0fbf12 not found: ID does not exist" Nov 25 15:16:11 crc kubenswrapper[4806]: I1125 15:16:11.990003 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:16:12 crc kubenswrapper[4806]: I1125 15:16:12.005526 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:16:12 crc kubenswrapper[4806]: I1125 15:16:12.013339 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:16:12 crc kubenswrapper[4806]: E1125 15:16:12.018392 4806 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="00377b85-158d-4a45-8a3c-a65220b87590" containerName="dnsmasq-dns" Nov 25 15:16:12 crc kubenswrapper[4806]: I1125 15:16:12.018425 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="00377b85-158d-4a45-8a3c-a65220b87590" containerName="dnsmasq-dns" Nov 25 15:16:12 crc kubenswrapper[4806]: E1125 15:16:12.018443 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1b5c22d-b872-4857-b36c-5441ed9dfc9a" containerName="sg-core" Nov 25 15:16:12 crc kubenswrapper[4806]: I1125 15:16:12.018449 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1b5c22d-b872-4857-b36c-5441ed9dfc9a" containerName="sg-core" Nov 25 15:16:12 crc kubenswrapper[4806]: E1125 15:16:12.018464 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1b5c22d-b872-4857-b36c-5441ed9dfc9a" containerName="ceilometer-notification-agent" Nov 25 15:16:12 crc kubenswrapper[4806]: I1125 15:16:12.018470 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1b5c22d-b872-4857-b36c-5441ed9dfc9a" containerName="ceilometer-notification-agent" Nov 25 15:16:12 crc kubenswrapper[4806]: E1125 15:16:12.018505 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00377b85-158d-4a45-8a3c-a65220b87590" containerName="init" Nov 25 15:16:12 crc kubenswrapper[4806]: I1125 15:16:12.018512 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="00377b85-158d-4a45-8a3c-a65220b87590" containerName="init" Nov 25 15:16:12 crc kubenswrapper[4806]: I1125 15:16:12.018722 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1b5c22d-b872-4857-b36c-5441ed9dfc9a" containerName="ceilometer-notification-agent" Nov 25 15:16:12 crc kubenswrapper[4806]: I1125 15:16:12.018733 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="00377b85-158d-4a45-8a3c-a65220b87590" containerName="dnsmasq-dns" Nov 25 15:16:12 crc kubenswrapper[4806]: I1125 15:16:12.018751 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1b5c22d-b872-4857-b36c-5441ed9dfc9a" containerName="sg-core" Nov 25 15:16:12 crc kubenswrapper[4806]: I1125 15:16:12.023165 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:16:12 crc kubenswrapper[4806]: I1125 15:16:12.026299 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 15:16:12 crc kubenswrapper[4806]: I1125 15:16:12.026828 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 15:16:12 crc kubenswrapper[4806]: I1125 15:16:12.030618 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:16:12 crc kubenswrapper[4806]: I1125 15:16:12.103301 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00377b85-158d-4a45-8a3c-a65220b87590" path="/var/lib/kubelet/pods/00377b85-158d-4a45-8a3c-a65220b87590/volumes" Nov 25 15:16:12 crc kubenswrapper[4806]: I1125 15:16:12.104181 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1b5c22d-b872-4857-b36c-5441ed9dfc9a" path="/var/lib/kubelet/pods/f1b5c22d-b872-4857-b36c-5441ed9dfc9a/volumes" Nov 25 15:16:12 crc kubenswrapper[4806]: I1125 15:16:12.178512 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75f75cce-0bb5-4617-8f28-29a95214ce33-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"75f75cce-0bb5-4617-8f28-29a95214ce33\") " pod="openstack/ceilometer-0" Nov 25 15:16:12 crc kubenswrapper[4806]: I1125 15:16:12.178606 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75f75cce-0bb5-4617-8f28-29a95214ce33-run-httpd\") pod \"ceilometer-0\" (UID: \"75f75cce-0bb5-4617-8f28-29a95214ce33\") " pod="openstack/ceilometer-0" Nov 25 15:16:12 crc kubenswrapper[4806]: I1125 15:16:12.178881 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfpvh\" (UniqueName: \"kubernetes.io/projected/75f75cce-0bb5-4617-8f28-29a95214ce33-kube-api-access-lfpvh\") pod \"ceilometer-0\" (UID: \"75f75cce-0bb5-4617-8f28-29a95214ce33\") " pod="openstack/ceilometer-0" Nov 25 15:16:12 crc kubenswrapper[4806]: I1125 15:16:12.178925 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75f75cce-0bb5-4617-8f28-29a95214ce33-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"75f75cce-0bb5-4617-8f28-29a95214ce33\") " pod="openstack/ceilometer-0" Nov 25 15:16:12 crc kubenswrapper[4806]: I1125 15:16:12.180521 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75f75cce-0bb5-4617-8f28-29a95214ce33-scripts\") pod \"ceilometer-0\" (UID: \"75f75cce-0bb5-4617-8f28-29a95214ce33\") " pod="openstack/ceilometer-0" Nov 25 15:16:12 crc kubenswrapper[4806]: I1125 15:16:12.180629 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75f75cce-0bb5-4617-8f28-29a95214ce33-log-httpd\") pod \"ceilometer-0\" (UID: \"75f75cce-0bb5-4617-8f28-29a95214ce33\") " pod="openstack/ceilometer-0" Nov 25 15:16:12 crc kubenswrapper[4806]: I1125 15:16:12.185115 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75f75cce-0bb5-4617-8f28-29a95214ce33-config-data\") pod 
\"ceilometer-0\" (UID: \"75f75cce-0bb5-4617-8f28-29a95214ce33\") " pod="openstack/ceilometer-0" Nov 25 15:16:12 crc kubenswrapper[4806]: I1125 15:16:12.287705 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfpvh\" (UniqueName: \"kubernetes.io/projected/75f75cce-0bb5-4617-8f28-29a95214ce33-kube-api-access-lfpvh\") pod \"ceilometer-0\" (UID: \"75f75cce-0bb5-4617-8f28-29a95214ce33\") " pod="openstack/ceilometer-0" Nov 25 15:16:12 crc kubenswrapper[4806]: I1125 15:16:12.287778 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75f75cce-0bb5-4617-8f28-29a95214ce33-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"75f75cce-0bb5-4617-8f28-29a95214ce33\") " pod="openstack/ceilometer-0" Nov 25 15:16:12 crc kubenswrapper[4806]: I1125 15:16:12.287931 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75f75cce-0bb5-4617-8f28-29a95214ce33-scripts\") pod \"ceilometer-0\" (UID: \"75f75cce-0bb5-4617-8f28-29a95214ce33\") " pod="openstack/ceilometer-0" Nov 25 15:16:12 crc kubenswrapper[4806]: I1125 15:16:12.287983 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75f75cce-0bb5-4617-8f28-29a95214ce33-log-httpd\") pod \"ceilometer-0\" (UID: \"75f75cce-0bb5-4617-8f28-29a95214ce33\") " pod="openstack/ceilometer-0" Nov 25 15:16:12 crc kubenswrapper[4806]: I1125 15:16:12.288070 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75f75cce-0bb5-4617-8f28-29a95214ce33-config-data\") pod \"ceilometer-0\" (UID: \"75f75cce-0bb5-4617-8f28-29a95214ce33\") " pod="openstack/ceilometer-0" Nov 25 15:16:12 crc kubenswrapper[4806]: I1125 15:16:12.288123 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75f75cce-0bb5-4617-8f28-29a95214ce33-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"75f75cce-0bb5-4617-8f28-29a95214ce33\") " pod="openstack/ceilometer-0" Nov 25 15:16:12 crc kubenswrapper[4806]: I1125 15:16:12.288151 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75f75cce-0bb5-4617-8f28-29a95214ce33-run-httpd\") pod \"ceilometer-0\" (UID: \"75f75cce-0bb5-4617-8f28-29a95214ce33\") " pod="openstack/ceilometer-0" Nov 25 15:16:12 crc kubenswrapper[4806]: I1125 15:16:12.288786 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75f75cce-0bb5-4617-8f28-29a95214ce33-log-httpd\") pod \"ceilometer-0\" (UID: \"75f75cce-0bb5-4617-8f28-29a95214ce33\") " pod="openstack/ceilometer-0" Nov 25 15:16:12 crc kubenswrapper[4806]: I1125 15:16:12.288876 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75f75cce-0bb5-4617-8f28-29a95214ce33-run-httpd\") pod \"ceilometer-0\" (UID: \"75f75cce-0bb5-4617-8f28-29a95214ce33\") " pod="openstack/ceilometer-0" Nov 25 15:16:12 crc kubenswrapper[4806]: I1125 15:16:12.313675 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfpvh\" (UniqueName: \"kubernetes.io/projected/75f75cce-0bb5-4617-8f28-29a95214ce33-kube-api-access-lfpvh\") pod \"ceilometer-0\" (UID: 
\"75f75cce-0bb5-4617-8f28-29a95214ce33\") " pod="openstack/ceilometer-0" Nov 25 15:16:12 crc kubenswrapper[4806]: I1125 15:16:12.314997 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75f75cce-0bb5-4617-8f28-29a95214ce33-scripts\") pod \"ceilometer-0\" (UID: \"75f75cce-0bb5-4617-8f28-29a95214ce33\") " pod="openstack/ceilometer-0" Nov 25 15:16:12 crc kubenswrapper[4806]: I1125 15:16:12.316466 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75f75cce-0bb5-4617-8f28-29a95214ce33-config-data\") pod \"ceilometer-0\" (UID: \"75f75cce-0bb5-4617-8f28-29a95214ce33\") " pod="openstack/ceilometer-0" Nov 25 15:16:12 crc kubenswrapper[4806]: I1125 15:16:12.316913 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75f75cce-0bb5-4617-8f28-29a95214ce33-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"75f75cce-0bb5-4617-8f28-29a95214ce33\") " pod="openstack/ceilometer-0" Nov 25 15:16:12 crc kubenswrapper[4806]: I1125 15:16:12.317949 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75f75cce-0bb5-4617-8f28-29a95214ce33-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"75f75cce-0bb5-4617-8f28-29a95214ce33\") " pod="openstack/ceilometer-0" Nov 25 15:16:12 crc kubenswrapper[4806]: I1125 15:16:12.354769 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:16:12 crc kubenswrapper[4806]: I1125 15:16:12.751082 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795f4db4bc-9vs9k" event={"ID":"28955e10-67f1-4268-b7e2-e7851398b376","Type":"ContainerStarted","Data":"994a7cfbb166f45f1789535926040258d123ff9f94f3cf83dd997b625595cf04"} Nov 25 15:16:12 crc kubenswrapper[4806]: I1125 15:16:12.751453 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-795f4db4bc-9vs9k" Nov 25 15:16:12 crc kubenswrapper[4806]: I1125 15:16:12.755351 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"90d43da5-3940-4611-a464-7347afad3a44","Type":"ContainerStarted","Data":"c7440e2a3f5cbc5ee7086d53893d1ecd6d574b0412cda4d21d90e9a6189bb247"} Nov 25 15:16:12 crc kubenswrapper[4806]: I1125 15:16:12.776226 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-795f4db4bc-9vs9k" podStartSLOduration=5.776209003 podStartE2EDuration="5.776209003s" podCreationTimestamp="2025-11-25 15:16:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:16:12.774117744 +0000 UTC m=+1405.426260165" watchObservedRunningTime="2025-11-25 15:16:12.776209003 +0000 UTC m=+1405.428351414" Nov 25 15:16:13 crc kubenswrapper[4806]: I1125 15:16:13.204831 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-bc4cd6f78-4rzjr" podUID="f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.174:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 15:16:13 crc kubenswrapper[4806]: I1125 15:16:13.772510 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"90d43da5-3940-4611-a464-7347afad3a44","Type":"ContainerStarted","Data":"2ff08984d7df0764702e490afa8a28fd0260df005b2412079fe3bfcde856adde"} Nov 25 15:16:13 crc kubenswrapper[4806]: I1125 15:16:13.773270 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="90d43da5-3940-4611-a464-7347afad3a44" containerName="cinder-api-log" containerID="cri-o://c7440e2a3f5cbc5ee7086d53893d1ecd6d574b0412cda4d21d90e9a6189bb247" gracePeriod=30 Nov 25 15:16:13 crc kubenswrapper[4806]: I1125 15:16:13.773390 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 25 15:16:13 crc kubenswrapper[4806]: I1125 15:16:13.773792 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="90d43da5-3940-4611-a464-7347afad3a44" containerName="cinder-api" containerID="cri-o://2ff08984d7df0764702e490afa8a28fd0260df005b2412079fe3bfcde856adde" gracePeriod=30 Nov 25 15:16:13 crc kubenswrapper[4806]: I1125 15:16:13.773918 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:16:13 crc kubenswrapper[4806]: I1125 15:16:13.777780 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75f75cce-0bb5-4617-8f28-29a95214ce33","Type":"ContainerStarted","Data":"fde047e84d665fe861a31df0efc49aa1e5b441be8237f0bbfdd2bab3a97bfb2c"} Nov 25 15:16:13 crc kubenswrapper[4806]: I1125 15:16:13.802057 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.802033985 podStartE2EDuration="6.802033985s" podCreationTimestamp="2025-11-25 15:16:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:16:13.792256836 +0000 UTC m=+1406.444399257" watchObservedRunningTime="2025-11-25 15:16:13.802033985 +0000 UTC m=+1406.454176396" Nov 25 15:16:14 crc kubenswrapper[4806]: I1125 15:16:14.791177 4806 generic.go:334] "Generic (PLEG): container finished" podID="90d43da5-3940-4611-a464-7347afad3a44" containerID="2ff08984d7df0764702e490afa8a28fd0260df005b2412079fe3bfcde856adde" exitCode=0 Nov 25 15:16:14 crc kubenswrapper[4806]: I1125 15:16:14.791503 4806 generic.go:334] "Generic (PLEG): container finished" podID="90d43da5-3940-4611-a464-7347afad3a44" containerID="c7440e2a3f5cbc5ee7086d53893d1ecd6d574b0412cda4d21d90e9a6189bb247" exitCode=143 Nov 25 15:16:14 crc kubenswrapper[4806]: I1125 15:16:14.791366 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"90d43da5-3940-4611-a464-7347afad3a44","Type":"ContainerDied","Data":"2ff08984d7df0764702e490afa8a28fd0260df005b2412079fe3bfcde856adde"} Nov 25 15:16:14 crc kubenswrapper[4806]: I1125 15:16:14.791539 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"90d43da5-3940-4611-a464-7347afad3a44","Type":"ContainerDied","Data":"c7440e2a3f5cbc5ee7086d53893d1ecd6d574b0412cda4d21d90e9a6189bb247"} Nov 25 15:16:14 crc kubenswrapper[4806]: I1125 15:16:14.793777 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-bc4cd6f78-4rzjr" podUID="f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.174:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 15:16:15 crc kubenswrapper[4806]: I1125 
15:16:15.192706 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 25 15:16:15 crc kubenswrapper[4806]: I1125 15:16:15.316574 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/90d43da5-3940-4611-a464-7347afad3a44-etc-machine-id\") pod \"90d43da5-3940-4611-a464-7347afad3a44\" (UID: \"90d43da5-3940-4611-a464-7347afad3a44\") " Nov 25 15:16:15 crc kubenswrapper[4806]: I1125 15:16:15.316657 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90d43da5-3940-4611-a464-7347afad3a44-combined-ca-bundle\") pod \"90d43da5-3940-4611-a464-7347afad3a44\" (UID: \"90d43da5-3940-4611-a464-7347afad3a44\") " Nov 25 15:16:15 crc kubenswrapper[4806]: I1125 15:16:15.316692 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90d43da5-3940-4611-a464-7347afad3a44-config-data\") pod \"90d43da5-3940-4611-a464-7347afad3a44\" (UID: \"90d43da5-3940-4611-a464-7347afad3a44\") " Nov 25 15:16:15 crc kubenswrapper[4806]: I1125 15:16:15.316775 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90d43da5-3940-4611-a464-7347afad3a44-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "90d43da5-3940-4611-a464-7347afad3a44" (UID: "90d43da5-3940-4611-a464-7347afad3a44"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:16:15 crc kubenswrapper[4806]: I1125 15:16:15.316844 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90d43da5-3940-4611-a464-7347afad3a44-scripts\") pod \"90d43da5-3940-4611-a464-7347afad3a44\" (UID: \"90d43da5-3940-4611-a464-7347afad3a44\") " Nov 25 15:16:15 crc kubenswrapper[4806]: I1125 15:16:15.316928 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkvln\" (UniqueName: \"kubernetes.io/projected/90d43da5-3940-4611-a464-7347afad3a44-kube-api-access-jkvln\") pod \"90d43da5-3940-4611-a464-7347afad3a44\" (UID: \"90d43da5-3940-4611-a464-7347afad3a44\") " Nov 25 15:16:15 crc kubenswrapper[4806]: I1125 15:16:15.316981 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/90d43da5-3940-4611-a464-7347afad3a44-config-data-custom\") pod \"90d43da5-3940-4611-a464-7347afad3a44\" (UID: \"90d43da5-3940-4611-a464-7347afad3a44\") " Nov 25 15:16:15 crc kubenswrapper[4806]: I1125 15:16:15.317039 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90d43da5-3940-4611-a464-7347afad3a44-logs\") pod \"90d43da5-3940-4611-a464-7347afad3a44\" (UID: \"90d43da5-3940-4611-a464-7347afad3a44\") " Nov 25 15:16:15 crc kubenswrapper[4806]: I1125 15:16:15.317894 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90d43da5-3940-4611-a464-7347afad3a44-logs" (OuterVolumeSpecName: "logs") pod "90d43da5-3940-4611-a464-7347afad3a44" (UID: "90d43da5-3940-4611-a464-7347afad3a44"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:16:15 crc kubenswrapper[4806]: I1125 15:16:15.318480 4806 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90d43da5-3940-4611-a464-7347afad3a44-logs\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:15 crc kubenswrapper[4806]: I1125 15:16:15.318499 4806 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/90d43da5-3940-4611-a464-7347afad3a44-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:15 crc kubenswrapper[4806]: I1125 15:16:15.323797 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90d43da5-3940-4611-a464-7347afad3a44-scripts" (OuterVolumeSpecName: "scripts") pod "90d43da5-3940-4611-a464-7347afad3a44" (UID: "90d43da5-3940-4611-a464-7347afad3a44"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:15 crc kubenswrapper[4806]: I1125 15:16:15.324163 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90d43da5-3940-4611-a464-7347afad3a44-kube-api-access-jkvln" (OuterVolumeSpecName: "kube-api-access-jkvln") pod "90d43da5-3940-4611-a464-7347afad3a44" (UID: "90d43da5-3940-4611-a464-7347afad3a44"). InnerVolumeSpecName "kube-api-access-jkvln". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:16:15 crc kubenswrapper[4806]: I1125 15:16:15.337053 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90d43da5-3940-4611-a464-7347afad3a44-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "90d43da5-3940-4611-a464-7347afad3a44" (UID: "90d43da5-3940-4611-a464-7347afad3a44"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:15 crc kubenswrapper[4806]: I1125 15:16:15.348628 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90d43da5-3940-4611-a464-7347afad3a44-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "90d43da5-3940-4611-a464-7347afad3a44" (UID: "90d43da5-3940-4611-a464-7347afad3a44"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:15 crc kubenswrapper[4806]: I1125 15:16:15.380623 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90d43da5-3940-4611-a464-7347afad3a44-config-data" (OuterVolumeSpecName: "config-data") pod "90d43da5-3940-4611-a464-7347afad3a44" (UID: "90d43da5-3940-4611-a464-7347afad3a44"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:15 crc kubenswrapper[4806]: I1125 15:16:15.421569 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90d43da5-3940-4611-a464-7347afad3a44-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:15 crc kubenswrapper[4806]: I1125 15:16:15.421843 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90d43da5-3940-4611-a464-7347afad3a44-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:15 crc kubenswrapper[4806]: I1125 15:16:15.421900 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90d43da5-3940-4611-a464-7347afad3a44-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:15 crc kubenswrapper[4806]: I1125 15:16:15.421955 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkvln\" (UniqueName: \"kubernetes.io/projected/90d43da5-3940-4611-a464-7347afad3a44-kube-api-access-jkvln\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:15 crc kubenswrapper[4806]: I1125 15:16:15.422012 4806 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/90d43da5-3940-4611-a464-7347afad3a44-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:15 crc kubenswrapper[4806]: I1125 15:16:15.804802 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"90d43da5-3940-4611-a464-7347afad3a44","Type":"ContainerDied","Data":"5f67dba3330cb32c963e8d25d264467737c137b5662706ea0733cea589092e5e"} Nov 25 15:16:15 crc kubenswrapper[4806]: I1125 15:16:15.804857 4806 scope.go:117] "RemoveContainer" containerID="2ff08984d7df0764702e490afa8a28fd0260df005b2412079fe3bfcde856adde" Nov 25 15:16:15 crc kubenswrapper[4806]: I1125 15:16:15.805022 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 25 15:16:15 crc kubenswrapper[4806]: I1125 15:16:15.845119 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 25 15:16:15 crc kubenswrapper[4806]: I1125 15:16:15.848390 4806 scope.go:117] "RemoveContainer" containerID="c7440e2a3f5cbc5ee7086d53893d1ecd6d574b0412cda4d21d90e9a6189bb247" Nov 25 15:16:15 crc kubenswrapper[4806]: I1125 15:16:15.861367 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 25 15:16:15 crc kubenswrapper[4806]: I1125 15:16:15.871084 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 25 15:16:15 crc kubenswrapper[4806]: E1125 15:16:15.872564 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90d43da5-3940-4611-a464-7347afad3a44" containerName="cinder-api-log" Nov 25 15:16:15 crc kubenswrapper[4806]: I1125 15:16:15.872587 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="90d43da5-3940-4611-a464-7347afad3a44" containerName="cinder-api-log" Nov 25 15:16:15 crc kubenswrapper[4806]: E1125 15:16:15.872661 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90d43da5-3940-4611-a464-7347afad3a44" containerName="cinder-api" Nov 25 15:16:15 crc kubenswrapper[4806]: I1125 15:16:15.872673 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="90d43da5-3940-4611-a464-7347afad3a44" containerName="cinder-api" Nov 25 15:16:15 crc kubenswrapper[4806]: I1125 15:16:15.872919 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="90d43da5-3940-4611-a464-7347afad3a44" containerName="cinder-api" Nov 25 15:16:15 crc kubenswrapper[4806]: I1125 15:16:15.872941 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="90d43da5-3940-4611-a464-7347afad3a44" containerName="cinder-api-log" Nov 25 15:16:15 crc kubenswrapper[4806]: I1125 15:16:15.874349 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 25 15:16:15 crc kubenswrapper[4806]: I1125 15:16:15.877083 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 25 15:16:15 crc kubenswrapper[4806]: I1125 15:16:15.877593 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 25 15:16:15 crc kubenswrapper[4806]: I1125 15:16:15.877833 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 25 15:16:15 crc kubenswrapper[4806]: I1125 15:16:15.890234 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 25 15:16:15 crc kubenswrapper[4806]: I1125 15:16:15.931873 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d875dfe1-f943-4577-afd4-e301920efac6-scripts\") pod \"cinder-api-0\" (UID: \"d875dfe1-f943-4577-afd4-e301920efac6\") " pod="openstack/cinder-api-0" Nov 25 15:16:15 crc kubenswrapper[4806]: I1125 15:16:15.931927 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d875dfe1-f943-4577-afd4-e301920efac6-logs\") pod \"cinder-api-0\" (UID: \"d875dfe1-f943-4577-afd4-e301920efac6\") " pod="openstack/cinder-api-0" Nov 25 15:16:15 crc kubenswrapper[4806]: I1125 15:16:15.931973 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d875dfe1-f943-4577-afd4-e301920efac6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d875dfe1-f943-4577-afd4-e301920efac6\") " pod="openstack/cinder-api-0" Nov 25 15:16:15 crc kubenswrapper[4806]: I1125 15:16:15.932013 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d875dfe1-f943-4577-afd4-e301920efac6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d875dfe1-f943-4577-afd4-e301920efac6\") " pod="openstack/cinder-api-0" Nov 25 15:16:15 crc kubenswrapper[4806]: I1125 15:16:15.932037 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d875dfe1-f943-4577-afd4-e301920efac6-config-data-custom\") pod \"cinder-api-0\" (UID: \"d875dfe1-f943-4577-afd4-e301920efac6\") " pod="openstack/cinder-api-0" Nov 25 15:16:15 crc kubenswrapper[4806]: I1125 15:16:15.932059 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d875dfe1-f943-4577-afd4-e301920efac6-public-tls-certs\") pod \"cinder-api-0\" (UID: \"d875dfe1-f943-4577-afd4-e301920efac6\") " pod="openstack/cinder-api-0" Nov 25 15:16:15 crc kubenswrapper[4806]: I1125 15:16:15.932081 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjfrg\" (UniqueName: \"kubernetes.io/projected/d875dfe1-f943-4577-afd4-e301920efac6-kube-api-access-sjfrg\") pod \"cinder-api-0\" (UID: \"d875dfe1-f943-4577-afd4-e301920efac6\") " pod="openstack/cinder-api-0" Nov 25 15:16:15 crc kubenswrapper[4806]: I1125 15:16:15.932189 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/d875dfe1-f943-4577-afd4-e301920efac6-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"d875dfe1-f943-4577-afd4-e301920efac6\") " pod="openstack/cinder-api-0" Nov 25 15:16:15 crc kubenswrapper[4806]: I1125 15:16:15.932221 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d875dfe1-f943-4577-afd4-e301920efac6-config-data\") pod \"cinder-api-0\" (UID: \"d875dfe1-f943-4577-afd4-e301920efac6\") " pod="openstack/cinder-api-0" Nov 25 15:16:16 crc kubenswrapper[4806]: I1125 15:16:16.036905 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d875dfe1-f943-4577-afd4-e301920efac6-logs\") pod \"cinder-api-0\" (UID: \"d875dfe1-f943-4577-afd4-e301920efac6\") " pod="openstack/cinder-api-0" Nov 25 15:16:16 crc kubenswrapper[4806]: I1125 15:16:16.036961 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d875dfe1-f943-4577-afd4-e301920efac6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d875dfe1-f943-4577-afd4-e301920efac6\") " pod="openstack/cinder-api-0" Nov 25 15:16:16 crc kubenswrapper[4806]: I1125 15:16:16.036996 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d875dfe1-f943-4577-afd4-e301920efac6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d875dfe1-f943-4577-afd4-e301920efac6\") " pod="openstack/cinder-api-0" Nov 25 15:16:16 crc kubenswrapper[4806]: I1125 15:16:16.037015 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d875dfe1-f943-4577-afd4-e301920efac6-config-data-custom\") pod \"cinder-api-0\" (UID: \"d875dfe1-f943-4577-afd4-e301920efac6\") " pod="openstack/cinder-api-0" Nov 25 15:16:16 crc kubenswrapper[4806]: I1125 15:16:16.037034 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d875dfe1-f943-4577-afd4-e301920efac6-public-tls-certs\") pod \"cinder-api-0\" (UID: \"d875dfe1-f943-4577-afd4-e301920efac6\") " pod="openstack/cinder-api-0" Nov 25 15:16:16 crc kubenswrapper[4806]: I1125 15:16:16.037062 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjfrg\" (UniqueName: \"kubernetes.io/projected/d875dfe1-f943-4577-afd4-e301920efac6-kube-api-access-sjfrg\") pod \"cinder-api-0\" (UID: \"d875dfe1-f943-4577-afd4-e301920efac6\") " pod="openstack/cinder-api-0" Nov 25 15:16:16 crc kubenswrapper[4806]: I1125 15:16:16.037146 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d875dfe1-f943-4577-afd4-e301920efac6-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"d875dfe1-f943-4577-afd4-e301920efac6\") " pod="openstack/cinder-api-0" Nov 25 15:16:16 crc kubenswrapper[4806]: I1125 15:16:16.037169 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d875dfe1-f943-4577-afd4-e301920efac6-config-data\") pod \"cinder-api-0\" (UID: \"d875dfe1-f943-4577-afd4-e301920efac6\") " pod="openstack/cinder-api-0" Nov 25 15:16:16 crc kubenswrapper[4806]: I1125 15:16:16.037249 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/d875dfe1-f943-4577-afd4-e301920efac6-scripts\") pod \"cinder-api-0\" (UID: \"d875dfe1-f943-4577-afd4-e301920efac6\") " pod="openstack/cinder-api-0" Nov 25 15:16:16 crc kubenswrapper[4806]: I1125 15:16:16.037661 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d875dfe1-f943-4577-afd4-e301920efac6-logs\") pod \"cinder-api-0\" (UID: \"d875dfe1-f943-4577-afd4-e301920efac6\") " pod="openstack/cinder-api-0" Nov 25 15:16:16 crc kubenswrapper[4806]: I1125 15:16:16.038696 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d875dfe1-f943-4577-afd4-e301920efac6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d875dfe1-f943-4577-afd4-e301920efac6\") " pod="openstack/cinder-api-0" Nov 25 15:16:16 crc kubenswrapper[4806]: I1125 15:16:16.043210 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d875dfe1-f943-4577-afd4-e301920efac6-scripts\") pod \"cinder-api-0\" (UID: \"d875dfe1-f943-4577-afd4-e301920efac6\") " pod="openstack/cinder-api-0" Nov 25 15:16:16 crc kubenswrapper[4806]: I1125 15:16:16.043745 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d875dfe1-f943-4577-afd4-e301920efac6-config-data-custom\") pod \"cinder-api-0\" (UID: \"d875dfe1-f943-4577-afd4-e301920efac6\") " pod="openstack/cinder-api-0" Nov 25 15:16:16 crc kubenswrapper[4806]: I1125 15:16:16.044129 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d875dfe1-f943-4577-afd4-e301920efac6-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"d875dfe1-f943-4577-afd4-e301920efac6\") " pod="openstack/cinder-api-0" Nov 25 15:16:16 crc kubenswrapper[4806]: I1125 15:16:16.044240 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d875dfe1-f943-4577-afd4-e301920efac6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d875dfe1-f943-4577-afd4-e301920efac6\") " pod="openstack/cinder-api-0" Nov 25 15:16:16 crc kubenswrapper[4806]: I1125 15:16:16.059014 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d875dfe1-f943-4577-afd4-e301920efac6-public-tls-certs\") pod \"cinder-api-0\" (UID: \"d875dfe1-f943-4577-afd4-e301920efac6\") " pod="openstack/cinder-api-0" Nov 25 15:16:16 crc kubenswrapper[4806]: I1125 15:16:16.060853 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d875dfe1-f943-4577-afd4-e301920efac6-config-data\") pod \"cinder-api-0\" (UID: \"d875dfe1-f943-4577-afd4-e301920efac6\") " pod="openstack/cinder-api-0" Nov 25 15:16:16 crc kubenswrapper[4806]: I1125 15:16:16.062727 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjfrg\" (UniqueName: \"kubernetes.io/projected/d875dfe1-f943-4577-afd4-e301920efac6-kube-api-access-sjfrg\") pod \"cinder-api-0\" (UID: \"d875dfe1-f943-4577-afd4-e301920efac6\") " pod="openstack/cinder-api-0" Nov 25 15:16:16 crc kubenswrapper[4806]: I1125 15:16:16.102856 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90d43da5-3940-4611-a464-7347afad3a44" 
path="/var/lib/kubelet/pods/90d43da5-3940-4611-a464-7347afad3a44/volumes" Nov 25 15:16:16 crc kubenswrapper[4806]: I1125 15:16:16.196586 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 25 15:16:16 crc kubenswrapper[4806]: I1125 15:16:16.745026 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 25 15:16:16 crc kubenswrapper[4806]: W1125 15:16:16.750850 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd875dfe1_f943_4577_afd4_e301920efac6.slice/crio-ae0e90ae51d883bc0c9303bc60510c3a25a0f4b224985d65838850d3dff65dff WatchSource:0}: Error finding container ae0e90ae51d883bc0c9303bc60510c3a25a0f4b224985d65838850d3dff65dff: Status 404 returned error can't find the container with id ae0e90ae51d883bc0c9303bc60510c3a25a0f4b224985d65838850d3dff65dff Nov 25 15:16:16 crc kubenswrapper[4806]: I1125 15:16:16.819039 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d875dfe1-f943-4577-afd4-e301920efac6","Type":"ContainerStarted","Data":"ae0e90ae51d883bc0c9303bc60510c3a25a0f4b224985d65838850d3dff65dff"} Nov 25 15:16:17 crc kubenswrapper[4806]: I1125 15:16:17.251788 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 25 15:16:17 crc kubenswrapper[4806]: I1125 15:16:17.251836 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 25 15:16:17 crc kubenswrapper[4806]: I1125 15:16:17.252301 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 25 15:16:17 crc kubenswrapper[4806]: I1125 15:16:17.252343 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 25 15:16:17 crc kubenswrapper[4806]: I1125 15:16:17.335293 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 25 15:16:17 crc kubenswrapper[4806]: I1125 15:16:17.335357 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 25 15:16:17 crc kubenswrapper[4806]: I1125 15:16:17.335376 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 25 15:16:17 crc kubenswrapper[4806]: I1125 15:16:17.335389 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 25 15:16:17 crc kubenswrapper[4806]: I1125 15:16:17.523578 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 25 15:16:17 crc kubenswrapper[4806]: I1125 15:16:17.523730 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 25 15:16:17 crc kubenswrapper[4806]: I1125 15:16:17.526623 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 25 15:16:17 crc kubenswrapper[4806]: I1125 15:16:17.530130 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 25 15:16:17 crc kubenswrapper[4806]: I1125 15:16:17.792564 4806 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack/barbican-api-bc4cd6f78-4rzjr" podUID="f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.174:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 15:16:18 crc kubenswrapper[4806]: I1125 15:16:18.246522 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-bc4cd6f78-4rzjr" podUID="f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.174:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 15:16:18 crc kubenswrapper[4806]: I1125 15:16:18.472646 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-795f4db4bc-9vs9k" Nov 25 15:16:18 crc kubenswrapper[4806]: I1125 15:16:18.558630 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-klt6q"] Nov 25 15:16:18 crc kubenswrapper[4806]: I1125 15:16:18.558927 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-klt6q" podUID="f2488169-196d-4613-aa80-ab2e7a49bfa9" containerName="dnsmasq-dns" containerID="cri-o://1b8587ca085823ad4e934da0be772c6b21961e0f5d192551cd2c84f53e600fdc" gracePeriod=10 Nov 25 15:16:18 crc kubenswrapper[4806]: I1125 15:16:18.793728 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-bc4cd6f78-4rzjr" podUID="f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.174:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 15:16:18 crc kubenswrapper[4806]: I1125 15:16:18.843308 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d875dfe1-f943-4577-afd4-e301920efac6","Type":"ContainerStarted","Data":"a9b33cb7fe6f449abe20648d8b1e1b24852438a447a25cc6240fec29bf31b95d"} Nov 25 15:16:19 crc kubenswrapper[4806]: I1125 15:16:19.835610 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-bc4cd6f78-4rzjr" podUID="f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.174:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 15:16:19 crc kubenswrapper[4806]: I1125 15:16:19.858886 4806 generic.go:334] "Generic (PLEG): container finished" podID="f2488169-196d-4613-aa80-ab2e7a49bfa9" containerID="1b8587ca085823ad4e934da0be772c6b21961e0f5d192551cd2c84f53e600fdc" exitCode=0 Nov 25 15:16:19 crc kubenswrapper[4806]: I1125 15:16:19.858926 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-klt6q" event={"ID":"f2488169-196d-4613-aa80-ab2e7a49bfa9","Type":"ContainerDied","Data":"1b8587ca085823ad4e934da0be772c6b21961e0f5d192551cd2c84f53e600fdc"} Nov 25 15:16:19 crc kubenswrapper[4806]: I1125 15:16:19.948675 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-bc4cd6f78-4rzjr" Nov 25 15:16:19 crc kubenswrapper[4806]: I1125 15:16:19.948796 4806 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 15:16:20 crc kubenswrapper[4806]: I1125 15:16:20.157807 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5b5fbf57f8-jxhqp" Nov 25 15:16:20 crc 
kubenswrapper[4806]: I1125 15:16:20.714522 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5b5fbf57f8-jxhqp" podUID="cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.175:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 15:16:20 crc kubenswrapper[4806]: I1125 15:16:20.714525 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5b5fbf57f8-jxhqp" podUID="cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.175:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 15:16:20 crc kubenswrapper[4806]: I1125 15:16:20.867406 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-bc4cd6f78-4rzjr" Nov 25 15:16:20 crc kubenswrapper[4806]: I1125 15:16:20.874393 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5b5fbf57f8-jxhqp" Nov 25 15:16:20 crc kubenswrapper[4806]: I1125 15:16:20.970550 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-bc4cd6f78-4rzjr"] Nov 25 15:16:21 crc kubenswrapper[4806]: I1125 15:16:21.193604 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-klt6q" Nov 25 15:16:21 crc kubenswrapper[4806]: I1125 15:16:21.355711 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2488169-196d-4613-aa80-ab2e7a49bfa9-dns-svc\") pod \"f2488169-196d-4613-aa80-ab2e7a49bfa9\" (UID: \"f2488169-196d-4613-aa80-ab2e7a49bfa9\") " Nov 25 15:16:21 crc kubenswrapper[4806]: I1125 15:16:21.356022 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2488169-196d-4613-aa80-ab2e7a49bfa9-config\") pod \"f2488169-196d-4613-aa80-ab2e7a49bfa9\" (UID: \"f2488169-196d-4613-aa80-ab2e7a49bfa9\") " Nov 25 15:16:21 crc kubenswrapper[4806]: I1125 15:16:21.356188 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2488169-196d-4613-aa80-ab2e7a49bfa9-ovsdbserver-nb\") pod \"f2488169-196d-4613-aa80-ab2e7a49bfa9\" (UID: \"f2488169-196d-4613-aa80-ab2e7a49bfa9\") " Nov 25 15:16:21 crc kubenswrapper[4806]: I1125 15:16:21.356267 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f2488169-196d-4613-aa80-ab2e7a49bfa9-dns-swift-storage-0\") pod \"f2488169-196d-4613-aa80-ab2e7a49bfa9\" (UID: \"f2488169-196d-4613-aa80-ab2e7a49bfa9\") " Nov 25 15:16:21 crc kubenswrapper[4806]: I1125 15:16:21.356337 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkw6x\" (UniqueName: \"kubernetes.io/projected/f2488169-196d-4613-aa80-ab2e7a49bfa9-kube-api-access-hkw6x\") pod \"f2488169-196d-4613-aa80-ab2e7a49bfa9\" (UID: \"f2488169-196d-4613-aa80-ab2e7a49bfa9\") " Nov 25 15:16:21 crc kubenswrapper[4806]: I1125 15:16:21.356757 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2488169-196d-4613-aa80-ab2e7a49bfa9-ovsdbserver-sb\") pod 
\"f2488169-196d-4613-aa80-ab2e7a49bfa9\" (UID: \"f2488169-196d-4613-aa80-ab2e7a49bfa9\") " Nov 25 15:16:21 crc kubenswrapper[4806]: I1125 15:16:21.411875 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2488169-196d-4613-aa80-ab2e7a49bfa9-kube-api-access-hkw6x" (OuterVolumeSpecName: "kube-api-access-hkw6x") pod "f2488169-196d-4613-aa80-ab2e7a49bfa9" (UID: "f2488169-196d-4613-aa80-ab2e7a49bfa9"). InnerVolumeSpecName "kube-api-access-hkw6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:16:21 crc kubenswrapper[4806]: I1125 15:16:21.458851 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkw6x\" (UniqueName: \"kubernetes.io/projected/f2488169-196d-4613-aa80-ab2e7a49bfa9-kube-api-access-hkw6x\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:21 crc kubenswrapper[4806]: I1125 15:16:21.491704 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2488169-196d-4613-aa80-ab2e7a49bfa9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f2488169-196d-4613-aa80-ab2e7a49bfa9" (UID: "f2488169-196d-4613-aa80-ab2e7a49bfa9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:16:21 crc kubenswrapper[4806]: I1125 15:16:21.494368 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2488169-196d-4613-aa80-ab2e7a49bfa9-config" (OuterVolumeSpecName: "config") pod "f2488169-196d-4613-aa80-ab2e7a49bfa9" (UID: "f2488169-196d-4613-aa80-ab2e7a49bfa9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:16:21 crc kubenswrapper[4806]: I1125 15:16:21.507885 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2488169-196d-4613-aa80-ab2e7a49bfa9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f2488169-196d-4613-aa80-ab2e7a49bfa9" (UID: "f2488169-196d-4613-aa80-ab2e7a49bfa9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:16:21 crc kubenswrapper[4806]: I1125 15:16:21.510074 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2488169-196d-4613-aa80-ab2e7a49bfa9-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f2488169-196d-4613-aa80-ab2e7a49bfa9" (UID: "f2488169-196d-4613-aa80-ab2e7a49bfa9"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:16:21 crc kubenswrapper[4806]: I1125 15:16:21.531850 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6c84b48b46-vlp89" Nov 25 15:16:21 crc kubenswrapper[4806]: I1125 15:16:21.543742 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2488169-196d-4613-aa80-ab2e7a49bfa9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f2488169-196d-4613-aa80-ab2e7a49bfa9" (UID: "f2488169-196d-4613-aa80-ab2e7a49bfa9"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:16:21 crc kubenswrapper[4806]: I1125 15:16:21.561839 4806 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f2488169-196d-4613-aa80-ab2e7a49bfa9-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:21 crc kubenswrapper[4806]: I1125 15:16:21.561883 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2488169-196d-4613-aa80-ab2e7a49bfa9-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:21 crc kubenswrapper[4806]: I1125 15:16:21.561905 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f2488169-196d-4613-aa80-ab2e7a49bfa9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:21 crc kubenswrapper[4806]: I1125 15:16:21.561919 4806 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f2488169-196d-4613-aa80-ab2e7a49bfa9-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:21 crc kubenswrapper[4806]: I1125 15:16:21.561932 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f2488169-196d-4613-aa80-ab2e7a49bfa9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:21 crc kubenswrapper[4806]: I1125 15:16:21.885906 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75f75cce-0bb5-4617-8f28-29a95214ce33","Type":"ContainerStarted","Data":"03e8b66fdf8d9d452e1e616471c23a4846207170a6bb46c424d699c2d94f5406"} Nov 25 15:16:21 crc kubenswrapper[4806]: I1125 15:16:21.889475 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-klt6q" event={"ID":"f2488169-196d-4613-aa80-ab2e7a49bfa9","Type":"ContainerDied","Data":"8fd30357f368712095328ee6d80738fae74ee1ddcdd92120dac4a4727d1f83c9"} Nov 25 15:16:21 crc kubenswrapper[4806]: I1125 15:16:21.889523 4806 scope.go:117] "RemoveContainer" containerID="1b8587ca085823ad4e934da0be772c6b21961e0f5d192551cd2c84f53e600fdc" Nov 25 15:16:21 crc kubenswrapper[4806]: I1125 15:16:21.889637 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-klt6q" Nov 25 15:16:21 crc kubenswrapper[4806]: I1125 15:16:21.934273 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-bc4cd6f78-4rzjr" podUID="f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9" containerName="barbican-api-log" containerID="cri-o://b1b70a597ba84d4e4a1ca0c891dfc9390ed5a69ef5642c86be36e3ce9f73ad6d" gracePeriod=30 Nov 25 15:16:21 crc kubenswrapper[4806]: I1125 15:16:21.935253 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-77qk4" event={"ID":"19d636cf-e82d-48c3-82db-321f0505c5ab","Type":"ContainerStarted","Data":"29231720f0d17eff09962941449e5463bbb10ebaa7ab48031b8085273f4d7515"} Nov 25 15:16:21 crc kubenswrapper[4806]: I1125 15:16:21.935816 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-bc4cd6f78-4rzjr" podUID="f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9" containerName="barbican-api" containerID="cri-o://630a12fc3addd6852a2a6a136c9247c411b8f766277d959736458ba87a3d8f2d" gracePeriod=30 Nov 25 15:16:21 crc kubenswrapper[4806]: I1125 15:16:21.958460 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-klt6q"] Nov 25 15:16:21 crc kubenswrapper[4806]: I1125 15:16:21.987354 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-klt6q"] Nov 25 15:16:22 crc kubenswrapper[4806]: I1125 15:16:22.019059 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-77qk4" podStartSLOduration=3.161629586 podStartE2EDuration="43.019039308s" podCreationTimestamp="2025-11-25 15:15:39 +0000 UTC" firstStartedPulling="2025-11-25 15:15:41.535065179 +0000 UTC m=+1374.187207590" lastFinishedPulling="2025-11-25 15:16:21.392474891 +0000 UTC m=+1414.044617312" observedRunningTime="2025-11-25 15:16:21.962647042 +0000 UTC m=+1414.614789463" watchObservedRunningTime="2025-11-25 15:16:22.019039308 +0000 UTC m=+1414.671181719" Nov 25 15:16:22 crc kubenswrapper[4806]: I1125 15:16:22.118261 4806 scope.go:117] "RemoveContainer" containerID="54dc37b5a2461fa63e90a1be9c0a479604e3351f13feee0880f676c2b6e42bdb" Nov 25 15:16:22 crc kubenswrapper[4806]: I1125 15:16:22.191611 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2488169-196d-4613-aa80-ab2e7a49bfa9" path="/var/lib/kubelet/pods/f2488169-196d-4613-aa80-ab2e7a49bfa9/volumes" Nov 25 15:16:22 crc kubenswrapper[4806]: I1125 15:16:22.280810 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6c84b48b46-vlp89" Nov 25 15:16:22 crc kubenswrapper[4806]: E1125 15:16:22.398245 4806 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3d0aed9_5cf7_4eb1_9df2_1c2b42a526e9.slice/crio-b1b70a597ba84d4e4a1ca0c891dfc9390ed5a69ef5642c86be36e3ce9f73ad6d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2488169_196d_4613_aa80_ab2e7a49bfa9.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2488169_196d_4613_aa80_ab2e7a49bfa9.slice/crio-8fd30357f368712095328ee6d80738fae74ee1ddcdd92120dac4a4727d1f83c9\": RecentStats: unable to find data in memory cache]" Nov 25 15:16:22 crc kubenswrapper[4806]: I1125 
15:16:22.494679 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-8486684b84-snnmc" Nov 25 15:16:22 crc kubenswrapper[4806]: I1125 15:16:22.976141 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-fc7bb5d48-xzkml" event={"ID":"322cf975-d195-44f0-b652-909080e6c2f2","Type":"ContainerStarted","Data":"cb2fe2c5ded2a195d33238bcc15925f42e3db15f9a201e1bcfdeb47b35efca39"} Nov 25 15:16:22 crc kubenswrapper[4806]: I1125 15:16:22.993043 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d875dfe1-f943-4577-afd4-e301920efac6","Type":"ContainerStarted","Data":"6eae600083e07a513a3015641e49e182f7d2b3bebae5da56293bbdba877eeeb6"} Nov 25 15:16:22 crc kubenswrapper[4806]: I1125 15:16:22.994304 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 25 15:16:23 crc kubenswrapper[4806]: I1125 15:16:23.011669 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f1a35d44-1052-4c49-8bc7-c0cb3b038efd","Type":"ContainerStarted","Data":"c08ba412b5d4d33ac6ee7c89d112c6de84041ad33172d269b029b4c8fd2bd177"} Nov 25 15:16:23 crc kubenswrapper[4806]: I1125 15:16:23.015070 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=8.01505425 podStartE2EDuration="8.01505425s" podCreationTimestamp="2025-11-25 15:16:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:16:23.011940952 +0000 UTC m=+1415.664083383" watchObservedRunningTime="2025-11-25 15:16:23.01505425 +0000 UTC m=+1415.667196661" Nov 25 15:16:23 crc kubenswrapper[4806]: I1125 15:16:23.030669 4806 generic.go:334] "Generic (PLEG): container finished" podID="f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9" containerID="b1b70a597ba84d4e4a1ca0c891dfc9390ed5a69ef5642c86be36e3ce9f73ad6d" exitCode=143 Nov 25 15:16:23 crc kubenswrapper[4806]: I1125 15:16:23.030786 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-bc4cd6f78-4rzjr" event={"ID":"f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9","Type":"ContainerDied","Data":"b1b70a597ba84d4e4a1ca0c891dfc9390ed5a69ef5642c86be36e3ce9f73ad6d"} Nov 25 15:16:23 crc kubenswrapper[4806]: I1125 15:16:23.056906 4806 generic.go:334] "Generic (PLEG): container finished" podID="11aeb498-3614-4aac-a381-9bf0392cf5dc" containerID="75770c80babeeaf1288bbb487b06acbdab84838b6b68416b9d71444427565ed5" exitCode=0 Nov 25 15:16:23 crc kubenswrapper[4806]: I1125 15:16:23.057517 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-2nbxh" event={"ID":"11aeb498-3614-4aac-a381-9bf0392cf5dc","Type":"ContainerDied","Data":"75770c80babeeaf1288bbb487b06acbdab84838b6b68416b9d71444427565ed5"} Nov 25 15:16:24 crc kubenswrapper[4806]: I1125 15:16:24.088420 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f1a35d44-1052-4c49-8bc7-c0cb3b038efd","Type":"ContainerStarted","Data":"c1e4c54e37651b2c1357e47818fd8913f1f44f7dcb8d652d14ffb66ea69f813f"} Nov 25 15:16:24 crc kubenswrapper[4806]: I1125 15:16:24.113381 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-fc7bb5d48-xzkml" event={"ID":"322cf975-d195-44f0-b652-909080e6c2f2","Type":"ContainerStarted","Data":"87b5acb49e9708ffd4a6913836f9fe972c747a9b4042a6cad4d25f579501ca56"}
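The exitCode=143 for barbican-api-log above is the counterpart of the "Killing container with a grace period" entries at 15:16:21: the runtime delivers SIGTERM, and a process that dies from the signal is reported as 128+15=143, while dnsmasq-dns exited 0 at 15:16:19 because it shut down cleanly inside its 10-second grace period. A Linux-only sketch of the 128+signal convention (uses a sleep binary from PATH; not CRI-O code):

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func main() {
	cmd := exec.Command("sleep", "30")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	time.Sleep(100 * time.Millisecond)
	cmd.Process.Signal(syscall.SIGTERM) // what the runtime sends at the start of the grace period
	cmd.Wait()                          // the child dies from the signal, not a normal exit

	if ws, ok := cmd.ProcessState.Sys().(syscall.WaitStatus); ok && ws.Signaled() {
		fmt.Println("exitCode:", 128+int(ws.Signal())) // exitCode: 143
	}
}
```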
Nov 25 15:16:24 crc kubenswrapper[4806]: I1125 15:16:24.120003 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75f75cce-0bb5-4617-8f28-29a95214ce33","Type":"ContainerStarted","Data":"41056375e94d63baab11e0d758ce2ed64f7dcbea88b2ce4184f26769e583997e"} Nov 25 15:16:24 crc kubenswrapper[4806]: I1125 15:16:24.128516 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.47068529 podStartE2EDuration="17.128497697s" podCreationTimestamp="2025-11-25 15:16:07 +0000 UTC" firstStartedPulling="2025-11-25 15:16:09.630561309 +0000 UTC m=+1402.282703720" lastFinishedPulling="2025-11-25 15:16:21.288373726 +0000 UTC m=+1413.940516127" observedRunningTime="2025-11-25 15:16:24.121817887 +0000 UTC m=+1416.773960308" watchObservedRunningTime="2025-11-25 15:16:24.128497697 +0000 UTC m=+1416.780640108" Nov 25 15:16:24 crc kubenswrapper[4806]: I1125 15:16:24.175195 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-fc7bb5d48-xzkml" podStartSLOduration=4.656724968 podStartE2EDuration="37.175176437s" podCreationTimestamp="2025-11-25 15:15:47 +0000 UTC" firstStartedPulling="2025-11-25 15:15:49.289345483 +0000 UTC m=+1381.941487894" lastFinishedPulling="2025-11-25 15:16:21.807796952 +0000 UTC m=+1414.459939363" observedRunningTime="2025-11-25 15:16:24.167261362 +0000 UTC m=+1416.819403773" watchObservedRunningTime="2025-11-25 15:16:24.175176437 +0000 UTC m=+1416.827318848" Nov 25 15:16:24 crc kubenswrapper[4806]: I1125 15:16:24.227182 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 25 15:16:24 crc kubenswrapper[4806]: E1125 15:16:24.227680 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2488169-196d-4613-aa80-ab2e7a49bfa9" containerName="dnsmasq-dns" Nov 25 15:16:24 crc kubenswrapper[4806]: I1125 15:16:24.227709 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2488169-196d-4613-aa80-ab2e7a49bfa9" containerName="dnsmasq-dns" Nov 25 15:16:24 crc kubenswrapper[4806]: E1125 15:16:24.227746 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2488169-196d-4613-aa80-ab2e7a49bfa9" containerName="init" Nov 25 15:16:24 crc kubenswrapper[4806]: I1125 15:16:24.227764 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2488169-196d-4613-aa80-ab2e7a49bfa9" containerName="init" Nov 25 15:16:24 crc kubenswrapper[4806]: I1125 15:16:24.227975 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2488169-196d-4613-aa80-ab2e7a49bfa9" containerName="dnsmasq-dns"
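The startup-latency entries above are internally consistent: podStartE2EDuration is the time from podCreationTimestamp to the observed running state, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling − firstStartedPulling). Entries whose pull timestamps are the zero value "0001-01-01 00:00:00 +0000 UTC", such as cinder-api-0 at 15:16:23, needed no pull, so both durations match. Checking cinder-scheduler-0 with the monotonic (m=+...) readings from its own entry:

```go
package main

import "fmt"

func main() {
	// Values copied from the cinder-scheduler-0 entry above.
	e2e := 17.128497697         // podStartE2EDuration, seconds
	firstPull := 1402.282703720 // firstStartedPulling, m=+ reading
	lastPull := 1413.940516127  // lastFinishedPulling, m=+ reading

	// E2E duration minus the image-pull window recovers the SLO duration.
	fmt.Printf("%.9f\n", e2e-(lastPull-firstPull)) // 5.470685290, matching podStartSLOduration=5.47068529
}
```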
Need to start a new one" pod="openstack/openstackclient" Nov 25 15:16:24 crc kubenswrapper[4806]: I1125 15:16:24.236150 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-ql5mb" Nov 25 15:16:24 crc kubenswrapper[4806]: I1125 15:16:24.236195 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 25 15:16:24 crc kubenswrapper[4806]: I1125 15:16:24.236342 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 25 15:16:24 crc kubenswrapper[4806]: I1125 15:16:24.254373 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 25 15:16:24 crc kubenswrapper[4806]: I1125 15:16:24.380096 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e62db5f-8827-474f-9dc5-654aaa347996-combined-ca-bundle\") pod \"openstackclient\" (UID: \"3e62db5f-8827-474f-9dc5-654aaa347996\") " pod="openstack/openstackclient" Nov 25 15:16:24 crc kubenswrapper[4806]: I1125 15:16:24.380424 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcf7s\" (UniqueName: \"kubernetes.io/projected/3e62db5f-8827-474f-9dc5-654aaa347996-kube-api-access-fcf7s\") pod \"openstackclient\" (UID: \"3e62db5f-8827-474f-9dc5-654aaa347996\") " pod="openstack/openstackclient" Nov 25 15:16:24 crc kubenswrapper[4806]: I1125 15:16:24.380477 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3e62db5f-8827-474f-9dc5-654aaa347996-openstack-config-secret\") pod \"openstackclient\" (UID: \"3e62db5f-8827-474f-9dc5-654aaa347996\") " pod="openstack/openstackclient" Nov 25 15:16:24 crc kubenswrapper[4806]: I1125 15:16:24.380516 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3e62db5f-8827-474f-9dc5-654aaa347996-openstack-config\") pod \"openstackclient\" (UID: \"3e62db5f-8827-474f-9dc5-654aaa347996\") " pod="openstack/openstackclient" Nov 25 15:16:24 crc kubenswrapper[4806]: I1125 15:16:24.482516 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3e62db5f-8827-474f-9dc5-654aaa347996-openstack-config\") pod \"openstackclient\" (UID: \"3e62db5f-8827-474f-9dc5-654aaa347996\") " pod="openstack/openstackclient" Nov 25 15:16:24 crc kubenswrapper[4806]: I1125 15:16:24.482734 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e62db5f-8827-474f-9dc5-654aaa347996-combined-ca-bundle\") pod \"openstackclient\" (UID: \"3e62db5f-8827-474f-9dc5-654aaa347996\") " pod="openstack/openstackclient" Nov 25 15:16:24 crc kubenswrapper[4806]: I1125 15:16:24.482775 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcf7s\" (UniqueName: \"kubernetes.io/projected/3e62db5f-8827-474f-9dc5-654aaa347996-kube-api-access-fcf7s\") pod \"openstackclient\" (UID: \"3e62db5f-8827-474f-9dc5-654aaa347996\") " pod="openstack/openstackclient" Nov 25 15:16:24 crc kubenswrapper[4806]: I1125 15:16:24.482830 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3e62db5f-8827-474f-9dc5-654aaa347996-openstack-config-secret\") pod \"openstackclient\" (UID: \"3e62db5f-8827-474f-9dc5-654aaa347996\") " pod="openstack/openstackclient" Nov 25 15:16:24 crc kubenswrapper[4806]: I1125 15:16:24.483433 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/3e62db5f-8827-474f-9dc5-654aaa347996-openstack-config\") pod \"openstackclient\" (UID: \"3e62db5f-8827-474f-9dc5-654aaa347996\") " pod="openstack/openstackclient" Nov 25 15:16:24 crc kubenswrapper[4806]: I1125 15:16:24.489457 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e62db5f-8827-474f-9dc5-654aaa347996-combined-ca-bundle\") pod \"openstackclient\" (UID: \"3e62db5f-8827-474f-9dc5-654aaa347996\") " pod="openstack/openstackclient" Nov 25 15:16:24 crc kubenswrapper[4806]: I1125 15:16:24.498763 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/3e62db5f-8827-474f-9dc5-654aaa347996-openstack-config-secret\") pod \"openstackclient\" (UID: \"3e62db5f-8827-474f-9dc5-654aaa347996\") " pod="openstack/openstackclient" Nov 25 15:16:24 crc kubenswrapper[4806]: I1125 15:16:24.535099 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcf7s\" (UniqueName: \"kubernetes.io/projected/3e62db5f-8827-474f-9dc5-654aaa347996-kube-api-access-fcf7s\") pod \"openstackclient\" (UID: \"3e62db5f-8827-474f-9dc5-654aaa347996\") " pod="openstack/openstackclient" Nov 25 15:16:24 crc kubenswrapper[4806]: I1125 15:16:24.575827 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 25 15:16:24 crc kubenswrapper[4806]: I1125 15:16:24.805100 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-2nbxh" Nov 25 15:16:24 crc kubenswrapper[4806]: I1125 15:16:24.915176 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11aeb498-3614-4aac-a381-9bf0392cf5dc-combined-ca-bundle\") pod \"11aeb498-3614-4aac-a381-9bf0392cf5dc\" (UID: \"11aeb498-3614-4aac-a381-9bf0392cf5dc\") " Nov 25 15:16:24 crc kubenswrapper[4806]: I1125 15:16:24.915281 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dj58b\" (UniqueName: \"kubernetes.io/projected/11aeb498-3614-4aac-a381-9bf0392cf5dc-kube-api-access-dj58b\") pod \"11aeb498-3614-4aac-a381-9bf0392cf5dc\" (UID: \"11aeb498-3614-4aac-a381-9bf0392cf5dc\") " Nov 25 15:16:24 crc kubenswrapper[4806]: I1125 15:16:24.915471 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/11aeb498-3614-4aac-a381-9bf0392cf5dc-config\") pod \"11aeb498-3614-4aac-a381-9bf0392cf5dc\" (UID: \"11aeb498-3614-4aac-a381-9bf0392cf5dc\") " Nov 25 15:16:24 crc kubenswrapper[4806]: I1125 15:16:24.923134 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11aeb498-3614-4aac-a381-9bf0392cf5dc-kube-api-access-dj58b" (OuterVolumeSpecName: "kube-api-access-dj58b") pod "11aeb498-3614-4aac-a381-9bf0392cf5dc" (UID: "11aeb498-3614-4aac-a381-9bf0392cf5dc"). InnerVolumeSpecName "kube-api-access-dj58b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:16:24 crc kubenswrapper[4806]: I1125 15:16:24.969916 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11aeb498-3614-4aac-a381-9bf0392cf5dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "11aeb498-3614-4aac-a381-9bf0392cf5dc" (UID: "11aeb498-3614-4aac-a381-9bf0392cf5dc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:24 crc kubenswrapper[4806]: I1125 15:16:24.989512 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11aeb498-3614-4aac-a381-9bf0392cf5dc-config" (OuterVolumeSpecName: "config") pod "11aeb498-3614-4aac-a381-9bf0392cf5dc" (UID: "11aeb498-3614-4aac-a381-9bf0392cf5dc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.019197 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/11aeb498-3614-4aac-a381-9bf0392cf5dc-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.019234 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11aeb498-3614-4aac-a381-9bf0392cf5dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.019249 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dj58b\" (UniqueName: \"kubernetes.io/projected/11aeb498-3614-4aac-a381-9bf0392cf5dc-kube-api-access-dj58b\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.178221 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.195114 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75f75cce-0bb5-4617-8f28-29a95214ce33","Type":"ContainerStarted","Data":"ca0c5fc3f594273c5fb2061ac15bf76d4f4205f801d64dbef1312cfff9416555"} Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.210857 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-2nbxh" event={"ID":"11aeb498-3614-4aac-a381-9bf0392cf5dc","Type":"ContainerDied","Data":"32c7c20aa28fc9ac181486c9b4208af5c2d88463e98816af4b283b5a9ce19b53"} Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.210925 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32c7c20aa28fc9ac181486c9b4208af5c2d88463e98816af4b283b5a9ce19b53" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.210998 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-2nbxh" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.363383 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-lftzq"] Nov 25 15:16:25 crc kubenswrapper[4806]: E1125 15:16:25.363835 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11aeb498-3614-4aac-a381-9bf0392cf5dc" containerName="neutron-db-sync" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.363848 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="11aeb498-3614-4aac-a381-9bf0392cf5dc" containerName="neutron-db-sync" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.364044 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="11aeb498-3614-4aac-a381-9bf0392cf5dc" containerName="neutron-db-sync" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.378556 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-lftzq" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.380427 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-lftzq"] Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.464093 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-777b956f44-6v6r5"] Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.472691 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-777b956f44-6v6r5" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.482606 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-777b956f44-6v6r5"] Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.482828 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.483008 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.483069 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.486342 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-6cfdz" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.534807 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05f719ae-33a1-44c1-9f80-2d7f644e34c2-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-lftzq\" (UID: \"05f719ae-33a1-44c1-9f80-2d7f644e34c2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lftzq" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.534876 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05f719ae-33a1-44c1-9f80-2d7f644e34c2-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-lftzq\" (UID: \"05f719ae-33a1-44c1-9f80-2d7f644e34c2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lftzq" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.534922 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05f719ae-33a1-44c1-9f80-2d7f644e34c2-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-lftzq\" (UID: \"05f719ae-33a1-44c1-9f80-2d7f644e34c2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lftzq" Nov 25 
15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.534943 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/05f719ae-33a1-44c1-9f80-2d7f644e34c2-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-lftzq\" (UID: \"05f719ae-33a1-44c1-9f80-2d7f644e34c2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lftzq" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.535017 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05f719ae-33a1-44c1-9f80-2d7f644e34c2-config\") pod \"dnsmasq-dns-5c9776ccc5-lftzq\" (UID: \"05f719ae-33a1-44c1-9f80-2d7f644e34c2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lftzq" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.535061 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pfjp\" (UniqueName: \"kubernetes.io/projected/05f719ae-33a1-44c1-9f80-2d7f644e34c2-kube-api-access-2pfjp\") pod \"dnsmasq-dns-5c9776ccc5-lftzq\" (UID: \"05f719ae-33a1-44c1-9f80-2d7f644e34c2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lftzq" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.637229 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05f719ae-33a1-44c1-9f80-2d7f644e34c2-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-lftzq\" (UID: \"05f719ae-33a1-44c1-9f80-2d7f644e34c2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lftzq" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.637297 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b8t9\" (UniqueName: \"kubernetes.io/projected/23ba80fd-113a-4a97-bca6-2348a1aa4917-kube-api-access-8b8t9\") pod \"neutron-777b956f44-6v6r5\" (UID: \"23ba80fd-113a-4a97-bca6-2348a1aa4917\") " pod="openstack/neutron-777b956f44-6v6r5" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.637364 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/23ba80fd-113a-4a97-bca6-2348a1aa4917-httpd-config\") pod \"neutron-777b956f44-6v6r5\" (UID: \"23ba80fd-113a-4a97-bca6-2348a1aa4917\") " pod="openstack/neutron-777b956f44-6v6r5" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.637387 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05f719ae-33a1-44c1-9f80-2d7f644e34c2-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-lftzq\" (UID: \"05f719ae-33a1-44c1-9f80-2d7f644e34c2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lftzq" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.637420 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/23ba80fd-113a-4a97-bca6-2348a1aa4917-ovndb-tls-certs\") pod \"neutron-777b956f44-6v6r5\" (UID: \"23ba80fd-113a-4a97-bca6-2348a1aa4917\") " pod="openstack/neutron-777b956f44-6v6r5" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.637439 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/05f719ae-33a1-44c1-9f80-2d7f644e34c2-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-lftzq\" (UID: 
\"05f719ae-33a1-44c1-9f80-2d7f644e34c2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lftzq" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.637515 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/23ba80fd-113a-4a97-bca6-2348a1aa4917-config\") pod \"neutron-777b956f44-6v6r5\" (UID: \"23ba80fd-113a-4a97-bca6-2348a1aa4917\") " pod="openstack/neutron-777b956f44-6v6r5" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.637548 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23ba80fd-113a-4a97-bca6-2348a1aa4917-combined-ca-bundle\") pod \"neutron-777b956f44-6v6r5\" (UID: \"23ba80fd-113a-4a97-bca6-2348a1aa4917\") " pod="openstack/neutron-777b956f44-6v6r5" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.637579 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05f719ae-33a1-44c1-9f80-2d7f644e34c2-config\") pod \"dnsmasq-dns-5c9776ccc5-lftzq\" (UID: \"05f719ae-33a1-44c1-9f80-2d7f644e34c2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lftzq" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.637627 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pfjp\" (UniqueName: \"kubernetes.io/projected/05f719ae-33a1-44c1-9f80-2d7f644e34c2-kube-api-access-2pfjp\") pod \"dnsmasq-dns-5c9776ccc5-lftzq\" (UID: \"05f719ae-33a1-44c1-9f80-2d7f644e34c2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lftzq" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.637680 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05f719ae-33a1-44c1-9f80-2d7f644e34c2-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-lftzq\" (UID: \"05f719ae-33a1-44c1-9f80-2d7f644e34c2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lftzq" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.638272 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05f719ae-33a1-44c1-9f80-2d7f644e34c2-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-lftzq\" (UID: \"05f719ae-33a1-44c1-9f80-2d7f644e34c2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lftzq" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.638346 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05f719ae-33a1-44c1-9f80-2d7f644e34c2-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-lftzq\" (UID: \"05f719ae-33a1-44c1-9f80-2d7f644e34c2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lftzq" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.638948 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05f719ae-33a1-44c1-9f80-2d7f644e34c2-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-lftzq\" (UID: \"05f719ae-33a1-44c1-9f80-2d7f644e34c2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lftzq" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.639339 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05f719ae-33a1-44c1-9f80-2d7f644e34c2-config\") pod \"dnsmasq-dns-5c9776ccc5-lftzq\" (UID: \"05f719ae-33a1-44c1-9f80-2d7f644e34c2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lftzq" Nov 25 15:16:25 crc 
Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.639454 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-bc4cd6f78-4rzjr" podUID="f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.174:9311/healthcheck\": read tcp 10.217.0.2:53746->10.217.0.174:9311: read: connection reset by peer" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.639466 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-bc4cd6f78-4rzjr" podUID="f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.174:9311/healthcheck\": read tcp 10.217.0.2:53740->10.217.0.174:9311: read: connection reset by peer" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.639695 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/05f719ae-33a1-44c1-9f80-2d7f644e34c2-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-lftzq\" (UID: \"05f719ae-33a1-44c1-9f80-2d7f644e34c2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lftzq" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.664675 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pfjp\" (UniqueName: \"kubernetes.io/projected/05f719ae-33a1-44c1-9f80-2d7f644e34c2-kube-api-access-2pfjp\") pod \"dnsmasq-dns-5c9776ccc5-lftzq\" (UID: \"05f719ae-33a1-44c1-9f80-2d7f644e34c2\") " pod="openstack/dnsmasq-dns-5c9776ccc5-lftzq" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.739028 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b8t9\" (UniqueName: \"kubernetes.io/projected/23ba80fd-113a-4a97-bca6-2348a1aa4917-kube-api-access-8b8t9\") pod \"neutron-777b956f44-6v6r5\" (UID: \"23ba80fd-113a-4a97-bca6-2348a1aa4917\") " pod="openstack/neutron-777b956f44-6v6r5" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.739093 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/23ba80fd-113a-4a97-bca6-2348a1aa4917-httpd-config\") pod \"neutron-777b956f44-6v6r5\" (UID: \"23ba80fd-113a-4a97-bca6-2348a1aa4917\") " pod="openstack/neutron-777b956f44-6v6r5" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.739117 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/23ba80fd-113a-4a97-bca6-2348a1aa4917-ovndb-tls-certs\") pod \"neutron-777b956f44-6v6r5\" (UID: \"23ba80fd-113a-4a97-bca6-2348a1aa4917\") " pod="openstack/neutron-777b956f44-6v6r5" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.739361 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/23ba80fd-113a-4a97-bca6-2348a1aa4917-config\") pod \"neutron-777b956f44-6v6r5\" (UID: \"23ba80fd-113a-4a97-bca6-2348a1aa4917\") " pod="openstack/neutron-777b956f44-6v6r5" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.739533 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23ba80fd-113a-4a97-bca6-2348a1aa4917-combined-ca-bundle\") pod \"neutron-777b956f44-6v6r5\" (UID: \"23ba80fd-113a-4a97-bca6-2348a1aa4917\") " pod="openstack/neutron-777b956f44-6v6r5" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.739794 4806 util.go:30] "No sandbox for pod
can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-lftzq" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.745834 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/23ba80fd-113a-4a97-bca6-2348a1aa4917-ovndb-tls-certs\") pod \"neutron-777b956f44-6v6r5\" (UID: \"23ba80fd-113a-4a97-bca6-2348a1aa4917\") " pod="openstack/neutron-777b956f44-6v6r5" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.763807 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23ba80fd-113a-4a97-bca6-2348a1aa4917-combined-ca-bundle\") pod \"neutron-777b956f44-6v6r5\" (UID: \"23ba80fd-113a-4a97-bca6-2348a1aa4917\") " pod="openstack/neutron-777b956f44-6v6r5" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.771048 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/23ba80fd-113a-4a97-bca6-2348a1aa4917-httpd-config\") pod \"neutron-777b956f44-6v6r5\" (UID: \"23ba80fd-113a-4a97-bca6-2348a1aa4917\") " pod="openstack/neutron-777b956f44-6v6r5" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.788182 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/23ba80fd-113a-4a97-bca6-2348a1aa4917-config\") pod \"neutron-777b956f44-6v6r5\" (UID: \"23ba80fd-113a-4a97-bca6-2348a1aa4917\") " pod="openstack/neutron-777b956f44-6v6r5" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.790697 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b8t9\" (UniqueName: \"kubernetes.io/projected/23ba80fd-113a-4a97-bca6-2348a1aa4917-kube-api-access-8b8t9\") pod \"neutron-777b956f44-6v6r5\" (UID: \"23ba80fd-113a-4a97-bca6-2348a1aa4917\") " pod="openstack/neutron-777b956f44-6v6r5" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.927884 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-777b956f44-6v6r5" Nov 25 15:16:25 crc kubenswrapper[4806]: I1125 15:16:25.955772 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-785d8bcb8c-klt6q" podUID="f2488169-196d-4613-aa80-ab2e7a49bfa9" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.163:5353: i/o timeout" Nov 25 15:16:26 crc kubenswrapper[4806]: I1125 15:16:26.338039 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-66468c84c9-dpswk" event={"ID":"9cc24510-0ee6-451a-ae1e-6c057d860972","Type":"ContainerStarted","Data":"9987e03e2203ff30bbe3225b3bbcda0513803121fc97ffb9cfadeff86c05db6f"} Nov 25 15:16:26 crc kubenswrapper[4806]: I1125 15:16:26.338361 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-66468c84c9-dpswk" event={"ID":"9cc24510-0ee6-451a-ae1e-6c057d860972","Type":"ContainerStarted","Data":"1b2ec7310d936b0a3cbc6b7acad9f944dce158e2ce14bb12939d4d6eab8f9ce4"} Nov 25 15:16:26 crc kubenswrapper[4806]: I1125 15:16:26.348824 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"3e62db5f-8827-474f-9dc5-654aaa347996","Type":"ContainerStarted","Data":"95c9f8c08f8822c0ab9233ecad8b6f0a3a127e4380320d1d90db755889a6adb3"} Nov 25 15:16:26 crc kubenswrapper[4806]: I1125 15:16:26.379265 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-66468c84c9-dpswk" podStartSLOduration=3.587271624 podStartE2EDuration="39.3792489s" podCreationTimestamp="2025-11-25 15:15:47 +0000 UTC" firstStartedPulling="2025-11-25 15:15:49.241458489 +0000 UTC m=+1381.893600900" lastFinishedPulling="2025-11-25 15:16:25.033435755 +0000 UTC m=+1417.685578176" observedRunningTime="2025-11-25 15:16:26.378096257 +0000 UTC m=+1419.030238698" watchObservedRunningTime="2025-11-25 15:16:26.3792489 +0000 UTC m=+1419.031391311" Nov 25 15:16:26 crc kubenswrapper[4806]: I1125 15:16:26.392576 4806 generic.go:334] "Generic (PLEG): container finished" podID="f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9" containerID="630a12fc3addd6852a2a6a136c9247c411b8f766277d959736458ba87a3d8f2d" exitCode=0 Nov 25 15:16:26 crc kubenswrapper[4806]: I1125 15:16:26.392620 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-bc4cd6f78-4rzjr" event={"ID":"f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9","Type":"ContainerDied","Data":"630a12fc3addd6852a2a6a136c9247c411b8f766277d959736458ba87a3d8f2d"} Nov 25 15:16:26 crc kubenswrapper[4806]: I1125 15:16:26.427433 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-bc4cd6f78-4rzjr" Nov 25 15:16:26 crc kubenswrapper[4806]: I1125 15:16:26.586350 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9-combined-ca-bundle\") pod \"f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9\" (UID: \"f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9\") " Nov 25 15:16:26 crc kubenswrapper[4806]: I1125 15:16:26.586813 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9-logs\") pod \"f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9\" (UID: \"f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9\") " Nov 25 15:16:26 crc kubenswrapper[4806]: I1125 15:16:26.586858 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9-config-data-custom\") pod \"f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9\" (UID: \"f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9\") " Nov 25 15:16:26 crc kubenswrapper[4806]: I1125 15:16:26.586918 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws9vm\" (UniqueName: \"kubernetes.io/projected/f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9-kube-api-access-ws9vm\") pod \"f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9\" (UID: \"f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9\") " Nov 25 15:16:26 crc kubenswrapper[4806]: I1125 15:16:26.587017 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9-config-data\") pod \"f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9\" (UID: \"f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9\") " Nov 25 15:16:26 crc kubenswrapper[4806]: I1125 15:16:26.590967 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9-logs" (OuterVolumeSpecName: "logs") pod "f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9" (UID: "f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:16:26 crc kubenswrapper[4806]: I1125 15:16:26.609600 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9-kube-api-access-ws9vm" (OuterVolumeSpecName: "kube-api-access-ws9vm") pod "f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9" (UID: "f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9"). InnerVolumeSpecName "kube-api-access-ws9vm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:16:26 crc kubenswrapper[4806]: I1125 15:16:26.630625 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9" (UID: "f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:26 crc kubenswrapper[4806]: I1125 15:16:26.665551 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9" (UID: "f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:26 crc kubenswrapper[4806]: I1125 15:16:26.666499 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9-config-data" (OuterVolumeSpecName: "config-data") pod "f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9" (UID: "f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:26 crc kubenswrapper[4806]: I1125 15:16:26.691091 4806 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9-logs\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:26 crc kubenswrapper[4806]: I1125 15:16:26.691429 4806 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:26 crc kubenswrapper[4806]: I1125 15:16:26.691443 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ws9vm\" (UniqueName: \"kubernetes.io/projected/f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9-kube-api-access-ws9vm\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:26 crc kubenswrapper[4806]: I1125 15:16:26.691454 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:26 crc kubenswrapper[4806]: I1125 15:16:26.691468 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:26 crc kubenswrapper[4806]: I1125 15:16:26.724681 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-lftzq"] Nov 25 15:16:27 crc kubenswrapper[4806]: I1125 15:16:27.181358 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-777b956f44-6v6r5"] Nov 25 15:16:27 crc kubenswrapper[4806]: W1125 15:16:27.202462 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23ba80fd_113a_4a97_bca6_2348a1aa4917.slice/crio-a3c178a3c5961b4ed247877b789c7a2716482c1cec3bf4c2039a1e60de34eb1e WatchSource:0}: Error finding container a3c178a3c5961b4ed247877b789c7a2716482c1cec3bf4c2039a1e60de34eb1e: Status 404 returned error can't find the container with id a3c178a3c5961b4ed247877b789c7a2716482c1cec3bf4c2039a1e60de34eb1e Nov 25 15:16:27 crc kubenswrapper[4806]: I1125 15:16:27.431290 4806 generic.go:334] "Generic (PLEG): container finished" podID="05f719ae-33a1-44c1-9f80-2d7f644e34c2" containerID="6ac8edae7269d34937b5f457e106dcd479a99f1cafb3e1a17ad3365069bb26df" exitCode=0 Nov 25 15:16:27 crc kubenswrapper[4806]: I1125 15:16:27.433021 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-lftzq" event={"ID":"05f719ae-33a1-44c1-9f80-2d7f644e34c2","Type":"ContainerDied","Data":"6ac8edae7269d34937b5f457e106dcd479a99f1cafb3e1a17ad3365069bb26df"} Nov 25 15:16:27 crc kubenswrapper[4806]: I1125 15:16:27.433173 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-lftzq" 
event={"ID":"05f719ae-33a1-44c1-9f80-2d7f644e34c2","Type":"ContainerStarted","Data":"3443b8ede70dbd5c011bb4b59557d6f2d7b4b10096d23be2d64ef616e02b21ea"} Nov 25 15:16:27 crc kubenswrapper[4806]: I1125 15:16:27.442606 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-777b956f44-6v6r5" event={"ID":"23ba80fd-113a-4a97-bca6-2348a1aa4917","Type":"ContainerStarted","Data":"a3c178a3c5961b4ed247877b789c7a2716482c1cec3bf4c2039a1e60de34eb1e"} Nov 25 15:16:27 crc kubenswrapper[4806]: I1125 15:16:27.477350 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75f75cce-0bb5-4617-8f28-29a95214ce33","Type":"ContainerStarted","Data":"3dadd536152ccd090805434d48bce396dcca41e33f0b6e8426842303b2e6edff"} Nov 25 15:16:27 crc kubenswrapper[4806]: I1125 15:16:27.477880 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 15:16:27 crc kubenswrapper[4806]: I1125 15:16:27.492954 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-bc4cd6f78-4rzjr" event={"ID":"f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9","Type":"ContainerDied","Data":"ec5dfd9c4d49d9530649880e20ff151d2bf49e7d5955fdda1672585faed0d66d"} Nov 25 15:16:27 crc kubenswrapper[4806]: I1125 15:16:27.493263 4806 scope.go:117] "RemoveContainer" containerID="630a12fc3addd6852a2a6a136c9247c411b8f766277d959736458ba87a3d8f2d" Nov 25 15:16:27 crc kubenswrapper[4806]: I1125 15:16:27.493660 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-bc4cd6f78-4rzjr" Nov 25 15:16:27 crc kubenswrapper[4806]: I1125 15:16:27.587034 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.104397543 podStartE2EDuration="16.587010094s" podCreationTimestamp="2025-11-25 15:16:11 +0000 UTC" firstStartedPulling="2025-11-25 15:16:13.755074017 +0000 UTC m=+1406.407216428" lastFinishedPulling="2025-11-25 15:16:26.237686568 +0000 UTC m=+1418.889828979" observedRunningTime="2025-11-25 15:16:27.521975671 +0000 UTC m=+1420.174118092" watchObservedRunningTime="2025-11-25 15:16:27.587010094 +0000 UTC m=+1420.239152515" Nov 25 15:16:27 crc kubenswrapper[4806]: I1125 15:16:27.758477 4806 scope.go:117] "RemoveContainer" containerID="b1b70a597ba84d4e4a1ca0c891dfc9390ed5a69ef5642c86be36e3ce9f73ad6d" Nov 25 15:16:27 crc kubenswrapper[4806]: I1125 15:16:27.771105 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 25 15:16:27 crc kubenswrapper[4806]: I1125 15:16:27.771227 4806 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 15:16:27 crc kubenswrapper[4806]: I1125 15:16:27.789189 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 25 15:16:27 crc kubenswrapper[4806]: I1125 15:16:27.789289 4806 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 15:16:27 crc kubenswrapper[4806]: I1125 15:16:27.816211 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 25 15:16:27 crc kubenswrapper[4806]: I1125 15:16:27.875719 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-bc4cd6f78-4rzjr"] Nov 25 15:16:27 crc kubenswrapper[4806]: I1125 15:16:27.891152 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-bc4cd6f78-4rzjr"] 
Nov 25 15:16:27 crc kubenswrapper[4806]: I1125 15:16:27.961831 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 25 15:16:28 crc kubenswrapper[4806]: I1125 15:16:28.122611 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9" path="/var/lib/kubelet/pods/f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9/volumes" Nov 25 15:16:28 crc kubenswrapper[4806]: I1125 15:16:28.304991 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 25 15:16:28 crc kubenswrapper[4806]: I1125 15:16:28.314577 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="f1a35d44-1052-4c49-8bc7-c0cb3b038efd" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.176:8080/\": dial tcp 10.217.0.176:8080: connect: connection refused" Nov 25 15:16:28 crc kubenswrapper[4806]: I1125 15:16:28.545896 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-lftzq" event={"ID":"05f719ae-33a1-44c1-9f80-2d7f644e34c2","Type":"ContainerStarted","Data":"6d0662e0bd15acfc8f95073ac820cb456941b29b7161847518addc2ed0124565"} Nov 25 15:16:28 crc kubenswrapper[4806]: I1125 15:16:28.547596 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-lftzq" Nov 25 15:16:28 crc kubenswrapper[4806]: I1125 15:16:28.558242 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-777b956f44-6v6r5" event={"ID":"23ba80fd-113a-4a97-bca6-2348a1aa4917","Type":"ContainerStarted","Data":"a942ad3f505747fa608ab453fe618393954bc7f8eef61b1a305b5ef9d5505032"} Nov 25 15:16:28 crc kubenswrapper[4806]: I1125 15:16:28.558279 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-777b956f44-6v6r5" event={"ID":"23ba80fd-113a-4a97-bca6-2348a1aa4917","Type":"ContainerStarted","Data":"1cedb05810f06eea5884c14673600d408d6e60bc9da95e0848407dc26166bd52"} Nov 25 15:16:28 crc kubenswrapper[4806]: I1125 15:16:28.558292 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-777b956f44-6v6r5" Nov 25 15:16:28 crc kubenswrapper[4806]: I1125 15:16:28.589440 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-lftzq" podStartSLOduration=3.589423618 podStartE2EDuration="3.589423618s" podCreationTimestamp="2025-11-25 15:16:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:16:28.583835959 +0000 UTC m=+1421.235978370" watchObservedRunningTime="2025-11-25 15:16:28.589423618 +0000 UTC m=+1421.241566029" Nov 25 15:16:28 crc kubenswrapper[4806]: I1125 15:16:28.640255 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-777b956f44-6v6r5" podStartSLOduration=3.640233736 podStartE2EDuration="3.640233736s" podCreationTimestamp="2025-11-25 15:16:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:16:28.622697426 +0000 UTC m=+1421.274839847" watchObservedRunningTime="2025-11-25 15:16:28.640233736 +0000 UTC m=+1421.292376147" Nov 25 15:16:29 crc kubenswrapper[4806]: I1125 15:16:29.884198 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-77qk4" Nov 25 15:16:29 crc kubenswrapper[4806]: I1125 15:16:29.886263 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-77qk4" Nov 25 15:16:30 crc kubenswrapper[4806]: I1125 15:16:30.595225 4806 generic.go:334] "Generic (PLEG): container finished" podID="c2503ad9-21ed-44c9-ae5a-25307c751865" containerID="5398fc780dd3f6e0342d1fa9cf2d3a259707ea0309bf1888b0e68c8e77508657" exitCode=0 Nov 25 15:16:30 crc kubenswrapper[4806]: I1125 15:16:30.596671 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-drlb4" event={"ID":"c2503ad9-21ed-44c9-ae5a-25307c751865","Type":"ContainerDied","Data":"5398fc780dd3f6e0342d1fa9cf2d3a259707ea0309bf1888b0e68c8e77508657"} Nov 25 15:16:30 crc kubenswrapper[4806]: I1125 15:16:30.645365 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5546966469-bclkx"] Nov 25 15:16:30 crc kubenswrapper[4806]: E1125 15:16:30.645903 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9" containerName="barbican-api-log" Nov 25 15:16:30 crc kubenswrapper[4806]: I1125 15:16:30.645921 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9" containerName="barbican-api-log" Nov 25 15:16:30 crc kubenswrapper[4806]: E1125 15:16:30.645935 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9" containerName="barbican-api" Nov 25 15:16:30 crc kubenswrapper[4806]: I1125 15:16:30.645943 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9" containerName="barbican-api" Nov 25 15:16:30 crc kubenswrapper[4806]: I1125 15:16:30.646147 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9" containerName="barbican-api" Nov 25 15:16:30 crc kubenswrapper[4806]: I1125 15:16:30.646166 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3d0aed9-5cf7-4eb1-9df2-1c2b42a526e9" containerName="barbican-api-log" Nov 25 15:16:30 crc kubenswrapper[4806]: I1125 15:16:30.647272 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5546966469-bclkx" Nov 25 15:16:30 crc kubenswrapper[4806]: I1125 15:16:30.661143 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5546966469-bclkx"] Nov 25 15:16:30 crc kubenswrapper[4806]: I1125 15:16:30.662763 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 25 15:16:30 crc kubenswrapper[4806]: I1125 15:16:30.663007 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 25 15:16:30 crc kubenswrapper[4806]: I1125 15:16:30.695256 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c1bd1be-9aa3-4444-a30c-1a3926c79b49-ovndb-tls-certs\") pod \"neutron-5546966469-bclkx\" (UID: \"5c1bd1be-9aa3-4444-a30c-1a3926c79b49\") " pod="openstack/neutron-5546966469-bclkx" Nov 25 15:16:30 crc kubenswrapper[4806]: I1125 15:16:30.695334 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c1bd1be-9aa3-4444-a30c-1a3926c79b49-combined-ca-bundle\") pod \"neutron-5546966469-bclkx\" (UID: \"5c1bd1be-9aa3-4444-a30c-1a3926c79b49\") " pod="openstack/neutron-5546966469-bclkx" Nov 25 15:16:30 crc kubenswrapper[4806]: I1125 15:16:30.695434 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c1bd1be-9aa3-4444-a30c-1a3926c79b49-public-tls-certs\") pod \"neutron-5546966469-bclkx\" (UID: \"5c1bd1be-9aa3-4444-a30c-1a3926c79b49\") " pod="openstack/neutron-5546966469-bclkx" Nov 25 15:16:30 crc kubenswrapper[4806]: I1125 15:16:30.695495 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5c1bd1be-9aa3-4444-a30c-1a3926c79b49-httpd-config\") pod \"neutron-5546966469-bclkx\" (UID: \"5c1bd1be-9aa3-4444-a30c-1a3926c79b49\") " pod="openstack/neutron-5546966469-bclkx" Nov 25 15:16:30 crc kubenswrapper[4806]: I1125 15:16:30.695611 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pkgv\" (UniqueName: \"kubernetes.io/projected/5c1bd1be-9aa3-4444-a30c-1a3926c79b49-kube-api-access-6pkgv\") pod \"neutron-5546966469-bclkx\" (UID: \"5c1bd1be-9aa3-4444-a30c-1a3926c79b49\") " pod="openstack/neutron-5546966469-bclkx" Nov 25 15:16:30 crc kubenswrapper[4806]: I1125 15:16:30.695666 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c1bd1be-9aa3-4444-a30c-1a3926c79b49-internal-tls-certs\") pod \"neutron-5546966469-bclkx\" (UID: \"5c1bd1be-9aa3-4444-a30c-1a3926c79b49\") " pod="openstack/neutron-5546966469-bclkx" Nov 25 15:16:30 crc kubenswrapper[4806]: I1125 15:16:30.695698 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c1bd1be-9aa3-4444-a30c-1a3926c79b49-config\") pod \"neutron-5546966469-bclkx\" (UID: \"5c1bd1be-9aa3-4444-a30c-1a3926c79b49\") " pod="openstack/neutron-5546966469-bclkx" Nov 25 15:16:30 crc kubenswrapper[4806]: I1125 15:16:30.797713 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5c1bd1be-9aa3-4444-a30c-1a3926c79b49-public-tls-certs\") pod \"neutron-5546966469-bclkx\" (UID: \"5c1bd1be-9aa3-4444-a30c-1a3926c79b49\") " pod="openstack/neutron-5546966469-bclkx" Nov 25 15:16:30 crc kubenswrapper[4806]: I1125 15:16:30.797792 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5c1bd1be-9aa3-4444-a30c-1a3926c79b49-httpd-config\") pod \"neutron-5546966469-bclkx\" (UID: \"5c1bd1be-9aa3-4444-a30c-1a3926c79b49\") " pod="openstack/neutron-5546966469-bclkx" Nov 25 15:16:30 crc kubenswrapper[4806]: I1125 15:16:30.797860 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pkgv\" (UniqueName: \"kubernetes.io/projected/5c1bd1be-9aa3-4444-a30c-1a3926c79b49-kube-api-access-6pkgv\") pod \"neutron-5546966469-bclkx\" (UID: \"5c1bd1be-9aa3-4444-a30c-1a3926c79b49\") " pod="openstack/neutron-5546966469-bclkx" Nov 25 15:16:30 crc kubenswrapper[4806]: I1125 15:16:30.797901 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c1bd1be-9aa3-4444-a30c-1a3926c79b49-internal-tls-certs\") pod \"neutron-5546966469-bclkx\" (UID: \"5c1bd1be-9aa3-4444-a30c-1a3926c79b49\") " pod="openstack/neutron-5546966469-bclkx" Nov 25 15:16:30 crc kubenswrapper[4806]: I1125 15:16:30.797933 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c1bd1be-9aa3-4444-a30c-1a3926c79b49-config\") pod \"neutron-5546966469-bclkx\" (UID: \"5c1bd1be-9aa3-4444-a30c-1a3926c79b49\") " pod="openstack/neutron-5546966469-bclkx" Nov 25 15:16:30 crc kubenswrapper[4806]: I1125 15:16:30.798022 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c1bd1be-9aa3-4444-a30c-1a3926c79b49-ovndb-tls-certs\") pod \"neutron-5546966469-bclkx\" (UID: \"5c1bd1be-9aa3-4444-a30c-1a3926c79b49\") " pod="openstack/neutron-5546966469-bclkx" Nov 25 15:16:30 crc kubenswrapper[4806]: I1125 15:16:30.798052 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c1bd1be-9aa3-4444-a30c-1a3926c79b49-combined-ca-bundle\") pod \"neutron-5546966469-bclkx\" (UID: \"5c1bd1be-9aa3-4444-a30c-1a3926c79b49\") " pod="openstack/neutron-5546966469-bclkx" Nov 25 15:16:30 crc kubenswrapper[4806]: I1125 15:16:30.808722 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c1bd1be-9aa3-4444-a30c-1a3926c79b49-ovndb-tls-certs\") pod \"neutron-5546966469-bclkx\" (UID: \"5c1bd1be-9aa3-4444-a30c-1a3926c79b49\") " pod="openstack/neutron-5546966469-bclkx" Nov 25 15:16:30 crc kubenswrapper[4806]: I1125 15:16:30.809354 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c1bd1be-9aa3-4444-a30c-1a3926c79b49-combined-ca-bundle\") pod \"neutron-5546966469-bclkx\" (UID: \"5c1bd1be-9aa3-4444-a30c-1a3926c79b49\") " pod="openstack/neutron-5546966469-bclkx" Nov 25 15:16:30 crc kubenswrapper[4806]: I1125 15:16:30.829041 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c1bd1be-9aa3-4444-a30c-1a3926c79b49-public-tls-certs\") pod \"neutron-5546966469-bclkx\" (UID: 
\"5c1bd1be-9aa3-4444-a30c-1a3926c79b49\") " pod="openstack/neutron-5546966469-bclkx" Nov 25 15:16:30 crc kubenswrapper[4806]: I1125 15:16:30.829946 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c1bd1be-9aa3-4444-a30c-1a3926c79b49-internal-tls-certs\") pod \"neutron-5546966469-bclkx\" (UID: \"5c1bd1be-9aa3-4444-a30c-1a3926c79b49\") " pod="openstack/neutron-5546966469-bclkx" Nov 25 15:16:30 crc kubenswrapper[4806]: I1125 15:16:30.834594 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c1bd1be-9aa3-4444-a30c-1a3926c79b49-config\") pod \"neutron-5546966469-bclkx\" (UID: \"5c1bd1be-9aa3-4444-a30c-1a3926c79b49\") " pod="openstack/neutron-5546966469-bclkx" Nov 25 15:16:30 crc kubenswrapper[4806]: I1125 15:16:30.835237 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pkgv\" (UniqueName: \"kubernetes.io/projected/5c1bd1be-9aa3-4444-a30c-1a3926c79b49-kube-api-access-6pkgv\") pod \"neutron-5546966469-bclkx\" (UID: \"5c1bd1be-9aa3-4444-a30c-1a3926c79b49\") " pod="openstack/neutron-5546966469-bclkx" Nov 25 15:16:30 crc kubenswrapper[4806]: I1125 15:16:30.835378 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5c1bd1be-9aa3-4444-a30c-1a3926c79b49-httpd-config\") pod \"neutron-5546966469-bclkx\" (UID: \"5c1bd1be-9aa3-4444-a30c-1a3926c79b49\") " pod="openstack/neutron-5546966469-bclkx" Nov 25 15:16:30 crc kubenswrapper[4806]: I1125 15:16:30.955133 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-77qk4" podUID="19d636cf-e82d-48c3-82db-321f0505c5ab" containerName="registry-server" probeResult="failure" output=< Nov 25 15:16:30 crc kubenswrapper[4806]: timeout: failed to connect service ":50051" within 1s Nov 25 15:16:30 crc kubenswrapper[4806]: > Nov 25 15:16:30 crc kubenswrapper[4806]: I1125 15:16:30.984269 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5546966469-bclkx" Nov 25 15:16:31 crc kubenswrapper[4806]: I1125 15:16:31.772932 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5546966469-bclkx"] Nov 25 15:16:32 crc kubenswrapper[4806]: I1125 15:16:32.139864 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-drlb4" Nov 25 15:16:32 crc kubenswrapper[4806]: I1125 15:16:32.253257 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2503ad9-21ed-44c9-ae5a-25307c751865-scripts\") pod \"c2503ad9-21ed-44c9-ae5a-25307c751865\" (UID: \"c2503ad9-21ed-44c9-ae5a-25307c751865\") " Nov 25 15:16:32 crc kubenswrapper[4806]: I1125 15:16:32.253384 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/c2503ad9-21ed-44c9-ae5a-25307c751865-certs\") pod \"c2503ad9-21ed-44c9-ae5a-25307c751865\" (UID: \"c2503ad9-21ed-44c9-ae5a-25307c751865\") " Nov 25 15:16:32 crc kubenswrapper[4806]: I1125 15:16:32.253419 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2503ad9-21ed-44c9-ae5a-25307c751865-config-data\") pod \"c2503ad9-21ed-44c9-ae5a-25307c751865\" (UID: \"c2503ad9-21ed-44c9-ae5a-25307c751865\") " Nov 25 15:16:32 crc kubenswrapper[4806]: I1125 15:16:32.253564 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdpx6\" (UniqueName: \"kubernetes.io/projected/c2503ad9-21ed-44c9-ae5a-25307c751865-kube-api-access-xdpx6\") pod \"c2503ad9-21ed-44c9-ae5a-25307c751865\" (UID: \"c2503ad9-21ed-44c9-ae5a-25307c751865\") " Nov 25 15:16:32 crc kubenswrapper[4806]: I1125 15:16:32.253589 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2503ad9-21ed-44c9-ae5a-25307c751865-combined-ca-bundle\") pod \"c2503ad9-21ed-44c9-ae5a-25307c751865\" (UID: \"c2503ad9-21ed-44c9-ae5a-25307c751865\") " Nov 25 15:16:32 crc kubenswrapper[4806]: I1125 15:16:32.259420 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2503ad9-21ed-44c9-ae5a-25307c751865-scripts" (OuterVolumeSpecName: "scripts") pod "c2503ad9-21ed-44c9-ae5a-25307c751865" (UID: "c2503ad9-21ed-44c9-ae5a-25307c751865"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:32 crc kubenswrapper[4806]: I1125 15:16:32.264563 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2503ad9-21ed-44c9-ae5a-25307c751865-kube-api-access-xdpx6" (OuterVolumeSpecName: "kube-api-access-xdpx6") pod "c2503ad9-21ed-44c9-ae5a-25307c751865" (UID: "c2503ad9-21ed-44c9-ae5a-25307c751865"). InnerVolumeSpecName "kube-api-access-xdpx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:16:32 crc kubenswrapper[4806]: I1125 15:16:32.288945 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2503ad9-21ed-44c9-ae5a-25307c751865-certs" (OuterVolumeSpecName: "certs") pod "c2503ad9-21ed-44c9-ae5a-25307c751865" (UID: "c2503ad9-21ed-44c9-ae5a-25307c751865"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:16:32 crc kubenswrapper[4806]: I1125 15:16:32.303931 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2503ad9-21ed-44c9-ae5a-25307c751865-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c2503ad9-21ed-44c9-ae5a-25307c751865" (UID: "c2503ad9-21ed-44c9-ae5a-25307c751865"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:32 crc kubenswrapper[4806]: I1125 15:16:32.311630 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2503ad9-21ed-44c9-ae5a-25307c751865-config-data" (OuterVolumeSpecName: "config-data") pod "c2503ad9-21ed-44c9-ae5a-25307c751865" (UID: "c2503ad9-21ed-44c9-ae5a-25307c751865"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:32 crc kubenswrapper[4806]: I1125 15:16:32.356271 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdpx6\" (UniqueName: \"kubernetes.io/projected/c2503ad9-21ed-44c9-ae5a-25307c751865-kube-api-access-xdpx6\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:32 crc kubenswrapper[4806]: I1125 15:16:32.356340 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2503ad9-21ed-44c9-ae5a-25307c751865-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:32 crc kubenswrapper[4806]: I1125 15:16:32.356353 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2503ad9-21ed-44c9-ae5a-25307c751865-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:32 crc kubenswrapper[4806]: I1125 15:16:32.356386 4806 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/c2503ad9-21ed-44c9-ae5a-25307c751865-certs\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:32 crc kubenswrapper[4806]: I1125 15:16:32.356400 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2503ad9-21ed-44c9-ae5a-25307c751865-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:32 crc kubenswrapper[4806]: I1125 15:16:32.620773 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5546966469-bclkx" event={"ID":"5c1bd1be-9aa3-4444-a30c-1a3926c79b49","Type":"ContainerStarted","Data":"2392a0593785e22e0b188a1d5ccd8c0c51459857f58a783ebcfbedb76305ea06"} Nov 25 15:16:32 crc kubenswrapper[4806]: I1125 15:16:32.620817 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5546966469-bclkx" event={"ID":"5c1bd1be-9aa3-4444-a30c-1a3926c79b49","Type":"ContainerStarted","Data":"36d58b4513b34ffcf4a7f2a6fdb7dbfb9faf2a909e1be830f433ef2ba07632c0"} Nov 25 15:16:32 crc kubenswrapper[4806]: I1125 15:16:32.620828 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5546966469-bclkx" event={"ID":"5c1bd1be-9aa3-4444-a30c-1a3926c79b49","Type":"ContainerStarted","Data":"42af8e7e2abfbe191e5fc9654c2425214c6349144469331b0c6f61e9b10a1dd2"} Nov 25 15:16:32 crc kubenswrapper[4806]: I1125 15:16:32.622308 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5546966469-bclkx" Nov 25 15:16:32 crc kubenswrapper[4806]: I1125 15:16:32.630210 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-drlb4" event={"ID":"c2503ad9-21ed-44c9-ae5a-25307c751865","Type":"ContainerDied","Data":"0df96551f7544682e32e2cfb8cee323c6ae5223a7c0e1683a576d619965104d5"} Nov 25 15:16:32 crc kubenswrapper[4806]: I1125 15:16:32.630248 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0df96551f7544682e32e2cfb8cee323c6ae5223a7c0e1683a576d619965104d5" Nov 25 15:16:32 crc kubenswrapper[4806]: I1125 15:16:32.630303 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-drlb4" Nov 25 15:16:32 crc kubenswrapper[4806]: I1125 15:16:32.672193 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5546966469-bclkx" podStartSLOduration=2.672174468 podStartE2EDuration="2.672174468s" podCreationTimestamp="2025-11-25 15:16:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:16:32.656731565 +0000 UTC m=+1425.308873976" watchObservedRunningTime="2025-11-25 15:16:32.672174468 +0000 UTC m=+1425.324316879" Nov 25 15:16:32 crc kubenswrapper[4806]: I1125 15:16:32.725535 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-storageinit-khx7z"] Nov 25 15:16:32 crc kubenswrapper[4806]: E1125 15:16:32.726254 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2503ad9-21ed-44c9-ae5a-25307c751865" containerName="cloudkitty-db-sync" Nov 25 15:16:32 crc kubenswrapper[4806]: I1125 15:16:32.726284 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2503ad9-21ed-44c9-ae5a-25307c751865" containerName="cloudkitty-db-sync" Nov 25 15:16:32 crc kubenswrapper[4806]: I1125 15:16:32.726505 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2503ad9-21ed-44c9-ae5a-25307c751865" containerName="cloudkitty-db-sync" Nov 25 15:16:32 crc kubenswrapper[4806]: I1125 15:16:32.727367 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-khx7z" Nov 25 15:16:32 crc kubenswrapper[4806]: I1125 15:16:32.730935 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-dqwtc" Nov 25 15:16:32 crc kubenswrapper[4806]: I1125 15:16:32.731104 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 25 15:16:32 crc kubenswrapper[4806]: I1125 15:16:32.731378 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Nov 25 15:16:32 crc kubenswrapper[4806]: I1125 15:16:32.731552 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Nov 25 15:16:32 crc kubenswrapper[4806]: I1125 15:16:32.731578 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Nov 25 15:16:32 crc kubenswrapper[4806]: I1125 15:16:32.751814 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-khx7z"] Nov 25 15:16:32 crc kubenswrapper[4806]: I1125 15:16:32.785299 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qdqn\" (UniqueName: \"kubernetes.io/projected/7aaf07d8-e5c5-4119-9d4a-df8d6c296541-kube-api-access-8qdqn\") pod \"cloudkitty-storageinit-khx7z\" (UID: \"7aaf07d8-e5c5-4119-9d4a-df8d6c296541\") " pod="openstack/cloudkitty-storageinit-khx7z" Nov 25 15:16:32 crc kubenswrapper[4806]: I1125 15:16:32.785396 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/7aaf07d8-e5c5-4119-9d4a-df8d6c296541-certs\") pod \"cloudkitty-storageinit-khx7z\" (UID: \"7aaf07d8-e5c5-4119-9d4a-df8d6c296541\") " pod="openstack/cloudkitty-storageinit-khx7z" Nov 25 15:16:32 crc kubenswrapper[4806]: I1125 15:16:32.785456 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aaf07d8-e5c5-4119-9d4a-df8d6c296541-combined-ca-bundle\") pod \"cloudkitty-storageinit-khx7z\" (UID: \"7aaf07d8-e5c5-4119-9d4a-df8d6c296541\") " pod="openstack/cloudkitty-storageinit-khx7z" Nov 25 15:16:32 crc kubenswrapper[4806]: I1125 15:16:32.785642 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7aaf07d8-e5c5-4119-9d4a-df8d6c296541-scripts\") pod \"cloudkitty-storageinit-khx7z\" (UID: \"7aaf07d8-e5c5-4119-9d4a-df8d6c296541\") " pod="openstack/cloudkitty-storageinit-khx7z" Nov 25 15:16:32 crc kubenswrapper[4806]: I1125 15:16:32.785717 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7aaf07d8-e5c5-4119-9d4a-df8d6c296541-config-data\") pod \"cloudkitty-storageinit-khx7z\" (UID: \"7aaf07d8-e5c5-4119-9d4a-df8d6c296541\") " pod="openstack/cloudkitty-storageinit-khx7z" Nov 25 15:16:32 crc kubenswrapper[4806]: I1125 15:16:32.888055 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7aaf07d8-e5c5-4119-9d4a-df8d6c296541-scripts\") pod \"cloudkitty-storageinit-khx7z\" (UID: \"7aaf07d8-e5c5-4119-9d4a-df8d6c296541\") " pod="openstack/cloudkitty-storageinit-khx7z" Nov 25 15:16:32 crc kubenswrapper[4806]: I1125 15:16:32.888208 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7aaf07d8-e5c5-4119-9d4a-df8d6c296541-config-data\") pod \"cloudkitty-storageinit-khx7z\" (UID: \"7aaf07d8-e5c5-4119-9d4a-df8d6c296541\") " pod="openstack/cloudkitty-storageinit-khx7z" Nov 25 15:16:32 crc kubenswrapper[4806]: I1125 15:16:32.888353 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qdqn\" (UniqueName: \"kubernetes.io/projected/7aaf07d8-e5c5-4119-9d4a-df8d6c296541-kube-api-access-8qdqn\") pod \"cloudkitty-storageinit-khx7z\" (UID: \"7aaf07d8-e5c5-4119-9d4a-df8d6c296541\") " pod="openstack/cloudkitty-storageinit-khx7z" Nov 25 15:16:32 crc kubenswrapper[4806]: I1125 15:16:32.888431 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/7aaf07d8-e5c5-4119-9d4a-df8d6c296541-certs\") pod \"cloudkitty-storageinit-khx7z\" (UID: \"7aaf07d8-e5c5-4119-9d4a-df8d6c296541\") " pod="openstack/cloudkitty-storageinit-khx7z" Nov 25 15:16:32 crc kubenswrapper[4806]: I1125 15:16:32.888461 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aaf07d8-e5c5-4119-9d4a-df8d6c296541-combined-ca-bundle\") pod \"cloudkitty-storageinit-khx7z\" (UID: \"7aaf07d8-e5c5-4119-9d4a-df8d6c296541\") " pod="openstack/cloudkitty-storageinit-khx7z" Nov 25 15:16:32 crc kubenswrapper[4806]: I1125 15:16:32.913513 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7aaf07d8-e5c5-4119-9d4a-df8d6c296541-scripts\") pod \"cloudkitty-storageinit-khx7z\" (UID: \"7aaf07d8-e5c5-4119-9d4a-df8d6c296541\") " pod="openstack/cloudkitty-storageinit-khx7z" Nov 25 15:16:32 crc kubenswrapper[4806]: I1125 15:16:32.914725 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7aaf07d8-e5c5-4119-9d4a-df8d6c296541-combined-ca-bundle\") pod \"cloudkitty-storageinit-khx7z\" (UID: \"7aaf07d8-e5c5-4119-9d4a-df8d6c296541\") " pod="openstack/cloudkitty-storageinit-khx7z" Nov 25 15:16:32 crc kubenswrapper[4806]: I1125 15:16:32.934277 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7aaf07d8-e5c5-4119-9d4a-df8d6c296541-config-data\") pod \"cloudkitty-storageinit-khx7z\" (UID: \"7aaf07d8-e5c5-4119-9d4a-df8d6c296541\") " pod="openstack/cloudkitty-storageinit-khx7z" Nov 25 15:16:32 crc kubenswrapper[4806]: I1125 15:16:32.942108 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qdqn\" (UniqueName: \"kubernetes.io/projected/7aaf07d8-e5c5-4119-9d4a-df8d6c296541-kube-api-access-8qdqn\") pod \"cloudkitty-storageinit-khx7z\" (UID: \"7aaf07d8-e5c5-4119-9d4a-df8d6c296541\") " pod="openstack/cloudkitty-storageinit-khx7z" Nov 25 15:16:32 crc kubenswrapper[4806]: I1125 15:16:32.952420 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/7aaf07d8-e5c5-4119-9d4a-df8d6c296541-certs\") pod \"cloudkitty-storageinit-khx7z\" (UID: \"7aaf07d8-e5c5-4119-9d4a-df8d6c296541\") " pod="openstack/cloudkitty-storageinit-khx7z" Nov 25 15:16:33 crc kubenswrapper[4806]: I1125 15:16:33.089810 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-khx7z" Nov 25 15:16:33 crc kubenswrapper[4806]: I1125 15:16:33.218660 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="d875dfe1-f943-4577-afd4-e301920efac6" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.180:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 15:16:33 crc kubenswrapper[4806]: I1125 15:16:33.658492 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-khx7z"] Nov 25 15:16:33 crc kubenswrapper[4806]: I1125 15:16:33.846161 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 25 15:16:33 crc kubenswrapper[4806]: I1125 15:16:33.907924 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 15:16:34 crc kubenswrapper[4806]: I1125 15:16:34.657497 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-khx7z" event={"ID":"7aaf07d8-e5c5-4119-9d4a-df8d6c296541","Type":"ContainerStarted","Data":"63213603e00965e9462d2d20b54f42e994509ed0cdfaf078ae93783aa6203c46"} Nov 25 15:16:34 crc kubenswrapper[4806]: I1125 15:16:34.657800 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-khx7z" event={"ID":"7aaf07d8-e5c5-4119-9d4a-df8d6c296541","Type":"ContainerStarted","Data":"d5965a1e737f326cf4a5198c2fe2c76e631a95056445b6a8595e72c099ad4cbe"} Nov 25 15:16:34 crc kubenswrapper[4806]: I1125 15:16:34.657978 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="f1a35d44-1052-4c49-8bc7-c0cb3b038efd" containerName="cinder-scheduler" containerID="cri-o://c08ba412b5d4d33ac6ee7c89d112c6de84041ad33172d269b029b4c8fd2bd177" gracePeriod=30 Nov 25 15:16:34 crc kubenswrapper[4806]: I1125 15:16:34.658249 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" 
podUID="f1a35d44-1052-4c49-8bc7-c0cb3b038efd" containerName="probe" containerID="cri-o://c1e4c54e37651b2c1357e47818fd8913f1f44f7dcb8d652d14ffb66ea69f813f" gracePeriod=30 Nov 25 15:16:34 crc kubenswrapper[4806]: I1125 15:16:34.699178 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-storageinit-khx7z" podStartSLOduration=2.699159814 podStartE2EDuration="2.699159814s" podCreationTimestamp="2025-11-25 15:16:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:16:34.69135879 +0000 UTC m=+1427.343501201" watchObservedRunningTime="2025-11-25 15:16:34.699159814 +0000 UTC m=+1427.351302225" Nov 25 15:16:34 crc kubenswrapper[4806]: I1125 15:16:34.987923 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 25 15:16:35 crc kubenswrapper[4806]: I1125 15:16:35.328913 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6d6dfc6f67-wrhhk"] Nov 25 15:16:35 crc kubenswrapper[4806]: I1125 15:16:35.331029 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6d6dfc6f67-wrhhk" Nov 25 15:16:35 crc kubenswrapper[4806]: I1125 15:16:35.333528 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Nov 25 15:16:35 crc kubenswrapper[4806]: I1125 15:16:35.333755 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 25 15:16:35 crc kubenswrapper[4806]: I1125 15:16:35.335143 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Nov 25 15:16:35 crc kubenswrapper[4806]: I1125 15:16:35.338300 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6d6dfc6f67-wrhhk"] Nov 25 15:16:35 crc kubenswrapper[4806]: I1125 15:16:35.459676 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0-run-httpd\") pod \"swift-proxy-6d6dfc6f67-wrhhk\" (UID: \"3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0\") " pod="openstack/swift-proxy-6d6dfc6f67-wrhhk" Nov 25 15:16:35 crc kubenswrapper[4806]: I1125 15:16:35.459740 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0-public-tls-certs\") pod \"swift-proxy-6d6dfc6f67-wrhhk\" (UID: \"3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0\") " pod="openstack/swift-proxy-6d6dfc6f67-wrhhk" Nov 25 15:16:35 crc kubenswrapper[4806]: I1125 15:16:35.459928 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0-combined-ca-bundle\") pod \"swift-proxy-6d6dfc6f67-wrhhk\" (UID: \"3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0\") " pod="openstack/swift-proxy-6d6dfc6f67-wrhhk" Nov 25 15:16:35 crc kubenswrapper[4806]: I1125 15:16:35.459988 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vdp2\" (UniqueName: \"kubernetes.io/projected/3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0-kube-api-access-5vdp2\") pod \"swift-proxy-6d6dfc6f67-wrhhk\" (UID: \"3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0\") " 
pod="openstack/swift-proxy-6d6dfc6f67-wrhhk" Nov 25 15:16:35 crc kubenswrapper[4806]: I1125 15:16:35.460076 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0-etc-swift\") pod \"swift-proxy-6d6dfc6f67-wrhhk\" (UID: \"3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0\") " pod="openstack/swift-proxy-6d6dfc6f67-wrhhk" Nov 25 15:16:35 crc kubenswrapper[4806]: I1125 15:16:35.460092 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0-internal-tls-certs\") pod \"swift-proxy-6d6dfc6f67-wrhhk\" (UID: \"3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0\") " pod="openstack/swift-proxy-6d6dfc6f67-wrhhk" Nov 25 15:16:35 crc kubenswrapper[4806]: I1125 15:16:35.460183 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0-log-httpd\") pod \"swift-proxy-6d6dfc6f67-wrhhk\" (UID: \"3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0\") " pod="openstack/swift-proxy-6d6dfc6f67-wrhhk" Nov 25 15:16:35 crc kubenswrapper[4806]: I1125 15:16:35.460249 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0-config-data\") pod \"swift-proxy-6d6dfc6f67-wrhhk\" (UID: \"3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0\") " pod="openstack/swift-proxy-6d6dfc6f67-wrhhk" Nov 25 15:16:35 crc kubenswrapper[4806]: I1125 15:16:35.562107 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0-combined-ca-bundle\") pod \"swift-proxy-6d6dfc6f67-wrhhk\" (UID: \"3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0\") " pod="openstack/swift-proxy-6d6dfc6f67-wrhhk" Nov 25 15:16:35 crc kubenswrapper[4806]: I1125 15:16:35.562156 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vdp2\" (UniqueName: \"kubernetes.io/projected/3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0-kube-api-access-5vdp2\") pod \"swift-proxy-6d6dfc6f67-wrhhk\" (UID: \"3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0\") " pod="openstack/swift-proxy-6d6dfc6f67-wrhhk" Nov 25 15:16:35 crc kubenswrapper[4806]: I1125 15:16:35.562200 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0-etc-swift\") pod \"swift-proxy-6d6dfc6f67-wrhhk\" (UID: \"3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0\") " pod="openstack/swift-proxy-6d6dfc6f67-wrhhk" Nov 25 15:16:35 crc kubenswrapper[4806]: I1125 15:16:35.562217 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0-internal-tls-certs\") pod \"swift-proxy-6d6dfc6f67-wrhhk\" (UID: \"3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0\") " pod="openstack/swift-proxy-6d6dfc6f67-wrhhk" Nov 25 15:16:35 crc kubenswrapper[4806]: I1125 15:16:35.562256 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0-log-httpd\") pod \"swift-proxy-6d6dfc6f67-wrhhk\" (UID: \"3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0\") 
" pod="openstack/swift-proxy-6d6dfc6f67-wrhhk" Nov 25 15:16:35 crc kubenswrapper[4806]: I1125 15:16:35.562291 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0-config-data\") pod \"swift-proxy-6d6dfc6f67-wrhhk\" (UID: \"3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0\") " pod="openstack/swift-proxy-6d6dfc6f67-wrhhk" Nov 25 15:16:35 crc kubenswrapper[4806]: I1125 15:16:35.562365 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0-run-httpd\") pod \"swift-proxy-6d6dfc6f67-wrhhk\" (UID: \"3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0\") " pod="openstack/swift-proxy-6d6dfc6f67-wrhhk" Nov 25 15:16:35 crc kubenswrapper[4806]: I1125 15:16:35.562393 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0-public-tls-certs\") pod \"swift-proxy-6d6dfc6f67-wrhhk\" (UID: \"3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0\") " pod="openstack/swift-proxy-6d6dfc6f67-wrhhk" Nov 25 15:16:35 crc kubenswrapper[4806]: I1125 15:16:35.563128 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0-log-httpd\") pod \"swift-proxy-6d6dfc6f67-wrhhk\" (UID: \"3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0\") " pod="openstack/swift-proxy-6d6dfc6f67-wrhhk" Nov 25 15:16:35 crc kubenswrapper[4806]: I1125 15:16:35.563368 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0-run-httpd\") pod \"swift-proxy-6d6dfc6f67-wrhhk\" (UID: \"3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0\") " pod="openstack/swift-proxy-6d6dfc6f67-wrhhk" Nov 25 15:16:35 crc kubenswrapper[4806]: I1125 15:16:35.568732 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:16:35 crc kubenswrapper[4806]: I1125 15:16:35.569061 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="75f75cce-0bb5-4617-8f28-29a95214ce33" containerName="ceilometer-central-agent" containerID="cri-o://03e8b66fdf8d9d452e1e616471c23a4846207170a6bb46c424d699c2d94f5406" gracePeriod=30 Nov 25 15:16:35 crc kubenswrapper[4806]: I1125 15:16:35.569703 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="75f75cce-0bb5-4617-8f28-29a95214ce33" containerName="proxy-httpd" containerID="cri-o://3dadd536152ccd090805434d48bce396dcca41e33f0b6e8426842303b2e6edff" gracePeriod=30 Nov 25 15:16:35 crc kubenswrapper[4806]: I1125 15:16:35.569756 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="75f75cce-0bb5-4617-8f28-29a95214ce33" containerName="ceilometer-notification-agent" containerID="cri-o://41056375e94d63baab11e0d758ce2ed64f7dcbea88b2ce4184f26769e583997e" gracePeriod=30 Nov 25 15:16:35 crc kubenswrapper[4806]: I1125 15:16:35.569899 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="75f75cce-0bb5-4617-8f28-29a95214ce33" containerName="sg-core" containerID="cri-o://ca0c5fc3f594273c5fb2061ac15bf76d4f4205f801d64dbef1312cfff9416555" gracePeriod=30 Nov 25 15:16:35 crc kubenswrapper[4806]: I1125 15:16:35.573696 4806 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0-internal-tls-certs\") pod \"swift-proxy-6d6dfc6f67-wrhhk\" (UID: \"3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0\") " pod="openstack/swift-proxy-6d6dfc6f67-wrhhk" Nov 25 15:16:35 crc kubenswrapper[4806]: I1125 15:16:35.577403 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0-config-data\") pod \"swift-proxy-6d6dfc6f67-wrhhk\" (UID: \"3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0\") " pod="openstack/swift-proxy-6d6dfc6f67-wrhhk" Nov 25 15:16:35 crc kubenswrapper[4806]: I1125 15:16:35.578164 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0-public-tls-certs\") pod \"swift-proxy-6d6dfc6f67-wrhhk\" (UID: \"3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0\") " pod="openstack/swift-proxy-6d6dfc6f67-wrhhk" Nov 25 15:16:35 crc kubenswrapper[4806]: I1125 15:16:35.578483 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0-etc-swift\") pod \"swift-proxy-6d6dfc6f67-wrhhk\" (UID: \"3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0\") " pod="openstack/swift-proxy-6d6dfc6f67-wrhhk" Nov 25 15:16:35 crc kubenswrapper[4806]: I1125 15:16:35.581601 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0-combined-ca-bundle\") pod \"swift-proxy-6d6dfc6f67-wrhhk\" (UID: \"3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0\") " pod="openstack/swift-proxy-6d6dfc6f67-wrhhk" Nov 25 15:16:35 crc kubenswrapper[4806]: I1125 15:16:35.598408 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vdp2\" (UniqueName: \"kubernetes.io/projected/3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0-kube-api-access-5vdp2\") pod \"swift-proxy-6d6dfc6f67-wrhhk\" (UID: \"3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0\") " pod="openstack/swift-proxy-6d6dfc6f67-wrhhk" Nov 25 15:16:35 crc kubenswrapper[4806]: I1125 15:16:35.715871 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-6d6dfc6f67-wrhhk" Nov 25 15:16:35 crc kubenswrapper[4806]: I1125 15:16:35.742352 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-lftzq" Nov 25 15:16:35 crc kubenswrapper[4806]: I1125 15:16:35.833759 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-795f4db4bc-9vs9k"] Nov 25 15:16:35 crc kubenswrapper[4806]: I1125 15:16:35.834355 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-795f4db4bc-9vs9k" podUID="28955e10-67f1-4268-b7e2-e7851398b376" containerName="dnsmasq-dns" containerID="cri-o://994a7cfbb166f45f1789535926040258d123ff9f94f3cf83dd997b625595cf04" gracePeriod=10 Nov 25 15:16:36 crc kubenswrapper[4806]: I1125 15:16:36.696627 4806 generic.go:334] "Generic (PLEG): container finished" podID="28955e10-67f1-4268-b7e2-e7851398b376" containerID="994a7cfbb166f45f1789535926040258d123ff9f94f3cf83dd997b625595cf04" exitCode=0 Nov 25 15:16:36 crc kubenswrapper[4806]: I1125 15:16:36.697103 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795f4db4bc-9vs9k" event={"ID":"28955e10-67f1-4268-b7e2-e7851398b376","Type":"ContainerDied","Data":"994a7cfbb166f45f1789535926040258d123ff9f94f3cf83dd997b625595cf04"} Nov 25 15:16:36 crc kubenswrapper[4806]: I1125 15:16:36.706656 4806 generic.go:334] "Generic (PLEG): container finished" podID="75f75cce-0bb5-4617-8f28-29a95214ce33" containerID="3dadd536152ccd090805434d48bce396dcca41e33f0b6e8426842303b2e6edff" exitCode=0 Nov 25 15:16:36 crc kubenswrapper[4806]: I1125 15:16:36.706690 4806 generic.go:334] "Generic (PLEG): container finished" podID="75f75cce-0bb5-4617-8f28-29a95214ce33" containerID="ca0c5fc3f594273c5fb2061ac15bf76d4f4205f801d64dbef1312cfff9416555" exitCode=2 Nov 25 15:16:36 crc kubenswrapper[4806]: I1125 15:16:36.706698 4806 generic.go:334] "Generic (PLEG): container finished" podID="75f75cce-0bb5-4617-8f28-29a95214ce33" containerID="41056375e94d63baab11e0d758ce2ed64f7dcbea88b2ce4184f26769e583997e" exitCode=0 Nov 25 15:16:36 crc kubenswrapper[4806]: I1125 15:16:36.706707 4806 generic.go:334] "Generic (PLEG): container finished" podID="75f75cce-0bb5-4617-8f28-29a95214ce33" containerID="03e8b66fdf8d9d452e1e616471c23a4846207170a6bb46c424d699c2d94f5406" exitCode=0 Nov 25 15:16:36 crc kubenswrapper[4806]: I1125 15:16:36.706753 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75f75cce-0bb5-4617-8f28-29a95214ce33","Type":"ContainerDied","Data":"3dadd536152ccd090805434d48bce396dcca41e33f0b6e8426842303b2e6edff"} Nov 25 15:16:36 crc kubenswrapper[4806]: I1125 15:16:36.706781 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75f75cce-0bb5-4617-8f28-29a95214ce33","Type":"ContainerDied","Data":"ca0c5fc3f594273c5fb2061ac15bf76d4f4205f801d64dbef1312cfff9416555"} Nov 25 15:16:36 crc kubenswrapper[4806]: I1125 15:16:36.706790 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75f75cce-0bb5-4617-8f28-29a95214ce33","Type":"ContainerDied","Data":"41056375e94d63baab11e0d758ce2ed64f7dcbea88b2ce4184f26769e583997e"} Nov 25 15:16:36 crc kubenswrapper[4806]: I1125 15:16:36.706799 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75f75cce-0bb5-4617-8f28-29a95214ce33","Type":"ContainerDied","Data":"03e8b66fdf8d9d452e1e616471c23a4846207170a6bb46c424d699c2d94f5406"} Nov 
25 15:16:36 crc kubenswrapper[4806]: I1125 15:16:36.714497 4806 generic.go:334] "Generic (PLEG): container finished" podID="f1a35d44-1052-4c49-8bc7-c0cb3b038efd" containerID="c1e4c54e37651b2c1357e47818fd8913f1f44f7dcb8d652d14ffb66ea69f813f" exitCode=0 Nov 25 15:16:36 crc kubenswrapper[4806]: I1125 15:16:36.714532 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f1a35d44-1052-4c49-8bc7-c0cb3b038efd","Type":"ContainerDied","Data":"c1e4c54e37651b2c1357e47818fd8913f1f44f7dcb8d652d14ffb66ea69f813f"} Nov 25 15:16:36 crc kubenswrapper[4806]: I1125 15:16:36.749516 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6d6dfc6f67-wrhhk"] Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.065848 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-795f4db4bc-9vs9k" Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.114305 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsdw6\" (UniqueName: \"kubernetes.io/projected/28955e10-67f1-4268-b7e2-e7851398b376-kube-api-access-jsdw6\") pod \"28955e10-67f1-4268-b7e2-e7851398b376\" (UID: \"28955e10-67f1-4268-b7e2-e7851398b376\") " Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.114443 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28955e10-67f1-4268-b7e2-e7851398b376-dns-svc\") pod \"28955e10-67f1-4268-b7e2-e7851398b376\" (UID: \"28955e10-67f1-4268-b7e2-e7851398b376\") " Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.114503 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/28955e10-67f1-4268-b7e2-e7851398b376-dns-swift-storage-0\") pod \"28955e10-67f1-4268-b7e2-e7851398b376\" (UID: \"28955e10-67f1-4268-b7e2-e7851398b376\") " Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.114525 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28955e10-67f1-4268-b7e2-e7851398b376-ovsdbserver-sb\") pod \"28955e10-67f1-4268-b7e2-e7851398b376\" (UID: \"28955e10-67f1-4268-b7e2-e7851398b376\") " Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.114545 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28955e10-67f1-4268-b7e2-e7851398b376-ovsdbserver-nb\") pod \"28955e10-67f1-4268-b7e2-e7851398b376\" (UID: \"28955e10-67f1-4268-b7e2-e7851398b376\") " Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.114598 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28955e10-67f1-4268-b7e2-e7851398b376-config\") pod \"28955e10-67f1-4268-b7e2-e7851398b376\" (UID: \"28955e10-67f1-4268-b7e2-e7851398b376\") " Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.143406 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28955e10-67f1-4268-b7e2-e7851398b376-kube-api-access-jsdw6" (OuterVolumeSpecName: "kube-api-access-jsdw6") pod "28955e10-67f1-4268-b7e2-e7851398b376" (UID: "28955e10-67f1-4268-b7e2-e7851398b376"). InnerVolumeSpecName "kube-api-access-jsdw6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.217595 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsdw6\" (UniqueName: \"kubernetes.io/projected/28955e10-67f1-4268-b7e2-e7851398b376-kube-api-access-jsdw6\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.342386 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.392673 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28955e10-67f1-4268-b7e2-e7851398b376-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "28955e10-67f1-4268-b7e2-e7851398b376" (UID: "28955e10-67f1-4268-b7e2-e7851398b376"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.405919 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28955e10-67f1-4268-b7e2-e7851398b376-config" (OuterVolumeSpecName: "config") pod "28955e10-67f1-4268-b7e2-e7851398b376" (UID: "28955e10-67f1-4268-b7e2-e7851398b376"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.420552 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28955e10-67f1-4268-b7e2-e7851398b376-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "28955e10-67f1-4268-b7e2-e7851398b376" (UID: "28955e10-67f1-4268-b7e2-e7851398b376"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.427766 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75f75cce-0bb5-4617-8f28-29a95214ce33-log-httpd\") pod \"75f75cce-0bb5-4617-8f28-29a95214ce33\" (UID: \"75f75cce-0bb5-4617-8f28-29a95214ce33\") " Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.428644 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75f75cce-0bb5-4617-8f28-29a95214ce33-run-httpd\") pod \"75f75cce-0bb5-4617-8f28-29a95214ce33\" (UID: \"75f75cce-0bb5-4617-8f28-29a95214ce33\") " Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.428990 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75f75cce-0bb5-4617-8f28-29a95214ce33-sg-core-conf-yaml\") pod \"75f75cce-0bb5-4617-8f28-29a95214ce33\" (UID: \"75f75cce-0bb5-4617-8f28-29a95214ce33\") " Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.429161 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75f75cce-0bb5-4617-8f28-29a95214ce33-scripts\") pod \"75f75cce-0bb5-4617-8f28-29a95214ce33\" (UID: \"75f75cce-0bb5-4617-8f28-29a95214ce33\") " Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.429233 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75f75cce-0bb5-4617-8f28-29a95214ce33-combined-ca-bundle\") pod \"75f75cce-0bb5-4617-8f28-29a95214ce33\" (UID: \"75f75cce-0bb5-4617-8f28-29a95214ce33\") " Nov 25 15:16:37 crc 
kubenswrapper[4806]: I1125 15:16:37.429552 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75f75cce-0bb5-4617-8f28-29a95214ce33-config-data\") pod \"75f75cce-0bb5-4617-8f28-29a95214ce33\" (UID: \"75f75cce-0bb5-4617-8f28-29a95214ce33\") "
Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.430480 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfpvh\" (UniqueName: \"kubernetes.io/projected/75f75cce-0bb5-4617-8f28-29a95214ce33-kube-api-access-lfpvh\") pod \"75f75cce-0bb5-4617-8f28-29a95214ce33\" (UID: \"75f75cce-0bb5-4617-8f28-29a95214ce33\") "
Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.432356 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28955e10-67f1-4268-b7e2-e7851398b376-config\") on node \"crc\" DevicePath \"\""
Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.432396 4806 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28955e10-67f1-4268-b7e2-e7851398b376-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.432407 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28955e10-67f1-4268-b7e2-e7851398b376-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.439869 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75f75cce-0bb5-4617-8f28-29a95214ce33-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "75f75cce-0bb5-4617-8f28-29a95214ce33" (UID: "75f75cce-0bb5-4617-8f28-29a95214ce33"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.440818 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75f75cce-0bb5-4617-8f28-29a95214ce33-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "75f75cce-0bb5-4617-8f28-29a95214ce33" (UID: "75f75cce-0bb5-4617-8f28-29a95214ce33"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.444209 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75f75cce-0bb5-4617-8f28-29a95214ce33-kube-api-access-lfpvh" (OuterVolumeSpecName: "kube-api-access-lfpvh") pod "75f75cce-0bb5-4617-8f28-29a95214ce33" (UID: "75f75cce-0bb5-4617-8f28-29a95214ce33"). InnerVolumeSpecName "kube-api-access-lfpvh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.465598 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75f75cce-0bb5-4617-8f28-29a95214ce33-scripts" (OuterVolumeSpecName: "scripts") pod "75f75cce-0bb5-4617-8f28-29a95214ce33" (UID: "75f75cce-0bb5-4617-8f28-29a95214ce33"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.539678 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfpvh\" (UniqueName: \"kubernetes.io/projected/75f75cce-0bb5-4617-8f28-29a95214ce33-kube-api-access-lfpvh\") on node \"crc\" DevicePath \"\""
Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.539713 4806 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75f75cce-0bb5-4617-8f28-29a95214ce33-log-httpd\") on node \"crc\" DevicePath \"\""
Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.539722 4806 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/75f75cce-0bb5-4617-8f28-29a95214ce33-run-httpd\") on node \"crc\" DevicePath \"\""
Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.539730 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/75f75cce-0bb5-4617-8f28-29a95214ce33-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.574728 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28955e10-67f1-4268-b7e2-e7851398b376-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "28955e10-67f1-4268-b7e2-e7851398b376" (UID: "28955e10-67f1-4268-b7e2-e7851398b376"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.578254 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28955e10-67f1-4268-b7e2-e7851398b376-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "28955e10-67f1-4268-b7e2-e7851398b376" (UID: "28955e10-67f1-4268-b7e2-e7851398b376"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.599470 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75f75cce-0bb5-4617-8f28-29a95214ce33-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "75f75cce-0bb5-4617-8f28-29a95214ce33" (UID: "75f75cce-0bb5-4617-8f28-29a95214ce33"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.657586 4806 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/28955e10-67f1-4268-b7e2-e7851398b376-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.657615 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28955e10-67f1-4268-b7e2-e7851398b376-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.657625 4806 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/75f75cce-0bb5-4617-8f28-29a95214ce33-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.745899 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d6dfc6f67-wrhhk" event={"ID":"3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0","Type":"ContainerStarted","Data":"0ca2ee1b298dd0fd023958fba7ce238d77f8a2901f91327e128e28c4c3358705"}
Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.746245 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d6dfc6f67-wrhhk" event={"ID":"3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0","Type":"ContainerStarted","Data":"aa65a00c904ca1d900dd87f33010bfb3ad1347df8254a12cdab63cc2e0286a50"}
Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.758293 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75f75cce-0bb5-4617-8f28-29a95214ce33-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75f75cce-0bb5-4617-8f28-29a95214ce33" (UID: "75f75cce-0bb5-4617-8f28-29a95214ce33"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.766228 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-795f4db4bc-9vs9k" event={"ID":"28955e10-67f1-4268-b7e2-e7851398b376","Type":"ContainerDied","Data":"507fe005c099989eecddda0a42f8e04e12984c267526b9b625b2dde07b33d251"}
Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.766278 4806 scope.go:117] "RemoveContainer" containerID="994a7cfbb166f45f1789535926040258d123ff9f94f3cf83dd997b625595cf04"
Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.766429 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-795f4db4bc-9vs9k"
Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.768123 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75f75cce-0bb5-4617-8f28-29a95214ce33-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.778717 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"75f75cce-0bb5-4617-8f28-29a95214ce33","Type":"ContainerDied","Data":"fde047e84d665fe861a31df0efc49aa1e5b441be8237f0bbfdd2bab3a97bfb2c"}
Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.778819 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.782612 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75f75cce-0bb5-4617-8f28-29a95214ce33-config-data" (OuterVolumeSpecName: "config-data") pod "75f75cce-0bb5-4617-8f28-29a95214ce33" (UID: "75f75cce-0bb5-4617-8f28-29a95214ce33"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.785884 4806 generic.go:334] "Generic (PLEG): container finished" podID="f1a35d44-1052-4c49-8bc7-c0cb3b038efd" containerID="c08ba412b5d4d33ac6ee7c89d112c6de84041ad33172d269b029b4c8fd2bd177" exitCode=0
Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.785920 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f1a35d44-1052-4c49-8bc7-c0cb3b038efd","Type":"ContainerDied","Data":"c08ba412b5d4d33ac6ee7c89d112c6de84041ad33172d269b029b4c8fd2bd177"}
Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.785947 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f1a35d44-1052-4c49-8bc7-c0cb3b038efd","Type":"ContainerDied","Data":"88009ee120d1188a07ecef63fe7d727e1f0121abdd214f21b63e20a1d597fa53"}
Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.785957 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88009ee120d1188a07ecef63fe7d727e1f0121abdd214f21b63e20a1d597fa53"
Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.870505 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75f75cce-0bb5-4617-8f28-29a95214ce33-config-data\") on node \"crc\" DevicePath \"\""
Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.945919 4806 scope.go:117] "RemoveContainer" containerID="f5f05655b9c7f0f9914024f50ae178f561dfc97bd107f899e0b6eb76e725f492"
Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.950409 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.968214 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-795f4db4bc-9vs9k"]
Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.987621 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-795f4db4bc-9vs9k"]
Nov 25 15:16:37 crc kubenswrapper[4806]: I1125 15:16:37.987970 4806 scope.go:117] "RemoveContainer" containerID="3dadd536152ccd090805434d48bce396dcca41e33f0b6e8426842303b2e6edff"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.034717 4806 scope.go:117] "RemoveContainer" containerID="ca0c5fc3f594273c5fb2061ac15bf76d4f4205f801d64dbef1312cfff9416555"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.074023 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f1a35d44-1052-4c49-8bc7-c0cb3b038efd-config-data-custom\") pod \"f1a35d44-1052-4c49-8bc7-c0cb3b038efd\" (UID: \"f1a35d44-1052-4c49-8bc7-c0cb3b038efd\") "
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.074148 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1a35d44-1052-4c49-8bc7-c0cb3b038efd-scripts\") pod \"f1a35d44-1052-4c49-8bc7-c0cb3b038efd\" (UID: \"f1a35d44-1052-4c49-8bc7-c0cb3b038efd\") "
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.074381 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1a35d44-1052-4c49-8bc7-c0cb3b038efd-combined-ca-bundle\") pod \"f1a35d44-1052-4c49-8bc7-c0cb3b038efd\" (UID: \"f1a35d44-1052-4c49-8bc7-c0cb3b038efd\") "
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.074475 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cgdz\" (UniqueName: \"kubernetes.io/projected/f1a35d44-1052-4c49-8bc7-c0cb3b038efd-kube-api-access-8cgdz\") pod \"f1a35d44-1052-4c49-8bc7-c0cb3b038efd\" (UID: \"f1a35d44-1052-4c49-8bc7-c0cb3b038efd\") "
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.074514 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f1a35d44-1052-4c49-8bc7-c0cb3b038efd-etc-machine-id\") pod \"f1a35d44-1052-4c49-8bc7-c0cb3b038efd\" (UID: \"f1a35d44-1052-4c49-8bc7-c0cb3b038efd\") "
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.074561 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1a35d44-1052-4c49-8bc7-c0cb3b038efd-config-data\") pod \"f1a35d44-1052-4c49-8bc7-c0cb3b038efd\" (UID: \"f1a35d44-1052-4c49-8bc7-c0cb3b038efd\") "
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.076504 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1a35d44-1052-4c49-8bc7-c0cb3b038efd-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "f1a35d44-1052-4c49-8bc7-c0cb3b038efd" (UID: "f1a35d44-1052-4c49-8bc7-c0cb3b038efd"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.083504 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1a35d44-1052-4c49-8bc7-c0cb3b038efd-scripts" (OuterVolumeSpecName: "scripts") pod "f1a35d44-1052-4c49-8bc7-c0cb3b038efd" (UID: "f1a35d44-1052-4c49-8bc7-c0cb3b038efd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.093518 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1a35d44-1052-4c49-8bc7-c0cb3b038efd-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f1a35d44-1052-4c49-8bc7-c0cb3b038efd" (UID: "f1a35d44-1052-4c49-8bc7-c0cb3b038efd"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.096514 4806 scope.go:117] "RemoveContainer" containerID="41056375e94d63baab11e0d758ce2ed64f7dcbea88b2ce4184f26769e583997e"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.098070 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1a35d44-1052-4c49-8bc7-c0cb3b038efd-kube-api-access-8cgdz" (OuterVolumeSpecName: "kube-api-access-8cgdz") pod "f1a35d44-1052-4c49-8bc7-c0cb3b038efd" (UID: "f1a35d44-1052-4c49-8bc7-c0cb3b038efd"). InnerVolumeSpecName "kube-api-access-8cgdz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.125244 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28955e10-67f1-4268-b7e2-e7851398b376" path="/var/lib/kubelet/pods/28955e10-67f1-4268-b7e2-e7851398b376/volumes"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.151974 4806 scope.go:117] "RemoveContainer" containerID="03e8b66fdf8d9d452e1e616471c23a4846207170a6bb46c424d699c2d94f5406"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.154580 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1a35d44-1052-4c49-8bc7-c0cb3b038efd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f1a35d44-1052-4c49-8bc7-c0cb3b038efd" (UID: "f1a35d44-1052-4c49-8bc7-c0cb3b038efd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.177291 4806 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f1a35d44-1052-4c49-8bc7-c0cb3b038efd-config-data-custom\") on node \"crc\" DevicePath \"\""
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.177344 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1a35d44-1052-4c49-8bc7-c0cb3b038efd-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.177355 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1a35d44-1052-4c49-8bc7-c0cb3b038efd-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.177363 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cgdz\" (UniqueName: \"kubernetes.io/projected/f1a35d44-1052-4c49-8bc7-c0cb3b038efd-kube-api-access-8cgdz\") on node \"crc\" DevicePath \"\""
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.177373 4806 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f1a35d44-1052-4c49-8bc7-c0cb3b038efd-etc-machine-id\") on node \"crc\" DevicePath \"\""
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.203944 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.209635 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.235550 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Nov 25 15:16:38 crc kubenswrapper[4806]: E1125 15:16:38.236603 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28955e10-67f1-4268-b7e2-e7851398b376" containerName="init"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.236627 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="28955e10-67f1-4268-b7e2-e7851398b376" containerName="init"
Nov 25 15:16:38 crc kubenswrapper[4806]: E1125 15:16:38.236652 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75f75cce-0bb5-4617-8f28-29a95214ce33" containerName="proxy-httpd"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.236662 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="75f75cce-0bb5-4617-8f28-29a95214ce33" containerName="proxy-httpd"
Nov 25 15:16:38 crc kubenswrapper[4806]: E1125 15:16:38.236683 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75f75cce-0bb5-4617-8f28-29a95214ce33" containerName="ceilometer-notification-agent"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.236690 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="75f75cce-0bb5-4617-8f28-29a95214ce33" containerName="ceilometer-notification-agent"
Nov 25 15:16:38 crc kubenswrapper[4806]: E1125 15:16:38.236702 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75f75cce-0bb5-4617-8f28-29a95214ce33" containerName="ceilometer-central-agent"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.236708 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="75f75cce-0bb5-4617-8f28-29a95214ce33" containerName="ceilometer-central-agent"
Nov 25 15:16:38 crc kubenswrapper[4806]: E1125 15:16:38.236723 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1a35d44-1052-4c49-8bc7-c0cb3b038efd" containerName="cinder-scheduler"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.236731 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1a35d44-1052-4c49-8bc7-c0cb3b038efd" containerName="cinder-scheduler"
Nov 25 15:16:38 crc kubenswrapper[4806]: E1125 15:16:38.236748 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75f75cce-0bb5-4617-8f28-29a95214ce33" containerName="sg-core"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.236757 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="75f75cce-0bb5-4617-8f28-29a95214ce33" containerName="sg-core"
Nov 25 15:16:38 crc kubenswrapper[4806]: E1125 15:16:38.236776 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28955e10-67f1-4268-b7e2-e7851398b376" containerName="dnsmasq-dns"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.236784 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="28955e10-67f1-4268-b7e2-e7851398b376" containerName="dnsmasq-dns"
Nov 25 15:16:38 crc kubenswrapper[4806]: E1125 15:16:38.236797 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1a35d44-1052-4c49-8bc7-c0cb3b038efd" containerName="probe"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.236804 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1a35d44-1052-4c49-8bc7-c0cb3b038efd" containerName="probe"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.237060 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="75f75cce-0bb5-4617-8f28-29a95214ce33" containerName="sg-core"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.237085 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="75f75cce-0bb5-4617-8f28-29a95214ce33" containerName="proxy-httpd"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.237100 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="28955e10-67f1-4268-b7e2-e7851398b376" containerName="dnsmasq-dns"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.237112 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="75f75cce-0bb5-4617-8f28-29a95214ce33" containerName="ceilometer-notification-agent"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.237131 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1a35d44-1052-4c49-8bc7-c0cb3b038efd" containerName="cinder-scheduler"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.237146 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1a35d44-1052-4c49-8bc7-c0cb3b038efd" containerName="probe"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.237164 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="75f75cce-0bb5-4617-8f28-29a95214ce33" containerName="ceilometer-central-agent"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.241890 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.248100 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.248272 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.263198 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.276806 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1a35d44-1052-4c49-8bc7-c0cb3b038efd-config-data" (OuterVolumeSpecName: "config-data") pod "f1a35d44-1052-4c49-8bc7-c0cb3b038efd" (UID: "f1a35d44-1052-4c49-8bc7-c0cb3b038efd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.277814 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="d875dfe1-f943-4577-afd4-e301920efac6" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.180:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.283663 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1a35d44-1052-4c49-8bc7-c0cb3b038efd-config-data\") on node \"crc\" DevicePath \"\""
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.385400 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15a33776-cbde-4c55-9a3f-dc2e2cbd7de4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"15a33776-cbde-4c55-9a3f-dc2e2cbd7de4\") " pod="openstack/ceilometer-0"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.385748 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15a33776-cbde-4c55-9a3f-dc2e2cbd7de4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"15a33776-cbde-4c55-9a3f-dc2e2cbd7de4\") " pod="openstack/ceilometer-0"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.385856 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15a33776-cbde-4c55-9a3f-dc2e2cbd7de4-log-httpd\") pod \"ceilometer-0\" (UID: \"15a33776-cbde-4c55-9a3f-dc2e2cbd7de4\") " pod="openstack/ceilometer-0"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.385908 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15a33776-cbde-4c55-9a3f-dc2e2cbd7de4-run-httpd\") pod \"ceilometer-0\" (UID: \"15a33776-cbde-4c55-9a3f-dc2e2cbd7de4\") " pod="openstack/ceilometer-0"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.385974 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15a33776-cbde-4c55-9a3f-dc2e2cbd7de4-config-data\") pod \"ceilometer-0\" (UID: \"15a33776-cbde-4c55-9a3f-dc2e2cbd7de4\") " pod="openstack/ceilometer-0"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.386033 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsf7m\" (UniqueName: \"kubernetes.io/projected/15a33776-cbde-4c55-9a3f-dc2e2cbd7de4-kube-api-access-nsf7m\") pod \"ceilometer-0\" (UID: \"15a33776-cbde-4c55-9a3f-dc2e2cbd7de4\") " pod="openstack/ceilometer-0"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.386093 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15a33776-cbde-4c55-9a3f-dc2e2cbd7de4-scripts\") pod \"ceilometer-0\" (UID: \"15a33776-cbde-4c55-9a3f-dc2e2cbd7de4\") " pod="openstack/ceilometer-0"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.487521 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsf7m\" (UniqueName: \"kubernetes.io/projected/15a33776-cbde-4c55-9a3f-dc2e2cbd7de4-kube-api-access-nsf7m\") pod \"ceilometer-0\" (UID: \"15a33776-cbde-4c55-9a3f-dc2e2cbd7de4\") " pod="openstack/ceilometer-0"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.487579 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15a33776-cbde-4c55-9a3f-dc2e2cbd7de4-scripts\") pod \"ceilometer-0\" (UID: \"15a33776-cbde-4c55-9a3f-dc2e2cbd7de4\") " pod="openstack/ceilometer-0"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.487651 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15a33776-cbde-4c55-9a3f-dc2e2cbd7de4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"15a33776-cbde-4c55-9a3f-dc2e2cbd7de4\") " pod="openstack/ceilometer-0"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.487673 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15a33776-cbde-4c55-9a3f-dc2e2cbd7de4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"15a33776-cbde-4c55-9a3f-dc2e2cbd7de4\") " pod="openstack/ceilometer-0"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.487755 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15a33776-cbde-4c55-9a3f-dc2e2cbd7de4-log-httpd\") pod \"ceilometer-0\" (UID: \"15a33776-cbde-4c55-9a3f-dc2e2cbd7de4\") " pod="openstack/ceilometer-0"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.487786 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15a33776-cbde-4c55-9a3f-dc2e2cbd7de4-run-httpd\") pod \"ceilometer-0\" (UID: \"15a33776-cbde-4c55-9a3f-dc2e2cbd7de4\") " pod="openstack/ceilometer-0"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.487824 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15a33776-cbde-4c55-9a3f-dc2e2cbd7de4-config-data\") pod \"ceilometer-0\" (UID: \"15a33776-cbde-4c55-9a3f-dc2e2cbd7de4\") " pod="openstack/ceilometer-0"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.488769 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15a33776-cbde-4c55-9a3f-dc2e2cbd7de4-log-httpd\") pod \"ceilometer-0\" (UID: \"15a33776-cbde-4c55-9a3f-dc2e2cbd7de4\") " pod="openstack/ceilometer-0"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.491289 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15a33776-cbde-4c55-9a3f-dc2e2cbd7de4-run-httpd\") pod \"ceilometer-0\" (UID: \"15a33776-cbde-4c55-9a3f-dc2e2cbd7de4\") " pod="openstack/ceilometer-0"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.495531 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15a33776-cbde-4c55-9a3f-dc2e2cbd7de4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"15a33776-cbde-4c55-9a3f-dc2e2cbd7de4\") " pod="openstack/ceilometer-0"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.495824 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15a33776-cbde-4c55-9a3f-dc2e2cbd7de4-config-data\") pod \"ceilometer-0\" (UID: \"15a33776-cbde-4c55-9a3f-dc2e2cbd7de4\") " pod="openstack/ceilometer-0"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.497269 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15a33776-cbde-4c55-9a3f-dc2e2cbd7de4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"15a33776-cbde-4c55-9a3f-dc2e2cbd7de4\") " pod="openstack/ceilometer-0"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.499103 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15a33776-cbde-4c55-9a3f-dc2e2cbd7de4-scripts\") pod \"ceilometer-0\" (UID: \"15a33776-cbde-4c55-9a3f-dc2e2cbd7de4\") " pod="openstack/ceilometer-0"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.510718 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsf7m\" (UniqueName: \"kubernetes.io/projected/15a33776-cbde-4c55-9a3f-dc2e2cbd7de4-kube-api-access-nsf7m\") pod \"ceilometer-0\" (UID: \"15a33776-cbde-4c55-9a3f-dc2e2cbd7de4\") " pod="openstack/ceilometer-0"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.582126 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.799431 4806 generic.go:334] "Generic (PLEG): container finished" podID="7aaf07d8-e5c5-4119-9d4a-df8d6c296541" containerID="63213603e00965e9462d2d20b54f42e994509ed0cdfaf078ae93783aa6203c46" exitCode=0
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.799498 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-khx7z" event={"ID":"7aaf07d8-e5c5-4119-9d4a-df8d6c296541","Type":"ContainerDied","Data":"63213603e00965e9462d2d20b54f42e994509ed0cdfaf078ae93783aa6203c46"}
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.805622 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d6dfc6f67-wrhhk" event={"ID":"3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0","Type":"ContainerStarted","Data":"9af01a4c79d0cbfb23ea12081492ed254c935695c463e651d97543ff5fd738f0"}
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.805689 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6d6dfc6f67-wrhhk"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.806393 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6d6dfc6f67-wrhhk"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.829779 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.890183 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6d6dfc6f67-wrhhk" podStartSLOduration=3.890161636 podStartE2EDuration="3.890161636s" podCreationTimestamp="2025-11-25 15:16:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:16:38.840947674 +0000 UTC m=+1431.493090095" watchObservedRunningTime="2025-11-25 15:16:38.890161636 +0000 UTC m=+1431.542304067"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.918958 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.948866 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.962381 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.964214 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.966815 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Nov 25 15:16:38 crc kubenswrapper[4806]: I1125 15:16:38.984120 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 25 15:16:39 crc kubenswrapper[4806]: I1125 15:16:39.099881 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6efd5be-f7be-4981-aa85-710e9a0b3dc7-config-data\") pod \"cinder-scheduler-0\" (UID: \"a6efd5be-f7be-4981-aa85-710e9a0b3dc7\") " pod="openstack/cinder-scheduler-0"
Nov 25 15:16:39 crc kubenswrapper[4806]: I1125 15:16:39.099956 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6efd5be-f7be-4981-aa85-710e9a0b3dc7-scripts\") pod \"cinder-scheduler-0\" (UID: \"a6efd5be-f7be-4981-aa85-710e9a0b3dc7\") " pod="openstack/cinder-scheduler-0"
Nov 25 15:16:39 crc kubenswrapper[4806]: I1125 15:16:39.099987 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ctkb\" (UniqueName: \"kubernetes.io/projected/a6efd5be-f7be-4981-aa85-710e9a0b3dc7-kube-api-access-7ctkb\") pod \"cinder-scheduler-0\" (UID: \"a6efd5be-f7be-4981-aa85-710e9a0b3dc7\") " pod="openstack/cinder-scheduler-0"
Nov 25 15:16:39 crc kubenswrapper[4806]: I1125 15:16:39.100035 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a6efd5be-f7be-4981-aa85-710e9a0b3dc7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a6efd5be-f7be-4981-aa85-710e9a0b3dc7\") " pod="openstack/cinder-scheduler-0"
Nov 25 15:16:39 crc kubenswrapper[4806]: I1125 15:16:39.100055 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6efd5be-f7be-4981-aa85-710e9a0b3dc7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a6efd5be-f7be-4981-aa85-710e9a0b3dc7\") " pod="openstack/cinder-scheduler-0"
Nov 25 15:16:39 crc kubenswrapper[4806]: I1125 15:16:39.100088 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a6efd5be-f7be-4981-aa85-710e9a0b3dc7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a6efd5be-f7be-4981-aa85-710e9a0b3dc7\") " pod="openstack/cinder-scheduler-0"
Nov 25 15:16:39 crc kubenswrapper[4806]: I1125 15:16:39.202281 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a6efd5be-f7be-4981-aa85-710e9a0b3dc7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a6efd5be-f7be-4981-aa85-710e9a0b3dc7\") " pod="openstack/cinder-scheduler-0"
Nov 25 15:16:39 crc kubenswrapper[4806]: I1125 15:16:39.202349 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6efd5be-f7be-4981-aa85-710e9a0b3dc7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a6efd5be-f7be-4981-aa85-710e9a0b3dc7\") " pod="openstack/cinder-scheduler-0"
Nov 25 15:16:39 crc kubenswrapper[4806]: I1125 15:16:39.202374 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a6efd5be-f7be-4981-aa85-710e9a0b3dc7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a6efd5be-f7be-4981-aa85-710e9a0b3dc7\") " pod="openstack/cinder-scheduler-0"
Nov 25 15:16:39 crc kubenswrapper[4806]: I1125 15:16:39.202467 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6efd5be-f7be-4981-aa85-710e9a0b3dc7-config-data\") pod \"cinder-scheduler-0\" (UID: \"a6efd5be-f7be-4981-aa85-710e9a0b3dc7\") " pod="openstack/cinder-scheduler-0"
Nov 25 15:16:39 crc kubenswrapper[4806]: I1125 15:16:39.202510 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6efd5be-f7be-4981-aa85-710e9a0b3dc7-scripts\") pod \"cinder-scheduler-0\" (UID: \"a6efd5be-f7be-4981-aa85-710e9a0b3dc7\") " pod="openstack/cinder-scheduler-0"
Nov 25 15:16:39 crc kubenswrapper[4806]: I1125 15:16:39.202535 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ctkb\" (UniqueName: \"kubernetes.io/projected/a6efd5be-f7be-4981-aa85-710e9a0b3dc7-kube-api-access-7ctkb\") pod \"cinder-scheduler-0\" (UID: \"a6efd5be-f7be-4981-aa85-710e9a0b3dc7\") " pod="openstack/cinder-scheduler-0"
Nov 25 15:16:39 crc kubenswrapper[4806]: I1125 15:16:39.203238 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a6efd5be-f7be-4981-aa85-710e9a0b3dc7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a6efd5be-f7be-4981-aa85-710e9a0b3dc7\") " pod="openstack/cinder-scheduler-0"
Nov 25 15:16:39 crc kubenswrapper[4806]: I1125 15:16:39.214475 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a6efd5be-f7be-4981-aa85-710e9a0b3dc7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a6efd5be-f7be-4981-aa85-710e9a0b3dc7\") " pod="openstack/cinder-scheduler-0"
Nov 25 15:16:39 crc kubenswrapper[4806]: I1125 15:16:39.214784 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6efd5be-f7be-4981-aa85-710e9a0b3dc7-scripts\") pod \"cinder-scheduler-0\" (UID: \"a6efd5be-f7be-4981-aa85-710e9a0b3dc7\") " pod="openstack/cinder-scheduler-0"
Nov 25 15:16:39 crc kubenswrapper[4806]: I1125 15:16:39.216043 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6efd5be-f7be-4981-aa85-710e9a0b3dc7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a6efd5be-f7be-4981-aa85-710e9a0b3dc7\") " pod="openstack/cinder-scheduler-0"
Nov 25 15:16:39 crc kubenswrapper[4806]: I1125 15:16:39.241546 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6efd5be-f7be-4981-aa85-710e9a0b3dc7-config-data\") pod \"cinder-scheduler-0\" (UID: \"a6efd5be-f7be-4981-aa85-710e9a0b3dc7\") " pod="openstack/cinder-scheduler-0"
Nov 25 15:16:39 crc kubenswrapper[4806]: I1125 15:16:39.251967 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ctkb\" (UniqueName: \"kubernetes.io/projected/a6efd5be-f7be-4981-aa85-710e9a0b3dc7-kube-api-access-7ctkb\") pod \"cinder-scheduler-0\" (UID: \"a6efd5be-f7be-4981-aa85-710e9a0b3dc7\") " pod="openstack/cinder-scheduler-0"
Nov 25 15:16:39 crc kubenswrapper[4806]: I1125 15:16:39.301890 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Nov 25 15:16:40 crc kubenswrapper[4806]: I1125 15:16:40.109516 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75f75cce-0bb5-4617-8f28-29a95214ce33" path="/var/lib/kubelet/pods/75f75cce-0bb5-4617-8f28-29a95214ce33/volumes"
Nov 25 15:16:40 crc kubenswrapper[4806]: I1125 15:16:40.111754 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1a35d44-1052-4c49-8bc7-c0cb3b038efd" path="/var/lib/kubelet/pods/f1a35d44-1052-4c49-8bc7-c0cb3b038efd/volumes"
Nov 25 15:16:40 crc kubenswrapper[4806]: I1125 15:16:40.989832 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-77qk4" podUID="19d636cf-e82d-48c3-82db-321f0505c5ab" containerName="registry-server" probeResult="failure" output=<
Nov 25 15:16:40 crc kubenswrapper[4806]: timeout: failed to connect service ":50051" within 1s
Nov 25 15:16:40 crc kubenswrapper[4806]: >
Nov 25 15:16:41 crc kubenswrapper[4806]: I1125 15:16:41.942606 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 25 15:16:43 crc kubenswrapper[4806]: I1125 15:16:43.663915 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 25 15:16:43 crc kubenswrapper[4806]: I1125 15:16:43.664525 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="359539be-7a7d-48d3-8738-83765f897fa4" containerName="glance-log" containerID="cri-o://e98a613094a0823be37da0b1e6741b26dddee757216a105b12e0ee17f23a1186" gracePeriod=30
Nov 25 15:16:43 crc kubenswrapper[4806]: I1125 15:16:43.665099 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="359539be-7a7d-48d3-8738-83765f897fa4" containerName="glance-httpd" containerID="cri-o://4d36056d5a652030f0de6da870de00f5050e9b3e3e536651a9e06fe84ed3ce6f" gracePeriod=30
Nov 25 15:16:43 crc kubenswrapper[4806]: I1125 15:16:43.898147 4806 generic.go:334] "Generic (PLEG): container finished" podID="359539be-7a7d-48d3-8738-83765f897fa4" containerID="e98a613094a0823be37da0b1e6741b26dddee757216a105b12e0ee17f23a1186" exitCode=143
Nov 25 15:16:43 crc kubenswrapper[4806]: I1125 15:16:43.898406 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"359539be-7a7d-48d3-8738-83765f897fa4","Type":"ContainerDied","Data":"e98a613094a0823be37da0b1e6741b26dddee757216a105b12e0ee17f23a1186"}
Nov 25 15:16:45 crc kubenswrapper[4806]: I1125 15:16:45.726783 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6d6dfc6f67-wrhhk"
Nov 25 15:16:45 crc kubenswrapper[4806]: I1125 15:16:45.728252 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6d6dfc6f67-wrhhk"
Nov 25 15:16:46 crc kubenswrapper[4806]: I1125 15:16:46.148164 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-khx7z"
Nov 25 15:16:46 crc kubenswrapper[4806]: I1125 15:16:46.207206 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7aaf07d8-e5c5-4119-9d4a-df8d6c296541-scripts\") pod \"7aaf07d8-e5c5-4119-9d4a-df8d6c296541\" (UID: \"7aaf07d8-e5c5-4119-9d4a-df8d6c296541\") "
Nov 25 15:16:46 crc kubenswrapper[4806]: I1125 15:16:46.207657 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aaf07d8-e5c5-4119-9d4a-df8d6c296541-combined-ca-bundle\") pod \"7aaf07d8-e5c5-4119-9d4a-df8d6c296541\" (UID: \"7aaf07d8-e5c5-4119-9d4a-df8d6c296541\") "
Nov 25 15:16:46 crc kubenswrapper[4806]: I1125 15:16:46.208720 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/7aaf07d8-e5c5-4119-9d4a-df8d6c296541-certs\") pod \"7aaf07d8-e5c5-4119-9d4a-df8d6c296541\" (UID: \"7aaf07d8-e5c5-4119-9d4a-df8d6c296541\") "
Nov 25 15:16:46 crc kubenswrapper[4806]: I1125 15:16:46.208853 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qdqn\" (UniqueName: \"kubernetes.io/projected/7aaf07d8-e5c5-4119-9d4a-df8d6c296541-kube-api-access-8qdqn\") pod \"7aaf07d8-e5c5-4119-9d4a-df8d6c296541\" (UID: \"7aaf07d8-e5c5-4119-9d4a-df8d6c296541\") "
Nov 25 15:16:46 crc kubenswrapper[4806]: I1125 15:16:46.209003 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7aaf07d8-e5c5-4119-9d4a-df8d6c296541-config-data\") pod \"7aaf07d8-e5c5-4119-9d4a-df8d6c296541\" (UID: \"7aaf07d8-e5c5-4119-9d4a-df8d6c296541\") "
Nov 25 15:16:46 crc kubenswrapper[4806]: I1125 15:16:46.216428 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7aaf07d8-e5c5-4119-9d4a-df8d6c296541-certs" (OuterVolumeSpecName: "certs") pod "7aaf07d8-e5c5-4119-9d4a-df8d6c296541" (UID: "7aaf07d8-e5c5-4119-9d4a-df8d6c296541"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:16:46 crc kubenswrapper[4806]: I1125 15:16:46.224904 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aaf07d8-e5c5-4119-9d4a-df8d6c296541-scripts" (OuterVolumeSpecName: "scripts") pod "7aaf07d8-e5c5-4119-9d4a-df8d6c296541" (UID: "7aaf07d8-e5c5-4119-9d4a-df8d6c296541"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:16:46 crc kubenswrapper[4806]: I1125 15:16:46.232158 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7aaf07d8-e5c5-4119-9d4a-df8d6c296541-kube-api-access-8qdqn" (OuterVolumeSpecName: "kube-api-access-8qdqn") pod "7aaf07d8-e5c5-4119-9d4a-df8d6c296541" (UID: "7aaf07d8-e5c5-4119-9d4a-df8d6c296541"). InnerVolumeSpecName "kube-api-access-8qdqn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:16:46 crc kubenswrapper[4806]: I1125 15:16:46.275970 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aaf07d8-e5c5-4119-9d4a-df8d6c296541-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7aaf07d8-e5c5-4119-9d4a-df8d6c296541" (UID: "7aaf07d8-e5c5-4119-9d4a-df8d6c296541"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:16:46 crc kubenswrapper[4806]: I1125 15:16:46.280845 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7aaf07d8-e5c5-4119-9d4a-df8d6c296541-config-data" (OuterVolumeSpecName: "config-data") pod "7aaf07d8-e5c5-4119-9d4a-df8d6c296541" (UID: "7aaf07d8-e5c5-4119-9d4a-df8d6c296541"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:16:46 crc kubenswrapper[4806]: I1125 15:16:46.315686 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7aaf07d8-e5c5-4119-9d4a-df8d6c296541-config-data\") on node \"crc\" DevicePath \"\""
Nov 25 15:16:46 crc kubenswrapper[4806]: I1125 15:16:46.315736 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7aaf07d8-e5c5-4119-9d4a-df8d6c296541-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 15:16:46 crc kubenswrapper[4806]: I1125 15:16:46.315747 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7aaf07d8-e5c5-4119-9d4a-df8d6c296541-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 15:16:46 crc kubenswrapper[4806]: I1125 15:16:46.315761 4806 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/7aaf07d8-e5c5-4119-9d4a-df8d6c296541-certs\") on node \"crc\" DevicePath \"\""
Nov 25 15:16:46 crc kubenswrapper[4806]: I1125 15:16:46.315773 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qdqn\" (UniqueName: \"kubernetes.io/projected/7aaf07d8-e5c5-4119-9d4a-df8d6c296541-kube-api-access-8qdqn\") on node \"crc\" DevicePath \"\""
Nov 25 15:16:46 crc kubenswrapper[4806]: I1125 15:16:46.524420 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 25 15:16:46 crc kubenswrapper[4806]: W1125 15:16:46.645139 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6efd5be_f7be_4981_aa85_710e9a0b3dc7.slice/crio-5341b3121736cd50edf407f9655f8c34be20b07a5dcb983cf2e803d3a45acef0 WatchSource:0}: Error finding container 5341b3121736cd50edf407f9655f8c34be20b07a5dcb983cf2e803d3a45acef0: Status 404 returned error can't find the container with id 5341b3121736cd50edf407f9655f8c34be20b07a5dcb983cf2e803d3a45acef0
Nov 25 15:16:46 crc kubenswrapper[4806]: I1125 15:16:46.648389 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 25 15:16:46 crc kubenswrapper[4806]: I1125 15:16:46.936227 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a6efd5be-f7be-4981-aa85-710e9a0b3dc7","Type":"ContainerStarted","Data":"5341b3121736cd50edf407f9655f8c34be20b07a5dcb983cf2e803d3a45acef0"}
Nov 25 15:16:46 crc kubenswrapper[4806]: I1125 15:16:46.939027 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"3e62db5f-8827-474f-9dc5-654aaa347996","Type":"ContainerStarted","Data":"fe37fe629d04bf5396fe9d6c1dd43e5eb3b907f84cfcc7aa2d91d579c732aa70"}
Nov 25 15:16:46 crc kubenswrapper[4806]: I1125 15:16:46.943374 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-khx7z"
Nov 25 15:16:46 crc kubenswrapper[4806]: I1125 15:16:46.943415 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-khx7z" event={"ID":"7aaf07d8-e5c5-4119-9d4a-df8d6c296541","Type":"ContainerDied","Data":"d5965a1e737f326cf4a5198c2fe2c76e631a95056445b6a8595e72c099ad4cbe"}
Nov 25 15:16:46 crc kubenswrapper[4806]: I1125 15:16:46.943463 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5965a1e737f326cf4a5198c2fe2c76e631a95056445b6a8595e72c099ad4cbe"
Nov 25 15:16:46 crc kubenswrapper[4806]: I1125 15:16:46.949168 4806 generic.go:334] "Generic (PLEG): container finished" podID="359539be-7a7d-48d3-8738-83765f897fa4" containerID="4d36056d5a652030f0de6da870de00f5050e9b3e3e536651a9e06fe84ed3ce6f" exitCode=0
Nov 25 15:16:46 crc kubenswrapper[4806]: I1125 15:16:46.949244 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"359539be-7a7d-48d3-8738-83765f897fa4","Type":"ContainerDied","Data":"4d36056d5a652030f0de6da870de00f5050e9b3e3e536651a9e06fe84ed3ce6f"}
Nov 25 15:16:46 crc kubenswrapper[4806]: I1125 15:16:46.952228 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15a33776-cbde-4c55-9a3f-dc2e2cbd7de4","Type":"ContainerStarted","Data":"4325d635abde4019d1c07cec8d2275ada327a20e2ccee07e83ad7c6f57900749"}
Nov 25 15:16:46 crc kubenswrapper[4806]: I1125 15:16:46.956829 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.9481983120000002 podStartE2EDuration="22.956812623s" podCreationTimestamp="2025-11-25 15:16:24 +0000 UTC" firstStartedPulling="2025-11-25 15:16:25.200082532 +0000 UTC m=+1417.852224943" lastFinishedPulling="2025-11-25 15:16:46.208696843 +0000 UTC m=+1438.860839254" observedRunningTime="2025-11-25 15:16:46.955393372 +0000 UTC m=+1439.607535783" watchObservedRunningTime="2025-11-25 15:16:46.956812623 +0000 UTC m=+1439.608955034"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.366840 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-proc-0"]
Nov 25 15:16:47 crc kubenswrapper[4806]: E1125 15:16:47.367458 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7aaf07d8-e5c5-4119-9d4a-df8d6c296541" containerName="cloudkitty-storageinit"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.367473 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="7aaf07d8-e5c5-4119-9d4a-df8d6c296541" containerName="cloudkitty-storageinit"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.371050 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="7aaf07d8-e5c5-4119-9d4a-df8d6c296541" containerName="cloudkitty-storageinit"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.371827 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.376707 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.376911 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.377026 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.384884 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-dqwtc"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.384957 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-proc-config-data"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.388673 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"]
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.440855 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3de7f512-f839-4abf-9ffa-e7d70ba8eac2-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"3de7f512-f839-4abf-9ffa-e7d70ba8eac2\") " pod="openstack/cloudkitty-proc-0"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.440894 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/3de7f512-f839-4abf-9ffa-e7d70ba8eac2-certs\") pod \"cloudkitty-proc-0\" (UID: \"3de7f512-f839-4abf-9ffa-e7d70ba8eac2\") " pod="openstack/cloudkitty-proc-0"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.441117 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q68r6\" (UniqueName: \"kubernetes.io/projected/3de7f512-f839-4abf-9ffa-e7d70ba8eac2-kube-api-access-q68r6\") pod \"cloudkitty-proc-0\" (UID: \"3de7f512-f839-4abf-9ffa-e7d70ba8eac2\") " pod="openstack/cloudkitty-proc-0"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.441146 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3de7f512-f839-4abf-9ffa-e7d70ba8eac2-scripts\") pod \"cloudkitty-proc-0\" (UID: \"3de7f512-f839-4abf-9ffa-e7d70ba8eac2\") " pod="openstack/cloudkitty-proc-0"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.441183 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de7f512-f839-4abf-9ffa-e7d70ba8eac2-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"3de7f512-f839-4abf-9ffa-e7d70ba8eac2\") " pod="openstack/cloudkitty-proc-0"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.441227 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3de7f512-f839-4abf-9ffa-e7d70ba8eac2-config-data\") pod \"cloudkitty-proc-0\" (UID: \"3de7f512-f839-4abf-9ffa-e7d70ba8eac2\") " pod="openstack/cloudkitty-proc-0"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.469326 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-l8khz"]
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.471006 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67bdc55879-l8khz"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.497069 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-l8khz"]
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.546538 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3de7f512-f839-4abf-9ffa-e7d70ba8eac2-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"3de7f512-f839-4abf-9ffa-e7d70ba8eac2\") " pod="openstack/cloudkitty-proc-0"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.546788 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79229967-32d3-4ca1-ac03-ab3364d41ca5-ovsdbserver-sb\") pod \"dnsmasq-dns-67bdc55879-l8khz\" (UID: \"79229967-32d3-4ca1-ac03-ab3364d41ca5\") " pod="openstack/dnsmasq-dns-67bdc55879-l8khz"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.546869 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/3de7f512-f839-4abf-9ffa-e7d70ba8eac2-certs\") pod \"cloudkitty-proc-0\" (UID: \"3de7f512-f839-4abf-9ffa-e7d70ba8eac2\") " pod="openstack/cloudkitty-proc-0"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.546968 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79229967-32d3-4ca1-ac03-ab3364d41ca5-dns-svc\") pod \"dnsmasq-dns-67bdc55879-l8khz\" (UID: \"79229967-32d3-4ca1-ac03-ab3364d41ca5\") " pod="openstack/dnsmasq-dns-67bdc55879-l8khz"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.547048 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79229967-32d3-4ca1-ac03-ab3364d41ca5-config\") pod \"dnsmasq-dns-67bdc55879-l8khz\" (UID: \"79229967-32d3-4ca1-ac03-ab3364d41ca5\") " pod="openstack/dnsmasq-dns-67bdc55879-l8khz"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.547142 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79229967-32d3-4ca1-ac03-ab3364d41ca5-ovsdbserver-nb\") pod \"dnsmasq-dns-67bdc55879-l8khz\" (UID: \"79229967-32d3-4ca1-ac03-ab3364d41ca5\") " pod="openstack/dnsmasq-dns-67bdc55879-l8khz"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.547240 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q68r6\" (UniqueName: \"kubernetes.io/projected/3de7f512-f839-4abf-9ffa-e7d70ba8eac2-kube-api-access-q68r6\") pod \"cloudkitty-proc-0\" (UID: \"3de7f512-f839-4abf-9ffa-e7d70ba8eac2\") " pod="openstack/cloudkitty-proc-0"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.547345 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnpjk\" (UniqueName: \"kubernetes.io/projected/79229967-32d3-4ca1-ac03-ab3364d41ca5-kube-api-access-wnpjk\") pod \"dnsmasq-dns-67bdc55879-l8khz\" (UID: \"79229967-32d3-4ca1-ac03-ab3364d41ca5\") " pod="openstack/dnsmasq-dns-67bdc55879-l8khz"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.547419 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3de7f512-f839-4abf-9ffa-e7d70ba8eac2-scripts\") pod \"cloudkitty-proc-0\" (UID: \"3de7f512-f839-4abf-9ffa-e7d70ba8eac2\") " pod="openstack/cloudkitty-proc-0"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.547508 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de7f512-f839-4abf-9ffa-e7d70ba8eac2-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"3de7f512-f839-4abf-9ffa-e7d70ba8eac2\") " pod="openstack/cloudkitty-proc-0"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.547648 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3de7f512-f839-4abf-9ffa-e7d70ba8eac2-config-data\") pod \"cloudkitty-proc-0\" (UID: \"3de7f512-f839-4abf-9ffa-e7d70ba8eac2\") " pod="openstack/cloudkitty-proc-0"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.547730 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79229967-32d3-4ca1-ac03-ab3364d41ca5-dns-swift-storage-0\") pod \"dnsmasq-dns-67bdc55879-l8khz\" (UID: \"79229967-32d3-4ca1-ac03-ab3364d41ca5\") " pod="openstack/dnsmasq-dns-67bdc55879-l8khz"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.562052 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3de7f512-f839-4abf-9ffa-e7d70ba8eac2-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"3de7f512-f839-4abf-9ffa-e7d70ba8eac2\") " pod="openstack/cloudkitty-proc-0"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.562539 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/3de7f512-f839-4abf-9ffa-e7d70ba8eac2-certs\") pod \"cloudkitty-proc-0\" (UID: \"3de7f512-f839-4abf-9ffa-e7d70ba8eac2\") " pod="openstack/cloudkitty-proc-0"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.566747 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3de7f512-f839-4abf-9ffa-e7d70ba8eac2-scripts\") pod \"cloudkitty-proc-0\" (UID: \"3de7f512-f839-4abf-9ffa-e7d70ba8eac2\") " pod="openstack/cloudkitty-proc-0"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.568110 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de7f512-f839-4abf-9ffa-e7d70ba8eac2-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"3de7f512-f839-4abf-9ffa-e7d70ba8eac2\") " pod="openstack/cloudkitty-proc-0"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.588937 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3de7f512-f839-4abf-9ffa-e7d70ba8eac2-config-data\") pod \"cloudkitty-proc-0\" (UID: \"3de7f512-f839-4abf-9ffa-e7d70ba8eac2\") " pod="openstack/cloudkitty-proc-0"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.590084 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.598069 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q68r6\" (UniqueName: \"kubernetes.io/projected/3de7f512-f839-4abf-9ffa-e7d70ba8eac2-kube-api-access-q68r6\") pod \"cloudkitty-proc-0\" (UID: \"3de7f512-f839-4abf-9ffa-e7d70ba8eac2\") " pod="openstack/cloudkitty-proc-0"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.612555 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-api-0"]
Nov 25 15:16:47 crc kubenswrapper[4806]: E1125 15:16:47.613037 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="359539be-7a7d-48d3-8738-83765f897fa4" containerName="glance-log"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.613058 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="359539be-7a7d-48d3-8738-83765f897fa4" containerName="glance-log"
Nov 25 15:16:47 crc kubenswrapper[4806]: E1125 15:16:47.613082 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="359539be-7a7d-48d3-8738-83765f897fa4" containerName="glance-httpd"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.613087 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="359539be-7a7d-48d3-8738-83765f897fa4" containerName="glance-httpd"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.613283 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="359539be-7a7d-48d3-8738-83765f897fa4" containerName="glance-log"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.613335 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="359539be-7a7d-48d3-8738-83765f897fa4" containerName="glance-httpd"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.614477 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.616629 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-api-config-data"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.626838 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"]
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.648935 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/359539be-7a7d-48d3-8738-83765f897fa4-logs\") pod \"359539be-7a7d-48d3-8738-83765f897fa4\" (UID: \"359539be-7a7d-48d3-8738-83765f897fa4\") "
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.649286 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97d90b05-5a54-40f1-981b-562ae2bfc154\") pod \"359539be-7a7d-48d3-8738-83765f897fa4\" (UID: \"359539be-7a7d-48d3-8738-83765f897fa4\") "
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.651532 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/359539be-7a7d-48d3-8738-83765f897fa4-public-tls-certs\") pod \"359539be-7a7d-48d3-8738-83765f897fa4\" (UID: \"359539be-7a7d-48d3-8738-83765f897fa4\") "
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.651690 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/359539be-7a7d-48d3-8738-83765f897fa4-config-data\") pod \"359539be-7a7d-48d3-8738-83765f897fa4\" (UID: \"359539be-7a7d-48d3-8738-83765f897fa4\") "
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.651714 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/359539be-7a7d-48d3-8738-83765f897fa4-combined-ca-bundle\") pod \"359539be-7a7d-48d3-8738-83765f897fa4\" (UID: \"359539be-7a7d-48d3-8738-83765f897fa4\") "
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.651779 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n42kt\" (UniqueName: \"kubernetes.io/projected/359539be-7a7d-48d3-8738-83765f897fa4-kube-api-access-n42kt\") pod \"359539be-7a7d-48d3-8738-83765f897fa4\" (UID: \"359539be-7a7d-48d3-8738-83765f897fa4\") "
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.651838 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/359539be-7a7d-48d3-8738-83765f897fa4-scripts\") pod \"359539be-7a7d-48d3-8738-83765f897fa4\" (UID: \"359539be-7a7d-48d3-8738-83765f897fa4\") "
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.651873 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/359539be-7a7d-48d3-8738-83765f897fa4-httpd-run\") pod \"359539be-7a7d-48d3-8738-83765f897fa4\" (UID: \"359539be-7a7d-48d3-8738-83765f897fa4\") "
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.652143 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79229967-32d3-4ca1-ac03-ab3364d41ca5-dns-swift-storage-0\") pod \"dnsmasq-dns-67bdc55879-l8khz\" (UID: \"79229967-32d3-4ca1-ac03-ab3364d41ca5\") "
pod="openstack/dnsmasq-dns-67bdc55879-l8khz" Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.652256 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79229967-32d3-4ca1-ac03-ab3364d41ca5-ovsdbserver-sb\") pod \"dnsmasq-dns-67bdc55879-l8khz\" (UID: \"79229967-32d3-4ca1-ac03-ab3364d41ca5\") " pod="openstack/dnsmasq-dns-67bdc55879-l8khz" Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.652360 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79229967-32d3-4ca1-ac03-ab3364d41ca5-dns-svc\") pod \"dnsmasq-dns-67bdc55879-l8khz\" (UID: \"79229967-32d3-4ca1-ac03-ab3364d41ca5\") " pod="openstack/dnsmasq-dns-67bdc55879-l8khz" Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.652412 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79229967-32d3-4ca1-ac03-ab3364d41ca5-config\") pod \"dnsmasq-dns-67bdc55879-l8khz\" (UID: \"79229967-32d3-4ca1-ac03-ab3364d41ca5\") " pod="openstack/dnsmasq-dns-67bdc55879-l8khz" Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.652458 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79229967-32d3-4ca1-ac03-ab3364d41ca5-ovsdbserver-nb\") pod \"dnsmasq-dns-67bdc55879-l8khz\" (UID: \"79229967-32d3-4ca1-ac03-ab3364d41ca5\") " pod="openstack/dnsmasq-dns-67bdc55879-l8khz" Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.652539 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnpjk\" (UniqueName: \"kubernetes.io/projected/79229967-32d3-4ca1-ac03-ab3364d41ca5-kube-api-access-wnpjk\") pod \"dnsmasq-dns-67bdc55879-l8khz\" (UID: \"79229967-32d3-4ca1-ac03-ab3364d41ca5\") " pod="openstack/dnsmasq-dns-67bdc55879-l8khz" Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.652654 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/359539be-7a7d-48d3-8738-83765f897fa4-logs" (OuterVolumeSpecName: "logs") pod "359539be-7a7d-48d3-8738-83765f897fa4" (UID: "359539be-7a7d-48d3-8738-83765f897fa4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.656244 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79229967-32d3-4ca1-ac03-ab3364d41ca5-config\") pod \"dnsmasq-dns-67bdc55879-l8khz\" (UID: \"79229967-32d3-4ca1-ac03-ab3364d41ca5\") " pod="openstack/dnsmasq-dns-67bdc55879-l8khz" Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.657519 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79229967-32d3-4ca1-ac03-ab3364d41ca5-dns-svc\") pod \"dnsmasq-dns-67bdc55879-l8khz\" (UID: \"79229967-32d3-4ca1-ac03-ab3364d41ca5\") " pod="openstack/dnsmasq-dns-67bdc55879-l8khz" Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.659120 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/359539be-7a7d-48d3-8738-83765f897fa4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "359539be-7a7d-48d3-8738-83765f897fa4" (UID: "359539be-7a7d-48d3-8738-83765f897fa4"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.661069 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79229967-32d3-4ca1-ac03-ab3364d41ca5-ovsdbserver-nb\") pod \"dnsmasq-dns-67bdc55879-l8khz\" (UID: \"79229967-32d3-4ca1-ac03-ab3364d41ca5\") " pod="openstack/dnsmasq-dns-67bdc55879-l8khz" Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.663847 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79229967-32d3-4ca1-ac03-ab3364d41ca5-ovsdbserver-sb\") pod \"dnsmasq-dns-67bdc55879-l8khz\" (UID: \"79229967-32d3-4ca1-ac03-ab3364d41ca5\") " pod="openstack/dnsmasq-dns-67bdc55879-l8khz" Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.664836 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79229967-32d3-4ca1-ac03-ab3364d41ca5-dns-swift-storage-0\") pod \"dnsmasq-dns-67bdc55879-l8khz\" (UID: \"79229967-32d3-4ca1-ac03-ab3364d41ca5\") " pod="openstack/dnsmasq-dns-67bdc55879-l8khz" Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.675648 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/359539be-7a7d-48d3-8738-83765f897fa4-kube-api-access-n42kt" (OuterVolumeSpecName: "kube-api-access-n42kt") pod "359539be-7a7d-48d3-8738-83765f897fa4" (UID: "359539be-7a7d-48d3-8738-83765f897fa4"). InnerVolumeSpecName "kube-api-access-n42kt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.682530 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/359539be-7a7d-48d3-8738-83765f897fa4-scripts" (OuterVolumeSpecName: "scripts") pod "359539be-7a7d-48d3-8738-83765f897fa4" (UID: "359539be-7a7d-48d3-8738-83765f897fa4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.691114 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnpjk\" (UniqueName: \"kubernetes.io/projected/79229967-32d3-4ca1-ac03-ab3364d41ca5-kube-api-access-wnpjk\") pod \"dnsmasq-dns-67bdc55879-l8khz\" (UID: \"79229967-32d3-4ca1-ac03-ab3364d41ca5\") " pod="openstack/dnsmasq-dns-67bdc55879-l8khz" Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.719092 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97d90b05-5a54-40f1-981b-562ae2bfc154" (OuterVolumeSpecName: "glance") pod "359539be-7a7d-48d3-8738-83765f897fa4" (UID: "359539be-7a7d-48d3-8738-83765f897fa4"). InnerVolumeSpecName "pvc-97d90b05-5a54-40f1-981b-562ae2bfc154". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.739761 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/359539be-7a7d-48d3-8738-83765f897fa4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "359539be-7a7d-48d3-8738-83765f897fa4" (UID: "359539be-7a7d-48d3-8738-83765f897fa4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.755697 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd3549f2-5659-4143-85a4-93b62a3f1834-scripts\") pod \"cloudkitty-api-0\" (UID: \"cd3549f2-5659-4143-85a4-93b62a3f1834\") " pod="openstack/cloudkitty-api-0" Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.755814 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/cd3549f2-5659-4143-85a4-93b62a3f1834-certs\") pod \"cloudkitty-api-0\" (UID: \"cd3549f2-5659-4143-85a4-93b62a3f1834\") " pod="openstack/cloudkitty-api-0" Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.755843 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk5b5\" (UniqueName: \"kubernetes.io/projected/cd3549f2-5659-4143-85a4-93b62a3f1834-kube-api-access-dk5b5\") pod \"cloudkitty-api-0\" (UID: \"cd3549f2-5659-4143-85a4-93b62a3f1834\") " pod="openstack/cloudkitty-api-0" Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.755884 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd3549f2-5659-4143-85a4-93b62a3f1834-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"cd3549f2-5659-4143-85a4-93b62a3f1834\") " pod="openstack/cloudkitty-api-0" Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.755978 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd3549f2-5659-4143-85a4-93b62a3f1834-logs\") pod \"cloudkitty-api-0\" (UID: \"cd3549f2-5659-4143-85a4-93b62a3f1834\") " pod="openstack/cloudkitty-api-0" Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.756022 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd3549f2-5659-4143-85a4-93b62a3f1834-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"cd3549f2-5659-4143-85a4-93b62a3f1834\") " pod="openstack/cloudkitty-api-0" Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.756121 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd3549f2-5659-4143-85a4-93b62a3f1834-config-data\") pod \"cloudkitty-api-0\" (UID: \"cd3549f2-5659-4143-85a4-93b62a3f1834\") " pod="openstack/cloudkitty-api-0" Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.756255 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/359539be-7a7d-48d3-8738-83765f897fa4-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.756273 4806 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/359539be-7a7d-48d3-8738-83765f897fa4-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.756286 4806 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/359539be-7a7d-48d3-8738-83765f897fa4-logs\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.756329 4806 reconciler_common.go:286] 
"operationExecutor.UnmountDevice started for volume \"pvc-97d90b05-5a54-40f1-981b-562ae2bfc154\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97d90b05-5a54-40f1-981b-562ae2bfc154\") on node \"crc\" " Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.756362 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/359539be-7a7d-48d3-8738-83765f897fa4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.756374 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n42kt\" (UniqueName: \"kubernetes.io/projected/359539be-7a7d-48d3-8738-83765f897fa4-kube-api-access-n42kt\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.790910 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/359539be-7a7d-48d3-8738-83765f897fa4-config-data" (OuterVolumeSpecName: "config-data") pod "359539be-7a7d-48d3-8738-83765f897fa4" (UID: "359539be-7a7d-48d3-8738-83765f897fa4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.795166 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/359539be-7a7d-48d3-8738-83765f897fa4-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "359539be-7a7d-48d3-8738-83765f897fa4" (UID: "359539be-7a7d-48d3-8738-83765f897fa4"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.802403 4806 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.802541 4806 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-97d90b05-5a54-40f1-981b-562ae2bfc154" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97d90b05-5a54-40f1-981b-562ae2bfc154") on node "crc"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.858103 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd3549f2-5659-4143-85a4-93b62a3f1834-scripts\") pod \"cloudkitty-api-0\" (UID: \"cd3549f2-5659-4143-85a4-93b62a3f1834\") " pod="openstack/cloudkitty-api-0"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.858203 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/cd3549f2-5659-4143-85a4-93b62a3f1834-certs\") pod \"cloudkitty-api-0\" (UID: \"cd3549f2-5659-4143-85a4-93b62a3f1834\") " pod="openstack/cloudkitty-api-0"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.858235 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk5b5\" (UniqueName: \"kubernetes.io/projected/cd3549f2-5659-4143-85a4-93b62a3f1834-kube-api-access-dk5b5\") pod \"cloudkitty-api-0\" (UID: \"cd3549f2-5659-4143-85a4-93b62a3f1834\") " pod="openstack/cloudkitty-api-0"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.858271 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd3549f2-5659-4143-85a4-93b62a3f1834-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"cd3549f2-5659-4143-85a4-93b62a3f1834\") " pod="openstack/cloudkitty-api-0"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.858365 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd3549f2-5659-4143-85a4-93b62a3f1834-logs\") pod \"cloudkitty-api-0\" (UID: \"cd3549f2-5659-4143-85a4-93b62a3f1834\") " pod="openstack/cloudkitty-api-0"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.858395 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd3549f2-5659-4143-85a4-93b62a3f1834-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"cd3549f2-5659-4143-85a4-93b62a3f1834\") " pod="openstack/cloudkitty-api-0"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.858466 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd3549f2-5659-4143-85a4-93b62a3f1834-config-data\") pod \"cloudkitty-api-0\" (UID: \"cd3549f2-5659-4143-85a4-93b62a3f1834\") " pod="openstack/cloudkitty-api-0"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.858582 4806 reconciler_common.go:293] "Volume detached for volume \"pvc-97d90b05-5a54-40f1-981b-562ae2bfc154\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97d90b05-5a54-40f1-981b-562ae2bfc154\") on node \"crc\" DevicePath \"\""
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.858597 4806 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/359539be-7a7d-48d3-8738-83765f897fa4-public-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.858613 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/359539be-7a7d-48d3-8738-83765f897fa4-config-data\") on node \"crc\" DevicePath \"\""
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.862062 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd3549f2-5659-4143-85a4-93b62a3f1834-logs\") pod \"cloudkitty-api-0\" (UID: \"cd3549f2-5659-4143-85a4-93b62a3f1834\") " pod="openstack/cloudkitty-api-0"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.863468 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/cd3549f2-5659-4143-85a4-93b62a3f1834-certs\") pod \"cloudkitty-api-0\" (UID: \"cd3549f2-5659-4143-85a4-93b62a3f1834\") " pod="openstack/cloudkitty-api-0"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.863904 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd3549f2-5659-4143-85a4-93b62a3f1834-config-data\") pod \"cloudkitty-api-0\" (UID: \"cd3549f2-5659-4143-85a4-93b62a3f1834\") " pod="openstack/cloudkitty-api-0"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.878507 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd3549f2-5659-4143-85a4-93b62a3f1834-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"cd3549f2-5659-4143-85a4-93b62a3f1834\") " pod="openstack/cloudkitty-api-0"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.878539 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd3549f2-5659-4143-85a4-93b62a3f1834-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"cd3549f2-5659-4143-85a4-93b62a3f1834\") " pod="openstack/cloudkitty-api-0"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.879155 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd3549f2-5659-4143-85a4-93b62a3f1834-scripts\") pod \"cloudkitty-api-0\" (UID: \"cd3549f2-5659-4143-85a4-93b62a3f1834\") " pod="openstack/cloudkitty-api-0"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.881777 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk5b5\" (UniqueName: \"kubernetes.io/projected/cd3549f2-5659-4143-85a4-93b62a3f1834-kube-api-access-dk5b5\") pod \"cloudkitty-api-0\" (UID: \"cd3549f2-5659-4143-85a4-93b62a3f1834\") " pod="openstack/cloudkitty-api-0"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.887605 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.922468 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67bdc55879-l8khz"
Nov 25 15:16:47 crc kubenswrapper[4806]: I1125 15:16:47.980194 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0"
Nov 25 15:16:48 crc kubenswrapper[4806]: I1125 15:16:48.062630 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 25 15:16:48 crc kubenswrapper[4806]: I1125 15:16:48.062814 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"359539be-7a7d-48d3-8738-83765f897fa4","Type":"ContainerDied","Data":"6b00c17877626d4d35056df13dad56d31c74d1317e1457240700ddf84cc0ac2c"}
Nov 25 15:16:48 crc kubenswrapper[4806]: I1125 15:16:48.065924 4806 scope.go:117] "RemoveContainer" containerID="4d36056d5a652030f0de6da870de00f5050e9b3e3e536651a9e06fe84ed3ce6f"
Nov 25 15:16:48 crc kubenswrapper[4806]: I1125 15:16:48.099238 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15a33776-cbde-4c55-9a3f-dc2e2cbd7de4","Type":"ContainerStarted","Data":"67688ea7aca6f8e7fab45c4aa700a6ec400aad324b872777f1d0a8e3dbba19d1"}
Nov 25 15:16:48 crc kubenswrapper[4806]: I1125 15:16:48.272912 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a6efd5be-f7be-4981-aa85-710e9a0b3dc7","Type":"ContainerStarted","Data":"fb81d33fdc81039ee12e53497da45cad4a621806638d890742fe81adcb5483d4"}
Nov 25 15:16:48 crc kubenswrapper[4806]: I1125 15:16:48.408570 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 25 15:16:48 crc kubenswrapper[4806]: I1125 15:16:48.422427 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 25 15:16:48 crc kubenswrapper[4806]: I1125 15:16:48.434710 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 25 15:16:48 crc kubenswrapper[4806]: I1125 15:16:48.435223 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="2a56466e-77fd-43df-b5a6-234d90b66334" containerName="glance-log" containerID="cri-o://18234c61d10b2a578b0e7f73ce15bc055485de86ee76ee24627bafff6d25fa84" gracePeriod=30
Nov 25 15:16:48 crc kubenswrapper[4806]: I1125 15:16:48.435761 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="2a56466e-77fd-43df-b5a6-234d90b66334" containerName="glance-httpd" containerID="cri-o://8cf283dc14763b552a799ea513e1f4146ba5c46d2643284e97c7bca12f49f737" gracePeriod=30
Nov 25 15:16:48 crc kubenswrapper[4806]: I1125 15:16:48.446563 4806 scope.go:117] "RemoveContainer" containerID="e98a613094a0823be37da0b1e6741b26dddee757216a105b12e0ee17f23a1186"
Nov 25 15:16:48 crc kubenswrapper[4806]: I1125 15:16:48.478420 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 25 15:16:48 crc kubenswrapper[4806]: I1125 15:16:48.480244 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 25 15:16:48 crc kubenswrapper[4806]: I1125 15:16:48.483523 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/glance-default-internal-api-0" podUID="2a56466e-77fd-43df-b5a6-234d90b66334" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.168:9292/healthcheck\": EOF"
Nov 25 15:16:48 crc kubenswrapper[4806]: I1125 15:16:48.484104 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/glance-default-internal-api-0" podUID="2a56466e-77fd-43df-b5a6-234d90b66334" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.168:9292/healthcheck\": EOF"
Nov 25 15:16:48 crc kubenswrapper[4806]: I1125 15:16:48.486011 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Nov 25 15:16:48 crc kubenswrapper[4806]: I1125 15:16:48.486191 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Nov 25 15:16:48 crc kubenswrapper[4806]: I1125 15:16:48.495117 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 25 15:16:48 crc kubenswrapper[4806]: I1125 15:16:48.515926 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"]
Nov 25 15:16:48 crc kubenswrapper[4806]: I1125 15:16:48.591134 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/125263e2-6d79-4c36-be67-2dd333e3dff5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"125263e2-6d79-4c36-be67-2dd333e3dff5\") " pod="openstack/glance-default-external-api-0"
Nov 25 15:16:48 crc kubenswrapper[4806]: I1125 15:16:48.595522 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4bm4\" (UniqueName: \"kubernetes.io/projected/125263e2-6d79-4c36-be67-2dd333e3dff5-kube-api-access-n4bm4\") pod \"glance-default-external-api-0\" (UID: \"125263e2-6d79-4c36-be67-2dd333e3dff5\") " pod="openstack/glance-default-external-api-0"
Nov 25 15:16:48 crc kubenswrapper[4806]: I1125 15:16:48.595616 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/125263e2-6d79-4c36-be67-2dd333e3dff5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"125263e2-6d79-4c36-be67-2dd333e3dff5\") " pod="openstack/glance-default-external-api-0"
Nov 25 15:16:48 crc kubenswrapper[4806]: I1125 15:16:48.595663 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/125263e2-6d79-4c36-be67-2dd333e3dff5-scripts\") pod \"glance-default-external-api-0\" (UID: \"125263e2-6d79-4c36-be67-2dd333e3dff5\") " pod="openstack/glance-default-external-api-0"
Nov 25 15:16:48 crc kubenswrapper[4806]: I1125 15:16:48.595803 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-97d90b05-5a54-40f1-981b-562ae2bfc154\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97d90b05-5a54-40f1-981b-562ae2bfc154\") pod \"glance-default-external-api-0\" (UID: \"125263e2-6d79-4c36-be67-2dd333e3dff5\") " pod="openstack/glance-default-external-api-0"
Nov 25 15:16:48 crc kubenswrapper[4806]: I1125 15:16:48.595867 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/125263e2-6d79-4c36-be67-2dd333e3dff5-config-data\") pod \"glance-default-external-api-0\" (UID: \"125263e2-6d79-4c36-be67-2dd333e3dff5\") " pod="openstack/glance-default-external-api-0"
Nov 25 15:16:48 crc kubenswrapper[4806]: I1125 15:16:48.595905 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/125263e2-6d79-4c36-be67-2dd333e3dff5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"125263e2-6d79-4c36-be67-2dd333e3dff5\") " pod="openstack/glance-default-external-api-0"
Nov 25 15:16:48 crc kubenswrapper[4806]: I1125 15:16:48.595975 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/125263e2-6d79-4c36-be67-2dd333e3dff5-logs\") pod \"glance-default-external-api-0\" (UID: \"125263e2-6d79-4c36-be67-2dd333e3dff5\") " pod="openstack/glance-default-external-api-0"
Nov 25 15:16:48 crc kubenswrapper[4806]: I1125 15:16:48.697546 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/125263e2-6d79-4c36-be67-2dd333e3dff5-scripts\") pod \"glance-default-external-api-0\" (UID: \"125263e2-6d79-4c36-be67-2dd333e3dff5\") " pod="openstack/glance-default-external-api-0"
Nov 25 15:16:48 crc kubenswrapper[4806]: I1125 15:16:48.697610 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-97d90b05-5a54-40f1-981b-562ae2bfc154\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97d90b05-5a54-40f1-981b-562ae2bfc154\") pod \"glance-default-external-api-0\" (UID: \"125263e2-6d79-4c36-be67-2dd333e3dff5\") " pod="openstack/glance-default-external-api-0"
Nov 25 15:16:48 crc kubenswrapper[4806]: I1125 15:16:48.697639 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/125263e2-6d79-4c36-be67-2dd333e3dff5-config-data\") pod \"glance-default-external-api-0\" (UID: \"125263e2-6d79-4c36-be67-2dd333e3dff5\") " pod="openstack/glance-default-external-api-0"
Nov 25 15:16:48 crc kubenswrapper[4806]: I1125 15:16:48.697666 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/125263e2-6d79-4c36-be67-2dd333e3dff5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"125263e2-6d79-4c36-be67-2dd333e3dff5\") " pod="openstack/glance-default-external-api-0"
Nov 25 15:16:48 crc kubenswrapper[4806]: I1125 15:16:48.697703 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/125263e2-6d79-4c36-be67-2dd333e3dff5-logs\") pod \"glance-default-external-api-0\" (UID: \"125263e2-6d79-4c36-be67-2dd333e3dff5\") " pod="openstack/glance-default-external-api-0"
Nov 25 15:16:48 crc kubenswrapper[4806]: I1125 15:16:48.697755 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/125263e2-6d79-4c36-be67-2dd333e3dff5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"125263e2-6d79-4c36-be67-2dd333e3dff5\") " pod="openstack/glance-default-external-api-0"
Nov 25 15:16:48 crc kubenswrapper[4806]: I1125 15:16:48.697796 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4bm4\" (UniqueName: \"kubernetes.io/projected/125263e2-6d79-4c36-be67-2dd333e3dff5-kube-api-access-n4bm4\") pod \"glance-default-external-api-0\" (UID: \"125263e2-6d79-4c36-be67-2dd333e3dff5\") " pod="openstack/glance-default-external-api-0"
Nov 25 15:16:48 crc kubenswrapper[4806]: I1125 15:16:48.697870 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/125263e2-6d79-4c36-be67-2dd333e3dff5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"125263e2-6d79-4c36-be67-2dd333e3dff5\") " pod="openstack/glance-default-external-api-0"
Nov 25 15:16:48 crc kubenswrapper[4806]: I1125 15:16:48.698821 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/125263e2-6d79-4c36-be67-2dd333e3dff5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"125263e2-6d79-4c36-be67-2dd333e3dff5\") " pod="openstack/glance-default-external-api-0"
Nov 25 15:16:48 crc kubenswrapper[4806]: I1125 15:16:48.701458 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/125263e2-6d79-4c36-be67-2dd333e3dff5-logs\") pod \"glance-default-external-api-0\" (UID: \"125263e2-6d79-4c36-be67-2dd333e3dff5\") " pod="openstack/glance-default-external-api-0"
Nov 25 15:16:48 crc kubenswrapper[4806]: I1125 15:16:48.714591 4806 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Nov 25 15:16:48 crc kubenswrapper[4806]: I1125 15:16:48.714822 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/125263e2-6d79-4c36-be67-2dd333e3dff5-scripts\") pod \"glance-default-external-api-0\" (UID: \"125263e2-6d79-4c36-be67-2dd333e3dff5\") " pod="openstack/glance-default-external-api-0"
Nov 25 15:16:48 crc kubenswrapper[4806]: I1125 15:16:48.714839 4806 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-97d90b05-5a54-40f1-981b-562ae2bfc154\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97d90b05-5a54-40f1-981b-562ae2bfc154\") pod \"glance-default-external-api-0\" (UID: \"125263e2-6d79-4c36-be67-2dd333e3dff5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b0d2c8bd947cd04e33b263736a5e66dc40906178a29bfc8a7e651131070b0df8/globalmount\"" pod="openstack/glance-default-external-api-0"
Nov 25 15:16:48 crc kubenswrapper[4806]: I1125 15:16:48.716057 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/125263e2-6d79-4c36-be67-2dd333e3dff5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"125263e2-6d79-4c36-be67-2dd333e3dff5\") " pod="openstack/glance-default-external-api-0"
Nov 25 15:16:48 crc kubenswrapper[4806]: I1125 15:16:48.716814 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/125263e2-6d79-4c36-be67-2dd333e3dff5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"125263e2-6d79-4c36-be67-2dd333e3dff5\") " pod="openstack/glance-default-external-api-0"
Nov 25 15:16:48 crc kubenswrapper[4806]: I1125 15:16:48.719042 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4bm4\" (UniqueName: \"kubernetes.io/projected/125263e2-6d79-4c36-be67-2dd333e3dff5-kube-api-access-n4bm4\") pod \"glance-default-external-api-0\" (UID: \"125263e2-6d79-4c36-be67-2dd333e3dff5\") " pod="openstack/glance-default-external-api-0"
Nov 25 15:16:48 crc kubenswrapper[4806]: I1125 15:16:48.735985 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-l8khz"]
Nov 25 15:16:48 crc kubenswrapper[4806]: I1125 15:16:48.743682 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/125263e2-6d79-4c36-be67-2dd333e3dff5-config-data\") pod \"glance-default-external-api-0\" (UID: \"125263e2-6d79-4c36-be67-2dd333e3dff5\") " pod="openstack/glance-default-external-api-0"
Nov 25 15:16:48 crc kubenswrapper[4806]: I1125 15:16:48.831069 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-97d90b05-5a54-40f1-981b-562ae2bfc154\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-97d90b05-5a54-40f1-981b-562ae2bfc154\") pod \"glance-default-external-api-0\" (UID: \"125263e2-6d79-4c36-be67-2dd333e3dff5\") " pod="openstack/glance-default-external-api-0"
Nov 25 15:16:48 crc kubenswrapper[4806]: I1125 15:16:48.939933 4806 patch_prober.go:28] interesting pod/machine-config-daemon-kclf8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 15:16:48 crc kubenswrapper[4806]: I1125 15:16:48.939992 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 15:16:48 crc kubenswrapper[4806]: I1125 15:16:48.969916 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"]
Nov 25 15:16:49 crc kubenswrapper[4806]: I1125 15:16:49.117082 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 25 15:16:49 crc kubenswrapper[4806]: I1125 15:16:49.255340 4806 generic.go:334] "Generic (PLEG): container finished" podID="2a56466e-77fd-43df-b5a6-234d90b66334" containerID="18234c61d10b2a578b0e7f73ce15bc055485de86ee76ee24627bafff6d25fa84" exitCode=143
Nov 25 15:16:49 crc kubenswrapper[4806]: I1125 15:16:49.255394 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2a56466e-77fd-43df-b5a6-234d90b66334","Type":"ContainerDied","Data":"18234c61d10b2a578b0e7f73ce15bc055485de86ee76ee24627bafff6d25fa84"}
Nov 25 15:16:49 crc kubenswrapper[4806]: I1125 15:16:49.290229 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a6efd5be-f7be-4981-aa85-710e9a0b3dc7","Type":"ContainerStarted","Data":"3650bbe9cfc059dd3db8f23bdfa6811156b79cfa81c122ee356319c5ac3f8bd9"}
Nov 25 15:16:49 crc kubenswrapper[4806]: I1125 15:16:49.295509 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"cd3549f2-5659-4143-85a4-93b62a3f1834","Type":"ContainerStarted","Data":"78a9f791a2a87ac1f7f4adc8842c724c1e8dbddaaec306b9aec72c4e4b5d5125"}
Nov 25 15:16:49 crc kubenswrapper[4806]: I1125 15:16:49.307739 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Nov 25 15:16:49 crc kubenswrapper[4806]: I1125 15:16:49.312896 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bdc55879-l8khz" event={"ID":"79229967-32d3-4ca1-ac03-ab3364d41ca5","Type":"ContainerStarted","Data":"ff8a3d43c1a143f19b1e7db2b37fba4821051ad393783f6b6adbb30865ec9f78"}
Nov 25 15:16:49 crc kubenswrapper[4806]: I1125 15:16:49.329670 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15a33776-cbde-4c55-9a3f-dc2e2cbd7de4","Type":"ContainerStarted","Data":"2469ac20e56f8ef2adda679c4d8ddc364bd8176d492c0c4228a2bf475688de91"}
Nov 25 15:16:49 crc kubenswrapper[4806]: I1125 15:16:49.333230 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"3de7f512-f839-4abf-9ffa-e7d70ba8eac2","Type":"ContainerStarted","Data":"db8148e96c4d359180db0f393a711a4c4ab5e0aab3783b783561542a346a5db6"}
Nov 25 15:16:49 crc kubenswrapper[4806]: I1125 15:16:49.333980 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=11.333959463 podStartE2EDuration="11.333959463s" podCreationTimestamp="2025-11-25 15:16:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:16:49.314862496 +0000 UTC m=+1441.967004907" watchObservedRunningTime="2025-11-25 15:16:49.333959463 +0000 UTC m=+1441.986101874"
Nov 25 15:16:49 crc kubenswrapper[4806]: I1125 15:16:49.985388 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 25 15:16:50 crc kubenswrapper[4806]: I1125 15:16:50.108301 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="359539be-7a7d-48d3-8738-83765f897fa4" path="/var/lib/kubelet/pods/359539be-7a7d-48d3-8738-83765f897fa4/volumes"
Nov 25 15:16:50 crc kubenswrapper[4806]: I1125 15:16:50.373538 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"cd3549f2-5659-4143-85a4-93b62a3f1834","Type":"ContainerStarted","Data":"4f85c139f51e306816ad2a96a587e6fde78c00ab0dffb50ce4d54590b594f0ee"}
Nov 25 15:16:50 crc kubenswrapper[4806]: I1125 15:16:50.373822 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"cd3549f2-5659-4143-85a4-93b62a3f1834","Type":"ContainerStarted","Data":"a2e293dfbef6b04c8ef25ae3cdeeb6a27e349d406c3d792450f6486a500c203c"}
Nov 25 15:16:50 crc kubenswrapper[4806]: I1125 15:16:50.374004 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-api-0"
Nov 25 15:16:50 crc kubenswrapper[4806]: I1125 15:16:50.379072 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"125263e2-6d79-4c36-be67-2dd333e3dff5","Type":"ContainerStarted","Data":"6d3bf5dce483d387d77db83243225d1fdeb974d167247af14e4ee48abed0b0f5"}
Nov 25 15:16:50 crc kubenswrapper[4806]: I1125 15:16:50.385686 4806 generic.go:334] "Generic (PLEG): container finished" podID="79229967-32d3-4ca1-ac03-ab3364d41ca5" containerID="2583c4457fb0ae133d74533fb9aaae6df4529fce6670924887b15e4734050088" exitCode=0
Nov 25 15:16:50 crc kubenswrapper[4806]: I1125 15:16:50.385966 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bdc55879-l8khz" event={"ID":"79229967-32d3-4ca1-ac03-ab3364d41ca5","Type":"ContainerDied","Data":"2583c4457fb0ae133d74533fb9aaae6df4529fce6670924887b15e4734050088"}
Nov 25 15:16:50 crc kubenswrapper[4806]: I1125 15:16:50.411997 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-api-0" podStartSLOduration=3.411956637 podStartE2EDuration="3.411956637s" podCreationTimestamp="2025-11-25 15:16:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:16:50.39952187 +0000 UTC m=+1443.051664281" watchObservedRunningTime="2025-11-25 15:16:50.411956637 +0000 UTC m=+1443.064099048"
Nov 25 15:16:50 crc kubenswrapper[4806]: I1125 15:16:50.422815 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15a33776-cbde-4c55-9a3f-dc2e2cbd7de4","Type":"ContainerStarted","Data":"dbd1f9a6a26587712585a1410ea494a9edf03ddb006b63afdb9a1cbeec299eb8"}
Nov 25 15:16:50 crc kubenswrapper[4806]: I1125 15:16:50.687293 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"]
Nov 25 15:16:50 crc kubenswrapper[4806]: I1125 15:16:50.983295 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-77qk4" podUID="19d636cf-e82d-48c3-82db-321f0505c5ab" containerName="registry-server" probeResult="failure" output=<
Nov 25 15:16:50 crc kubenswrapper[4806]: timeout: failed to connect service ":50051" within 1s
Nov 25 15:16:50 crc kubenswrapper[4806]: >
Nov 25 15:16:51 crc kubenswrapper[4806]: I1125 15:16:51.438452 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"125263e2-6d79-4c36-be67-2dd333e3dff5","Type":"ContainerStarted","Data":"231a5b5dd3503784949a843392e7e833d53dd453c4fa2d310e162f7dd4b71993"}
Nov 25 15:16:51 crc kubenswrapper[4806]: I1125 15:16:51.440410 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bdc55879-l8khz" event={"ID":"79229967-32d3-4ca1-ac03-ab3364d41ca5","Type":"ContainerStarted","Data":"ca8e378614cc08a95018368575692ddc2ba62111432d44f7b9c5877545aecdc3"}
Nov 25 15:16:51 crc kubenswrapper[4806]: I1125 15:16:51.480963 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67bdc55879-l8khz" podStartSLOduration=4.480940431 podStartE2EDuration="4.480940431s" podCreationTimestamp="2025-11-25 15:16:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:16:51.468797802 +0000 UTC m=+1444.120940223" watchObservedRunningTime="2025-11-25 15:16:51.480940431 +0000 UTC m=+1444.133082842"
Nov 25 15:16:52 crc kubenswrapper[4806]: I1125 15:16:52.454386 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"125263e2-6d79-4c36-be67-2dd333e3dff5","Type":"ContainerStarted","Data":"42b62f05548ecd5d4bf6c5822b6d3775f79d92bf3d175b8836d84f80dd96ed95"}
Nov 25 15:16:52 crc kubenswrapper[4806]: I1125 15:16:52.457887 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15a33776-cbde-4c55-9a3f-dc2e2cbd7de4","Type":"ContainerStarted","Data":"c24cb537bae22f4fdf6eb0488cba3c907629150ede187e72c22858eac7ed18ad"}
Nov 25 15:16:52 crc kubenswrapper[4806]: I1125 15:16:52.458000 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="15a33776-cbde-4c55-9a3f-dc2e2cbd7de4" containerName="ceilometer-central-agent" containerID="cri-o://67688ea7aca6f8e7fab45c4aa700a6ec400aad324b872777f1d0a8e3dbba19d1" gracePeriod=30
Nov 25 15:16:52 crc kubenswrapper[4806]: I1125 15:16:52.458032 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Nov 25 15:16:52 crc kubenswrapper[4806]: I1125 15:16:52.458045 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="15a33776-cbde-4c55-9a3f-dc2e2cbd7de4" containerName="sg-core" containerID="cri-o://dbd1f9a6a26587712585a1410ea494a9edf03ddb006b63afdb9a1cbeec299eb8" gracePeriod=30
Nov 25 15:16:52 crc kubenswrapper[4806]: I1125 15:16:52.458067 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="15a33776-cbde-4c55-9a3f-dc2e2cbd7de4" containerName="proxy-httpd" containerID="cri-o://c24cb537bae22f4fdf6eb0488cba3c907629150ede187e72c22858eac7ed18ad" gracePeriod=30
Nov 25 15:16:52 crc kubenswrapper[4806]: I1125 15:16:52.458084 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="15a33776-cbde-4c55-9a3f-dc2e2cbd7de4" containerName="ceilometer-notification-agent" containerID="cri-o://2469ac20e56f8ef2adda679c4d8ddc364bd8176d492c0c4228a2bf475688de91" gracePeriod=30
Nov 25 15:16:52 crc kubenswrapper[4806]: I1125 15:16:52.459994 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"3de7f512-f839-4abf-9ffa-e7d70ba8eac2","Type":"ContainerStarted","Data":"905d71c1d05052a99a6229a1b8e71d25e32171baa3d1c2c50937c40bdfd49a66"}
Nov 25 15:16:52 crc kubenswrapper[4806]: I1125 15:16:52.460020 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="cd3549f2-5659-4143-85a4-93b62a3f1834" containerName="cloudkitty-api-log" containerID="cri-o://a2e293dfbef6b04c8ef25ae3cdeeb6a27e349d406c3d792450f6486a500c203c" gracePeriod=30
Nov 25 15:16:52 crc kubenswrapper[4806]: I1125 15:16:52.460910 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="cd3549f2-5659-4143-85a4-93b62a3f1834" containerName="cloudkitty-api" containerID="cri-o://4f85c139f51e306816ad2a96a587e6fde78c00ab0dffb50ce4d54590b594f0ee" gracePeriod=30
Nov 25 15:16:52 crc kubenswrapper[4806]: I1125 15:16:52.460994 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67bdc55879-l8khz"
Nov 25 15:16:52 crc kubenswrapper[4806]: I1125 15:16:52.481119 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.481099221 podStartE2EDuration="4.481099221s" podCreationTimestamp="2025-11-25 15:16:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:16:52.475776848 +0000 UTC m=+1445.127919259" watchObservedRunningTime="2025-11-25 15:16:52.481099221 +0000 UTC m=+1445.133241632"
Nov 25 15:16:52 crc kubenswrapper[4806]: I1125 15:16:52.520454 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=9.43499112 podStartE2EDuration="14.520434309s" podCreationTimestamp="2025-11-25 15:16:38 +0000 UTC" firstStartedPulling="2025-11-25 15:16:46.529439574 +0000 UTC m=+1439.181581975" lastFinishedPulling="2025-11-25 15:16:51.614882753 +0000 UTC m=+1444.267025164" observedRunningTime="2025-11-25 15:16:52.513996025 +0000 UTC m=+1445.166138446" watchObservedRunningTime="2025-11-25 15:16:52.520434309 +0000 UTC m=+1445.172576720"
Nov 25 15:16:52 crc kubenswrapper[4806]: I1125 15:16:52.549369 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-proc-0" podStartSLOduration=2.494704154 podStartE2EDuration="5.549348849s" podCreationTimestamp="2025-11-25 15:16:47 +0000 UTC" firstStartedPulling="2025-11-25 15:16:48.560505566 +0000 UTC m=+1441.212647977" lastFinishedPulling="2025-11-25 15:16:51.615150261 +0000 UTC m=+1444.267292672" observedRunningTime="2025-11-25 15:16:52.530836128 +0000 UTC m=+1445.182978549" watchObservedRunningTime="2025-11-25 15:16:52.549348849 +0000 UTC m=+1445.201491260"
Nov 25 15:16:52 crc kubenswrapper[4806]: I1125 15:16:52.569232 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"]
Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.126804 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0"
Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.238414 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd3549f2-5659-4143-85a4-93b62a3f1834-combined-ca-bundle\") pod \"cd3549f2-5659-4143-85a4-93b62a3f1834\" (UID: \"cd3549f2-5659-4143-85a4-93b62a3f1834\") "
Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.238873 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd3549f2-5659-4143-85a4-93b62a3f1834-config-data-custom\") pod \"cd3549f2-5659-4143-85a4-93b62a3f1834\" (UID: \"cd3549f2-5659-4143-85a4-93b62a3f1834\") "
Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.239021 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/cd3549f2-5659-4143-85a4-93b62a3f1834-certs\") pod \"cd3549f2-5659-4143-85a4-93b62a3f1834\" (UID: \"cd3549f2-5659-4143-85a4-93b62a3f1834\") "
Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.242731 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd3549f2-5659-4143-85a4-93b62a3f1834-logs\") pod \"cd3549f2-5659-4143-85a4-93b62a3f1834\" (UID: \"cd3549f2-5659-4143-85a4-93b62a3f1834\") "
Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.242831 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dk5b5\" (UniqueName: \"kubernetes.io/projected/cd3549f2-5659-4143-85a4-93b62a3f1834-kube-api-access-dk5b5\") pod \"cd3549f2-5659-4143-85a4-93b62a3f1834\" (UID: \"cd3549f2-5659-4143-85a4-93b62a3f1834\") "
Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.243024 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd3549f2-5659-4143-85a4-93b62a3f1834-scripts\") pod \"cd3549f2-5659-4143-85a4-93b62a3f1834\" (UID: \"cd3549f2-5659-4143-85a4-93b62a3f1834\") "
Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.243132 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd3549f2-5659-4143-85a4-93b62a3f1834-config-data\") pod \"cd3549f2-5659-4143-85a4-93b62a3f1834\" (UID: \"cd3549f2-5659-4143-85a4-93b62a3f1834\") "
Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.247995 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd3549f2-5659-4143-85a4-93b62a3f1834-logs" (OuterVolumeSpecName: "logs") pod "cd3549f2-5659-4143-85a4-93b62a3f1834" (UID: "cd3549f2-5659-4143-85a4-93b62a3f1834"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.248873 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd3549f2-5659-4143-85a4-93b62a3f1834-certs" (OuterVolumeSpecName: "certs") pod "cd3549f2-5659-4143-85a4-93b62a3f1834" (UID: "cd3549f2-5659-4143-85a4-93b62a3f1834"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.253778 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd3549f2-5659-4143-85a4-93b62a3f1834-kube-api-access-dk5b5" (OuterVolumeSpecName: "kube-api-access-dk5b5") pod "cd3549f2-5659-4143-85a4-93b62a3f1834" (UID: "cd3549f2-5659-4143-85a4-93b62a3f1834"). InnerVolumeSpecName "kube-api-access-dk5b5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.254096 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd3549f2-5659-4143-85a4-93b62a3f1834-scripts" (OuterVolumeSpecName: "scripts") pod "cd3549f2-5659-4143-85a4-93b62a3f1834" (UID: "cd3549f2-5659-4143-85a4-93b62a3f1834"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.306938 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd3549f2-5659-4143-85a4-93b62a3f1834-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd3549f2-5659-4143-85a4-93b62a3f1834" (UID: "cd3549f2-5659-4143-85a4-93b62a3f1834"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.306948 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd3549f2-5659-4143-85a4-93b62a3f1834-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "cd3549f2-5659-4143-85a4-93b62a3f1834" (UID: "cd3549f2-5659-4143-85a4-93b62a3f1834"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.307749 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd3549f2-5659-4143-85a4-93b62a3f1834-config-data" (OuterVolumeSpecName: "config-data") pod "cd3549f2-5659-4143-85a4-93b62a3f1834" (UID: "cd3549f2-5659-4143-85a4-93b62a3f1834"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.345975 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd3549f2-5659-4143-85a4-93b62a3f1834-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.346340 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd3549f2-5659-4143-85a4-93b62a3f1834-config-data\") on node \"crc\" DevicePath \"\""
Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.346356 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd3549f2-5659-4143-85a4-93b62a3f1834-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.346370 4806 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cd3549f2-5659-4143-85a4-93b62a3f1834-config-data-custom\") on node \"crc\" DevicePath \"\""
Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.346405 4806 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/cd3549f2-5659-4143-85a4-93b62a3f1834-certs\") on node \"crc\" DevicePath \"\""
Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.346418 4806 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cd3549f2-5659-4143-85a4-93b62a3f1834-logs\") on node \"crc\" DevicePath \"\""
Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.346429 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dk5b5\" (UniqueName: \"kubernetes.io/projected/cd3549f2-5659-4143-85a4-93b62a3f1834-kube-api-access-dk5b5\") on node \"crc\" DevicePath \"\""
Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.479455 4806 generic.go:334] "Generic (PLEG): container finished" podID="cd3549f2-5659-4143-85a4-93b62a3f1834" containerID="4f85c139f51e306816ad2a96a587e6fde78c00ab0dffb50ce4d54590b594f0ee" exitCode=0
Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.479495 4806 generic.go:334] "Generic (PLEG): container finished" podID="cd3549f2-5659-4143-85a4-93b62a3f1834" containerID="a2e293dfbef6b04c8ef25ae3cdeeb6a27e349d406c3d792450f6486a500c203c" exitCode=143
Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.479552 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0"
Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.479590 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"cd3549f2-5659-4143-85a4-93b62a3f1834","Type":"ContainerDied","Data":"4f85c139f51e306816ad2a96a587e6fde78c00ab0dffb50ce4d54590b594f0ee"}
Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.479640 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"cd3549f2-5659-4143-85a4-93b62a3f1834","Type":"ContainerDied","Data":"a2e293dfbef6b04c8ef25ae3cdeeb6a27e349d406c3d792450f6486a500c203c"}
Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.479659 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"cd3549f2-5659-4143-85a4-93b62a3f1834","Type":"ContainerDied","Data":"78a9f791a2a87ac1f7f4adc8842c724c1e8dbddaaec306b9aec72c4e4b5d5125"}
Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.479679 4806 scope.go:117] "RemoveContainer" containerID="4f85c139f51e306816ad2a96a587e6fde78c00ab0dffb50ce4d54590b594f0ee"
Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.491754 4806 generic.go:334] "Generic (PLEG): container finished" podID="15a33776-cbde-4c55-9a3f-dc2e2cbd7de4" containerID="c24cb537bae22f4fdf6eb0488cba3c907629150ede187e72c22858eac7ed18ad" exitCode=0
Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.491801 4806 generic.go:334] "Generic (PLEG): container finished" podID="15a33776-cbde-4c55-9a3f-dc2e2cbd7de4" containerID="dbd1f9a6a26587712585a1410ea494a9edf03ddb006b63afdb9a1cbeec299eb8" exitCode=2
Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.491809 4806 generic.go:334] "Generic (PLEG): container finished" podID="15a33776-cbde-4c55-9a3f-dc2e2cbd7de4" containerID="2469ac20e56f8ef2adda679c4d8ddc364bd8176d492c0c4228a2bf475688de91" exitCode=0
Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.492812 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15a33776-cbde-4c55-9a3f-dc2e2cbd7de4","Type":"ContainerDied","Data":"c24cb537bae22f4fdf6eb0488cba3c907629150ede187e72c22858eac7ed18ad"}
Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.492840 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15a33776-cbde-4c55-9a3f-dc2e2cbd7de4","Type":"ContainerDied","Data":"dbd1f9a6a26587712585a1410ea494a9edf03ddb006b63afdb9a1cbeec299eb8"}
Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.492850 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15a33776-cbde-4c55-9a3f-dc2e2cbd7de4","Type":"ContainerDied","Data":"2469ac20e56f8ef2adda679c4d8ddc364bd8176d492c0c4228a2bf475688de91"}
Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.518431 4806 scope.go:117] "RemoveContainer" containerID="a2e293dfbef6b04c8ef25ae3cdeeb6a27e349d406c3d792450f6486a500c203c"
Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.526028 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"]
Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.538048 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-api-0"]
Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.550821 4806 scope.go:117] "RemoveContainer" containerID="4f85c139f51e306816ad2a96a587e6fde78c00ab0dffb50ce4d54590b594f0ee"
Nov 25 15:16:53 crc kubenswrapper[4806]: E1125 15:16:53.551585 4806 log.go:32] "ContainerStatus
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f85c139f51e306816ad2a96a587e6fde78c00ab0dffb50ce4d54590b594f0ee\": container with ID starting with 4f85c139f51e306816ad2a96a587e6fde78c00ab0dffb50ce4d54590b594f0ee not found: ID does not exist" containerID="4f85c139f51e306816ad2a96a587e6fde78c00ab0dffb50ce4d54590b594f0ee" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.551771 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f85c139f51e306816ad2a96a587e6fde78c00ab0dffb50ce4d54590b594f0ee"} err="failed to get container status \"4f85c139f51e306816ad2a96a587e6fde78c00ab0dffb50ce4d54590b594f0ee\": rpc error: code = NotFound desc = could not find container \"4f85c139f51e306816ad2a96a587e6fde78c00ab0dffb50ce4d54590b594f0ee\": container with ID starting with 4f85c139f51e306816ad2a96a587e6fde78c00ab0dffb50ce4d54590b594f0ee not found: ID does not exist" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.551877 4806 scope.go:117] "RemoveContainer" containerID="a2e293dfbef6b04c8ef25ae3cdeeb6a27e349d406c3d792450f6486a500c203c" Nov 25 15:16:53 crc kubenswrapper[4806]: E1125 15:16:53.555158 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2e293dfbef6b04c8ef25ae3cdeeb6a27e349d406c3d792450f6486a500c203c\": container with ID starting with a2e293dfbef6b04c8ef25ae3cdeeb6a27e349d406c3d792450f6486a500c203c not found: ID does not exist" containerID="a2e293dfbef6b04c8ef25ae3cdeeb6a27e349d406c3d792450f6486a500c203c" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.555216 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2e293dfbef6b04c8ef25ae3cdeeb6a27e349d406c3d792450f6486a500c203c"} err="failed to get container status \"a2e293dfbef6b04c8ef25ae3cdeeb6a27e349d406c3d792450f6486a500c203c\": rpc error: code = NotFound desc = could not find container \"a2e293dfbef6b04c8ef25ae3cdeeb6a27e349d406c3d792450f6486a500c203c\": container with ID starting with a2e293dfbef6b04c8ef25ae3cdeeb6a27e349d406c3d792450f6486a500c203c not found: ID does not exist" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.555263 4806 scope.go:117] "RemoveContainer" containerID="4f85c139f51e306816ad2a96a587e6fde78c00ab0dffb50ce4d54590b594f0ee" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.556820 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f85c139f51e306816ad2a96a587e6fde78c00ab0dffb50ce4d54590b594f0ee"} err="failed to get container status \"4f85c139f51e306816ad2a96a587e6fde78c00ab0dffb50ce4d54590b594f0ee\": rpc error: code = NotFound desc = could not find container \"4f85c139f51e306816ad2a96a587e6fde78c00ab0dffb50ce4d54590b594f0ee\": container with ID starting with 4f85c139f51e306816ad2a96a587e6fde78c00ab0dffb50ce4d54590b594f0ee not found: ID does not exist" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.556872 4806 scope.go:117] "RemoveContainer" containerID="a2e293dfbef6b04c8ef25ae3cdeeb6a27e349d406c3d792450f6486a500c203c" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.559506 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2e293dfbef6b04c8ef25ae3cdeeb6a27e349d406c3d792450f6486a500c203c"} err="failed to get container status \"a2e293dfbef6b04c8ef25ae3cdeeb6a27e349d406c3d792450f6486a500c203c\": rpc error: code = NotFound desc = could not find container 
\"a2e293dfbef6b04c8ef25ae3cdeeb6a27e349d406c3d792450f6486a500c203c\": container with ID starting with a2e293dfbef6b04c8ef25ae3cdeeb6a27e349d406c3d792450f6486a500c203c not found: ID does not exist" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.570208 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-api-0"] Nov 25 15:16:53 crc kubenswrapper[4806]: E1125 15:16:53.570661 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd3549f2-5659-4143-85a4-93b62a3f1834" containerName="cloudkitty-api-log" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.570678 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd3549f2-5659-4143-85a4-93b62a3f1834" containerName="cloudkitty-api-log" Nov 25 15:16:53 crc kubenswrapper[4806]: E1125 15:16:53.570729 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd3549f2-5659-4143-85a4-93b62a3f1834" containerName="cloudkitty-api" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.570736 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd3549f2-5659-4143-85a4-93b62a3f1834" containerName="cloudkitty-api" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.570906 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd3549f2-5659-4143-85a4-93b62a3f1834" containerName="cloudkitty-api-log" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.570921 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd3549f2-5659-4143-85a4-93b62a3f1834" containerName="cloudkitty-api" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.571993 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.573991 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-internal-svc" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.575727 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-api-config-data" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.576547 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-public-svc" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.581743 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.629985 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-dd45f"] Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.631517 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dd45f" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.646462 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-dd45f"] Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.737408 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-d57xj"] Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.738849 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-d57xj" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.762271 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"9b9283d4-b401-4efa-b2f0-d14c8b44cf21\") " pod="openstack/cloudkitty-api-0" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.762493 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-config-data\") pod \"cloudkitty-api-0\" (UID: \"9b9283d4-b401-4efa-b2f0-d14c8b44cf21\") " pod="openstack/cloudkitty-api-0" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.762585 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"9b9283d4-b401-4efa-b2f0-d14c8b44cf21\") " pod="openstack/cloudkitty-api-0" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.762651 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-certs\") pod \"cloudkitty-api-0\" (UID: \"9b9283d4-b401-4efa-b2f0-d14c8b44cf21\") " pod="openstack/cloudkitty-api-0" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.762770 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"9b9283d4-b401-4efa-b2f0-d14c8b44cf21\") " pod="openstack/cloudkitty-api-0" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.762907 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwcnv\" (UniqueName: \"kubernetes.io/projected/c7b2aa87-f218-472e-a8e8-7fe0eaf3b7cd-kube-api-access-fwcnv\") pod \"nova-api-db-create-dd45f\" (UID: \"c7b2aa87-f218-472e-a8e8-7fe0eaf3b7cd\") " pod="openstack/nova-api-db-create-dd45f" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.762982 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-scripts\") pod \"cloudkitty-api-0\" (UID: \"9b9283d4-b401-4efa-b2f0-d14c8b44cf21\") " pod="openstack/cloudkitty-api-0" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.763055 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-logs\") pod \"cloudkitty-api-0\" (UID: \"9b9283d4-b401-4efa-b2f0-d14c8b44cf21\") " pod="openstack/cloudkitty-api-0" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.763408 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj8gn\" (UniqueName: \"kubernetes.io/projected/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-kube-api-access-qj8gn\") pod \"cloudkitty-api-0\" (UID: \"9b9283d4-b401-4efa-b2f0-d14c8b44cf21\") " pod="openstack/cloudkitty-api-0" Nov 25 15:16:53 crc 
kubenswrapper[4806]: I1125 15:16:53.763557 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"9b9283d4-b401-4efa-b2f0-d14c8b44cf21\") " pod="openstack/cloudkitty-api-0" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.763657 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7b2aa87-f218-472e-a8e8-7fe0eaf3b7cd-operator-scripts\") pod \"nova-api-db-create-dd45f\" (UID: \"c7b2aa87-f218-472e-a8e8-7fe0eaf3b7cd\") " pod="openstack/nova-api-db-create-dd45f" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.770371 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-d57xj"] Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.818212 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-a493-account-create-cnxrz"] Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.821415 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a493-account-create-cnxrz" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.823860 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.831881 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-a493-account-create-cnxrz"] Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.865544 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"9b9283d4-b401-4efa-b2f0-d14c8b44cf21\") " pod="openstack/cloudkitty-api-0" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.865661 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfvbw\" (UniqueName: \"kubernetes.io/projected/325b6686-f8e5-4ba8-b274-7e3508888807-kube-api-access-kfvbw\") pod \"nova-cell0-db-create-d57xj\" (UID: \"325b6686-f8e5-4ba8-b274-7e3508888807\") " pod="openstack/nova-cell0-db-create-d57xj" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.865698 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwcnv\" (UniqueName: \"kubernetes.io/projected/c7b2aa87-f218-472e-a8e8-7fe0eaf3b7cd-kube-api-access-fwcnv\") pod \"nova-api-db-create-dd45f\" (UID: \"c7b2aa87-f218-472e-a8e8-7fe0eaf3b7cd\") " pod="openstack/nova-api-db-create-dd45f" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.865736 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-scripts\") pod \"cloudkitty-api-0\" (UID: \"9b9283d4-b401-4efa-b2f0-d14c8b44cf21\") " pod="openstack/cloudkitty-api-0" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.865763 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-logs\") pod \"cloudkitty-api-0\" (UID: \"9b9283d4-b401-4efa-b2f0-d14c8b44cf21\") " pod="openstack/cloudkitty-api-0" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 
15:16:53.865809 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qj8gn\" (UniqueName: \"kubernetes.io/projected/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-kube-api-access-qj8gn\") pod \"cloudkitty-api-0\" (UID: \"9b9283d4-b401-4efa-b2f0-d14c8b44cf21\") " pod="openstack/cloudkitty-api-0" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.865859 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"9b9283d4-b401-4efa-b2f0-d14c8b44cf21\") " pod="openstack/cloudkitty-api-0" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.865881 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7b2aa87-f218-472e-a8e8-7fe0eaf3b7cd-operator-scripts\") pod \"nova-api-db-create-dd45f\" (UID: \"c7b2aa87-f218-472e-a8e8-7fe0eaf3b7cd\") " pod="openstack/nova-api-db-create-dd45f" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.865926 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"9b9283d4-b401-4efa-b2f0-d14c8b44cf21\") " pod="openstack/cloudkitty-api-0" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.865953 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/325b6686-f8e5-4ba8-b274-7e3508888807-operator-scripts\") pod \"nova-cell0-db-create-d57xj\" (UID: \"325b6686-f8e5-4ba8-b274-7e3508888807\") " pod="openstack/nova-cell0-db-create-d57xj" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.865977 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-config-data\") pod \"cloudkitty-api-0\" (UID: \"9b9283d4-b401-4efa-b2f0-d14c8b44cf21\") " pod="openstack/cloudkitty-api-0" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.865999 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"9b9283d4-b401-4efa-b2f0-d14c8b44cf21\") " pod="openstack/cloudkitty-api-0" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.866020 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-certs\") pod \"cloudkitty-api-0\" (UID: \"9b9283d4-b401-4efa-b2f0-d14c8b44cf21\") " pod="openstack/cloudkitty-api-0" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.867163 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-logs\") pod \"cloudkitty-api-0\" (UID: \"9b9283d4-b401-4efa-b2f0-d14c8b44cf21\") " pod="openstack/cloudkitty-api-0" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.867767 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7b2aa87-f218-472e-a8e8-7fe0eaf3b7cd-operator-scripts\") pod 
\"nova-api-db-create-dd45f\" (UID: \"c7b2aa87-f218-472e-a8e8-7fe0eaf3b7cd\") " pod="openstack/nova-api-db-create-dd45f" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.870164 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"9b9283d4-b401-4efa-b2f0-d14c8b44cf21\") " pod="openstack/cloudkitty-api-0" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.873771 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-scripts\") pod \"cloudkitty-api-0\" (UID: \"9b9283d4-b401-4efa-b2f0-d14c8b44cf21\") " pod="openstack/cloudkitty-api-0" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.891120 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"9b9283d4-b401-4efa-b2f0-d14c8b44cf21\") " pod="openstack/cloudkitty-api-0" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.892672 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"9b9283d4-b401-4efa-b2f0-d14c8b44cf21\") " pod="openstack/cloudkitty-api-0" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.901878 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"9b9283d4-b401-4efa-b2f0-d14c8b44cf21\") " pod="openstack/cloudkitty-api-0" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.906166 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj8gn\" (UniqueName: \"kubernetes.io/projected/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-kube-api-access-qj8gn\") pod \"cloudkitty-api-0\" (UID: \"9b9283d4-b401-4efa-b2f0-d14c8b44cf21\") " pod="openstack/cloudkitty-api-0" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.909055 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwcnv\" (UniqueName: \"kubernetes.io/projected/c7b2aa87-f218-472e-a8e8-7fe0eaf3b7cd-kube-api-access-fwcnv\") pod \"nova-api-db-create-dd45f\" (UID: \"c7b2aa87-f218-472e-a8e8-7fe0eaf3b7cd\") " pod="openstack/nova-api-db-create-dd45f" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.909958 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-config-data\") pod \"cloudkitty-api-0\" (UID: \"9b9283d4-b401-4efa-b2f0-d14c8b44cf21\") " pod="openstack/cloudkitty-api-0" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.913205 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-certs\") pod \"cloudkitty-api-0\" (UID: \"9b9283d4-b401-4efa-b2f0-d14c8b44cf21\") " pod="openstack/cloudkitty-api-0" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.950256 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-dd45f" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.957666 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-t9tkg"] Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.967856 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-t9tkg" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.970447 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zctw\" (UniqueName: \"kubernetes.io/projected/7defc7dc-b7b6-4302-82ed-15edce4862b3-kube-api-access-6zctw\") pod \"nova-api-a493-account-create-cnxrz\" (UID: \"7defc7dc-b7b6-4302-82ed-15edce4862b3\") " pod="openstack/nova-api-a493-account-create-cnxrz" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.970989 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7defc7dc-b7b6-4302-82ed-15edce4862b3-operator-scripts\") pod \"nova-api-a493-account-create-cnxrz\" (UID: \"7defc7dc-b7b6-4302-82ed-15edce4862b3\") " pod="openstack/nova-api-a493-account-create-cnxrz" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.971393 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfvbw\" (UniqueName: \"kubernetes.io/projected/325b6686-f8e5-4ba8-b274-7e3508888807-kube-api-access-kfvbw\") pod \"nova-cell0-db-create-d57xj\" (UID: \"325b6686-f8e5-4ba8-b274-7e3508888807\") " pod="openstack/nova-cell0-db-create-d57xj" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.977153 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/325b6686-f8e5-4ba8-b274-7e3508888807-operator-scripts\") pod \"nova-cell0-db-create-d57xj\" (UID: \"325b6686-f8e5-4ba8-b274-7e3508888807\") " pod="openstack/nova-cell0-db-create-d57xj" Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.977832 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-t9tkg"] Nov 25 15:16:53 crc kubenswrapper[4806]: I1125 15:16:53.978258 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/325b6686-f8e5-4ba8-b274-7e3508888807-operator-scripts\") pod \"nova-cell0-db-create-d57xj\" (UID: \"325b6686-f8e5-4ba8-b274-7e3508888807\") " pod="openstack/nova-cell0-db-create-d57xj" Nov 25 15:16:54 crc kubenswrapper[4806]: I1125 15:16:54.004924 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfvbw\" (UniqueName: \"kubernetes.io/projected/325b6686-f8e5-4ba8-b274-7e3508888807-kube-api-access-kfvbw\") pod \"nova-cell0-db-create-d57xj\" (UID: \"325b6686-f8e5-4ba8-b274-7e3508888807\") " pod="openstack/nova-cell0-db-create-d57xj" Nov 25 15:16:54 crc kubenswrapper[4806]: I1125 15:16:54.045372 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-e9d6-account-create-f69l5"] Nov 25 15:16:54 crc kubenswrapper[4806]: I1125 15:16:54.047030 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-e9d6-account-create-f69l5" Nov 25 15:16:54 crc kubenswrapper[4806]: I1125 15:16:54.052686 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 25 15:16:54 crc kubenswrapper[4806]: I1125 15:16:54.063699 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-d57xj" Nov 25 15:16:54 crc kubenswrapper[4806]: I1125 15:16:54.071234 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-e9d6-account-create-f69l5"] Nov 25 15:16:54 crc kubenswrapper[4806]: I1125 15:16:54.112049 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m75gn\" (UniqueName: \"kubernetes.io/projected/fd64b415-9694-483d-b17d-aceffd50763a-kube-api-access-m75gn\") pod \"nova-cell1-db-create-t9tkg\" (UID: \"fd64b415-9694-483d-b17d-aceffd50763a\") " pod="openstack/nova-cell1-db-create-t9tkg" Nov 25 15:16:54 crc kubenswrapper[4806]: I1125 15:16:54.112175 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd64b415-9694-483d-b17d-aceffd50763a-operator-scripts\") pod \"nova-cell1-db-create-t9tkg\" (UID: \"fd64b415-9694-483d-b17d-aceffd50763a\") " pod="openstack/nova-cell1-db-create-t9tkg" Nov 25 15:16:54 crc kubenswrapper[4806]: I1125 15:16:54.112246 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zctw\" (UniqueName: \"kubernetes.io/projected/7defc7dc-b7b6-4302-82ed-15edce4862b3-kube-api-access-6zctw\") pod \"nova-api-a493-account-create-cnxrz\" (UID: \"7defc7dc-b7b6-4302-82ed-15edce4862b3\") " pod="openstack/nova-api-a493-account-create-cnxrz" Nov 25 15:16:54 crc kubenswrapper[4806]: I1125 15:16:54.118102 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9w97\" (UniqueName: \"kubernetes.io/projected/c6b52df6-253b-4082-8e20-dc729af9ce15-kube-api-access-l9w97\") pod \"nova-cell0-e9d6-account-create-f69l5\" (UID: \"c6b52df6-253b-4082-8e20-dc729af9ce15\") " pod="openstack/nova-cell0-e9d6-account-create-f69l5" Nov 25 15:16:54 crc kubenswrapper[4806]: I1125 15:16:54.118154 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7defc7dc-b7b6-4302-82ed-15edce4862b3-operator-scripts\") pod \"nova-api-a493-account-create-cnxrz\" (UID: \"7defc7dc-b7b6-4302-82ed-15edce4862b3\") " pod="openstack/nova-api-a493-account-create-cnxrz" Nov 25 15:16:54 crc kubenswrapper[4806]: I1125 15:16:54.118182 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6b52df6-253b-4082-8e20-dc729af9ce15-operator-scripts\") pod \"nova-cell0-e9d6-account-create-f69l5\" (UID: \"c6b52df6-253b-4082-8e20-dc729af9ce15\") " pod="openstack/nova-cell0-e9d6-account-create-f69l5" Nov 25 15:16:54 crc kubenswrapper[4806]: I1125 15:16:54.130746 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7defc7dc-b7b6-4302-82ed-15edce4862b3-operator-scripts\") pod \"nova-api-a493-account-create-cnxrz\" (UID: \"7defc7dc-b7b6-4302-82ed-15edce4862b3\") " pod="openstack/nova-api-a493-account-create-cnxrz" Nov 25 15:16:54 crc kubenswrapper[4806]: I1125 
15:16:54.162711 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd3549f2-5659-4143-85a4-93b62a3f1834" path="/var/lib/kubelet/pods/cd3549f2-5659-4143-85a4-93b62a3f1834/volumes" Nov 25 15:16:54 crc kubenswrapper[4806]: I1125 15:16:54.166668 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zctw\" (UniqueName: \"kubernetes.io/projected/7defc7dc-b7b6-4302-82ed-15edce4862b3-kube-api-access-6zctw\") pod \"nova-api-a493-account-create-cnxrz\" (UID: \"7defc7dc-b7b6-4302-82ed-15edce4862b3\") " pod="openstack/nova-api-a493-account-create-cnxrz" Nov 25 15:16:54 crc kubenswrapper[4806]: I1125 15:16:54.197059 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Nov 25 15:16:54 crc kubenswrapper[4806]: I1125 15:16:54.231788 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9w97\" (UniqueName: \"kubernetes.io/projected/c6b52df6-253b-4082-8e20-dc729af9ce15-kube-api-access-l9w97\") pod \"nova-cell0-e9d6-account-create-f69l5\" (UID: \"c6b52df6-253b-4082-8e20-dc729af9ce15\") " pod="openstack/nova-cell0-e9d6-account-create-f69l5" Nov 25 15:16:54 crc kubenswrapper[4806]: I1125 15:16:54.231860 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6b52df6-253b-4082-8e20-dc729af9ce15-operator-scripts\") pod \"nova-cell0-e9d6-account-create-f69l5\" (UID: \"c6b52df6-253b-4082-8e20-dc729af9ce15\") " pod="openstack/nova-cell0-e9d6-account-create-f69l5" Nov 25 15:16:54 crc kubenswrapper[4806]: I1125 15:16:54.231924 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m75gn\" (UniqueName: \"kubernetes.io/projected/fd64b415-9694-483d-b17d-aceffd50763a-kube-api-access-m75gn\") pod \"nova-cell1-db-create-t9tkg\" (UID: \"fd64b415-9694-483d-b17d-aceffd50763a\") " pod="openstack/nova-cell1-db-create-t9tkg" Nov 25 15:16:54 crc kubenswrapper[4806]: I1125 15:16:54.231989 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd64b415-9694-483d-b17d-aceffd50763a-operator-scripts\") pod \"nova-cell1-db-create-t9tkg\" (UID: \"fd64b415-9694-483d-b17d-aceffd50763a\") " pod="openstack/nova-cell1-db-create-t9tkg" Nov 25 15:16:54 crc kubenswrapper[4806]: I1125 15:16:54.233089 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6b52df6-253b-4082-8e20-dc729af9ce15-operator-scripts\") pod \"nova-cell0-e9d6-account-create-f69l5\" (UID: \"c6b52df6-253b-4082-8e20-dc729af9ce15\") " pod="openstack/nova-cell0-e9d6-account-create-f69l5" Nov 25 15:16:54 crc kubenswrapper[4806]: I1125 15:16:54.233172 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd64b415-9694-483d-b17d-aceffd50763a-operator-scripts\") pod \"nova-cell1-db-create-t9tkg\" (UID: \"fd64b415-9694-483d-b17d-aceffd50763a\") " pod="openstack/nova-cell1-db-create-t9tkg" Nov 25 15:16:54 crc kubenswrapper[4806]: I1125 15:16:54.267827 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m75gn\" (UniqueName: \"kubernetes.io/projected/fd64b415-9694-483d-b17d-aceffd50763a-kube-api-access-m75gn\") pod \"nova-cell1-db-create-t9tkg\" (UID: \"fd64b415-9694-483d-b17d-aceffd50763a\") " 
pod="openstack/nova-cell1-db-create-t9tkg" Nov 25 15:16:54 crc kubenswrapper[4806]: I1125 15:16:54.276792 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9w97\" (UniqueName: \"kubernetes.io/projected/c6b52df6-253b-4082-8e20-dc729af9ce15-kube-api-access-l9w97\") pod \"nova-cell0-e9d6-account-create-f69l5\" (UID: \"c6b52df6-253b-4082-8e20-dc729af9ce15\") " pod="openstack/nova-cell0-e9d6-account-create-f69l5" Nov 25 15:16:54 crc kubenswrapper[4806]: I1125 15:16:54.303380 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-26a0-account-create-vlfqj"] Nov 25 15:16:54 crc kubenswrapper[4806]: I1125 15:16:54.305037 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-26a0-account-create-vlfqj" Nov 25 15:16:54 crc kubenswrapper[4806]: I1125 15:16:54.307352 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 25 15:16:54 crc kubenswrapper[4806]: I1125 15:16:54.328362 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-26a0-account-create-vlfqj"] Nov 25 15:16:54 crc kubenswrapper[4806]: I1125 15:16:54.334747 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vm4n\" (UniqueName: \"kubernetes.io/projected/4e92cdcb-b78b-47cb-ba65-9167485d9795-kube-api-access-5vm4n\") pod \"nova-cell1-26a0-account-create-vlfqj\" (UID: \"4e92cdcb-b78b-47cb-ba65-9167485d9795\") " pod="openstack/nova-cell1-26a0-account-create-vlfqj" Nov 25 15:16:54 crc kubenswrapper[4806]: I1125 15:16:54.334910 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e92cdcb-b78b-47cb-ba65-9167485d9795-operator-scripts\") pod \"nova-cell1-26a0-account-create-vlfqj\" (UID: \"4e92cdcb-b78b-47cb-ba65-9167485d9795\") " pod="openstack/nova-cell1-26a0-account-create-vlfqj" Nov 25 15:16:54 crc kubenswrapper[4806]: I1125 15:16:54.414339 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-t9tkg" Nov 25 15:16:54 crc kubenswrapper[4806]: I1125 15:16:54.449967 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-a493-account-create-cnxrz" Nov 25 15:16:54 crc kubenswrapper[4806]: I1125 15:16:54.451616 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e92cdcb-b78b-47cb-ba65-9167485d9795-operator-scripts\") pod \"nova-cell1-26a0-account-create-vlfqj\" (UID: \"4e92cdcb-b78b-47cb-ba65-9167485d9795\") " pod="openstack/nova-cell1-26a0-account-create-vlfqj" Nov 25 15:16:54 crc kubenswrapper[4806]: I1125 15:16:54.451728 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vm4n\" (UniqueName: \"kubernetes.io/projected/4e92cdcb-b78b-47cb-ba65-9167485d9795-kube-api-access-5vm4n\") pod \"nova-cell1-26a0-account-create-vlfqj\" (UID: \"4e92cdcb-b78b-47cb-ba65-9167485d9795\") " pod="openstack/nova-cell1-26a0-account-create-vlfqj" Nov 25 15:16:54 crc kubenswrapper[4806]: I1125 15:16:54.452603 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e92cdcb-b78b-47cb-ba65-9167485d9795-operator-scripts\") pod \"nova-cell1-26a0-account-create-vlfqj\" (UID: \"4e92cdcb-b78b-47cb-ba65-9167485d9795\") " pod="openstack/nova-cell1-26a0-account-create-vlfqj" Nov 25 15:16:54 crc kubenswrapper[4806]: I1125 15:16:54.466554 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-e9d6-account-create-f69l5" Nov 25 15:16:54 crc kubenswrapper[4806]: I1125 15:16:54.489330 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vm4n\" (UniqueName: \"kubernetes.io/projected/4e92cdcb-b78b-47cb-ba65-9167485d9795-kube-api-access-5vm4n\") pod \"nova-cell1-26a0-account-create-vlfqj\" (UID: \"4e92cdcb-b78b-47cb-ba65-9167485d9795\") " pod="openstack/nova-cell1-26a0-account-create-vlfqj" Nov 25 15:16:54 crc kubenswrapper[4806]: I1125 15:16:54.580965 4806 generic.go:334] "Generic (PLEG): container finished" podID="2a56466e-77fd-43df-b5a6-234d90b66334" containerID="8cf283dc14763b552a799ea513e1f4146ba5c46d2643284e97c7bca12f49f737" exitCode=0 Nov 25 15:16:54 crc kubenswrapper[4806]: I1125 15:16:54.581062 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2a56466e-77fd-43df-b5a6-234d90b66334","Type":"ContainerDied","Data":"8cf283dc14763b552a799ea513e1f4146ba5c46d2643284e97c7bca12f49f737"} Nov 25 15:16:54 crc kubenswrapper[4806]: I1125 15:16:54.588288 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-proc-0" podUID="3de7f512-f839-4abf-9ffa-e7d70ba8eac2" containerName="cloudkitty-proc" containerID="cri-o://905d71c1d05052a99a6229a1b8e71d25e32171baa3d1c2c50937c40bdfd49a66" gracePeriod=30 Nov 25 15:16:54 crc kubenswrapper[4806]: I1125 15:16:54.638795 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-dd45f"] Nov 25 15:16:54 crc kubenswrapper[4806]: W1125 15:16:54.673514 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7b2aa87_f218_472e_a8e8_7fe0eaf3b7cd.slice/crio-349135ceeec8f9aa743f3d5e47aa27d2409cc856ed292ccaf4ad6c093978da44 WatchSource:0}: Error finding container 349135ceeec8f9aa743f3d5e47aa27d2409cc856ed292ccaf4ad6c093978da44: Status 404 returned error can't find the container with id 349135ceeec8f9aa743f3d5e47aa27d2409cc856ed292ccaf4ad6c093978da44 Nov 25 15:16:54 crc 
kubenswrapper[4806]: I1125 15:16:54.773973 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-26a0-account-create-vlfqj" Nov 25 15:16:54 crc kubenswrapper[4806]: I1125 15:16:54.805752 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 25 15:16:55 crc kubenswrapper[4806]: I1125 15:16:55.093976 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-d57xj"] Nov 25 15:16:55 crc kubenswrapper[4806]: I1125 15:16:55.122292 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Nov 25 15:16:55 crc kubenswrapper[4806]: I1125 15:16:55.416610 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-t9tkg"] Nov 25 15:16:55 crc kubenswrapper[4806]: I1125 15:16:55.450049 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 15:16:55 crc kubenswrapper[4806]: I1125 15:16:55.492900 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrxwd\" (UniqueName: \"kubernetes.io/projected/2a56466e-77fd-43df-b5a6-234d90b66334-kube-api-access-lrxwd\") pod \"2a56466e-77fd-43df-b5a6-234d90b66334\" (UID: \"2a56466e-77fd-43df-b5a6-234d90b66334\") " Nov 25 15:16:55 crc kubenswrapper[4806]: I1125 15:16:55.494007 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a56466e-77fd-43df-b5a6-234d90b66334-combined-ca-bundle\") pod \"2a56466e-77fd-43df-b5a6-234d90b66334\" (UID: \"2a56466e-77fd-43df-b5a6-234d90b66334\") " Nov 25 15:16:55 crc kubenswrapper[4806]: I1125 15:16:55.494275 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ebe71c6-c22e-4d9d-a3bf-252bde5cd36d\") pod \"2a56466e-77fd-43df-b5a6-234d90b66334\" (UID: \"2a56466e-77fd-43df-b5a6-234d90b66334\") " Nov 25 15:16:55 crc kubenswrapper[4806]: I1125 15:16:55.494399 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a56466e-77fd-43df-b5a6-234d90b66334-internal-tls-certs\") pod \"2a56466e-77fd-43df-b5a6-234d90b66334\" (UID: \"2a56466e-77fd-43df-b5a6-234d90b66334\") " Nov 25 15:16:55 crc kubenswrapper[4806]: I1125 15:16:55.494512 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a56466e-77fd-43df-b5a6-234d90b66334-scripts\") pod \"2a56466e-77fd-43df-b5a6-234d90b66334\" (UID: \"2a56466e-77fd-43df-b5a6-234d90b66334\") " Nov 25 15:16:55 crc kubenswrapper[4806]: I1125 15:16:55.494690 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a56466e-77fd-43df-b5a6-234d90b66334-config-data\") pod \"2a56466e-77fd-43df-b5a6-234d90b66334\" (UID: \"2a56466e-77fd-43df-b5a6-234d90b66334\") " Nov 25 15:16:55 crc kubenswrapper[4806]: I1125 15:16:55.494873 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a56466e-77fd-43df-b5a6-234d90b66334-logs\") pod \"2a56466e-77fd-43df-b5a6-234d90b66334\" (UID: \"2a56466e-77fd-43df-b5a6-234d90b66334\") " Nov 25 15:16:55 crc kubenswrapper[4806]: I1125 15:16:55.495050 4806 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2a56466e-77fd-43df-b5a6-234d90b66334-httpd-run\") pod \"2a56466e-77fd-43df-b5a6-234d90b66334\" (UID: \"2a56466e-77fd-43df-b5a6-234d90b66334\") " Nov 25 15:16:55 crc kubenswrapper[4806]: I1125 15:16:55.517605 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a56466e-77fd-43df-b5a6-234d90b66334-logs" (OuterVolumeSpecName: "logs") pod "2a56466e-77fd-43df-b5a6-234d90b66334" (UID: "2a56466e-77fd-43df-b5a6-234d90b66334"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:16:55 crc kubenswrapper[4806]: I1125 15:16:55.537652 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a56466e-77fd-43df-b5a6-234d90b66334-kube-api-access-lrxwd" (OuterVolumeSpecName: "kube-api-access-lrxwd") pod "2a56466e-77fd-43df-b5a6-234d90b66334" (UID: "2a56466e-77fd-43df-b5a6-234d90b66334"). InnerVolumeSpecName "kube-api-access-lrxwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:16:55 crc kubenswrapper[4806]: I1125 15:16:55.538754 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a56466e-77fd-43df-b5a6-234d90b66334-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2a56466e-77fd-43df-b5a6-234d90b66334" (UID: "2a56466e-77fd-43df-b5a6-234d90b66334"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:16:55 crc kubenswrapper[4806]: I1125 15:16:55.571437 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a56466e-77fd-43df-b5a6-234d90b66334-scripts" (OuterVolumeSpecName: "scripts") pod "2a56466e-77fd-43df-b5a6-234d90b66334" (UID: "2a56466e-77fd-43df-b5a6-234d90b66334"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:55 crc kubenswrapper[4806]: I1125 15:16:55.642032 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-t9tkg" event={"ID":"fd64b415-9694-483d-b17d-aceffd50763a","Type":"ContainerStarted","Data":"cee069a1cb5e4a3144a4fa8c0da0f53c5e39a0a970b7543bab83920b24322c1a"} Nov 25 15:16:55 crc kubenswrapper[4806]: I1125 15:16:55.652762 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ebe71c6-c22e-4d9d-a3bf-252bde5cd36d" (OuterVolumeSpecName: "glance") pod "2a56466e-77fd-43df-b5a6-234d90b66334" (UID: "2a56466e-77fd-43df-b5a6-234d90b66334"). InnerVolumeSpecName "pvc-4ebe71c6-c22e-4d9d-a3bf-252bde5cd36d". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 25 15:16:55 crc kubenswrapper[4806]: I1125 15:16:55.656659 4806 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a56466e-77fd-43df-b5a6-234d90b66334-logs\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:55 crc kubenswrapper[4806]: I1125 15:16:55.656707 4806 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2a56466e-77fd-43df-b5a6-234d90b66334-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:55 crc kubenswrapper[4806]: I1125 15:16:55.656725 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrxwd\" (UniqueName: \"kubernetes.io/projected/2a56466e-77fd-43df-b5a6-234d90b66334-kube-api-access-lrxwd\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:55 crc kubenswrapper[4806]: I1125 15:16:55.656738 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a56466e-77fd-43df-b5a6-234d90b66334-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:55 crc kubenswrapper[4806]: I1125 15:16:55.658139 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-d57xj" event={"ID":"325b6686-f8e5-4ba8-b274-7e3508888807","Type":"ContainerStarted","Data":"53cdee4f3ffefb663799ae7bfeebf4f0fe8e3fda614859c4ce0974e6f03d8672"} Nov 25 15:16:55 crc kubenswrapper[4806]: I1125 15:16:55.665444 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a56466e-77fd-43df-b5a6-234d90b66334-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2a56466e-77fd-43df-b5a6-234d90b66334" (UID: "2a56466e-77fd-43df-b5a6-234d90b66334"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:55 crc kubenswrapper[4806]: I1125 15:16:55.677990 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a56466e-77fd-43df-b5a6-234d90b66334-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a56466e-77fd-43df-b5a6-234d90b66334" (UID: "2a56466e-77fd-43df-b5a6-234d90b66334"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:55 crc kubenswrapper[4806]: I1125 15:16:55.699526 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2a56466e-77fd-43df-b5a6-234d90b66334","Type":"ContainerDied","Data":"cc8c784811b31d841420dbef79f03539a9a3aa70948395363a5e6518654c0fa6"} Nov 25 15:16:55 crc kubenswrapper[4806]: I1125 15:16:55.699579 4806 scope.go:117] "RemoveContainer" containerID="8cf283dc14763b552a799ea513e1f4146ba5c46d2643284e97c7bca12f49f737" Nov 25 15:16:55 crc kubenswrapper[4806]: I1125 15:16:55.699704 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 15:16:55 crc kubenswrapper[4806]: I1125 15:16:55.743858 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"9b9283d4-b401-4efa-b2f0-d14c8b44cf21","Type":"ContainerStarted","Data":"38da8d2f66db7400a4866ae9d4134ebc75fcfc3338975c36fef21af4d6b2ebbe"} Nov 25 15:16:55 crc kubenswrapper[4806]: I1125 15:16:55.758904 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a56466e-77fd-43df-b5a6-234d90b66334-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:55 crc kubenswrapper[4806]: I1125 15:16:55.758943 4806 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-4ebe71c6-c22e-4d9d-a3bf-252bde5cd36d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ebe71c6-c22e-4d9d-a3bf-252bde5cd36d\") on node \"crc\" " Nov 25 15:16:55 crc kubenswrapper[4806]: I1125 15:16:55.758954 4806 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a56466e-77fd-43df-b5a6-234d90b66334-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:55 crc kubenswrapper[4806]: I1125 15:16:55.766662 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dd45f" event={"ID":"c7b2aa87-f218-472e-a8e8-7fe0eaf3b7cd","Type":"ContainerStarted","Data":"75b85ef04466dea5f541526dd316e51ce813b304e50c00e16f985adbf61e36a6"} Nov 25 15:16:55 crc kubenswrapper[4806]: I1125 15:16:55.766696 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dd45f" event={"ID":"c7b2aa87-f218-472e-a8e8-7fe0eaf3b7cd","Type":"ContainerStarted","Data":"349135ceeec8f9aa743f3d5e47aa27d2409cc856ed292ccaf4ad6c093978da44"} Nov 25 15:16:55 crc kubenswrapper[4806]: I1125 15:16:55.838282 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-dd45f" podStartSLOduration=2.838261283 podStartE2EDuration="2.838261283s" podCreationTimestamp="2025-11-25 15:16:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:16:55.817646482 +0000 UTC m=+1448.469788893" watchObservedRunningTime="2025-11-25 15:16:55.838261283 +0000 UTC m=+1448.490403694" Nov 25 15:16:55 crc kubenswrapper[4806]: I1125 15:16:55.859497 4806 scope.go:117] "RemoveContainer" containerID="18234c61d10b2a578b0e7f73ce15bc055485de86ee76ee24627bafff6d25fa84" Nov 25 15:16:55 crc kubenswrapper[4806]: I1125 15:16:55.946953 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a56466e-77fd-43df-b5a6-234d90b66334-config-data" (OuterVolumeSpecName: "config-data") pod "2a56466e-77fd-43df-b5a6-234d90b66334" (UID: "2a56466e-77fd-43df-b5a6-234d90b66334"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:55 crc kubenswrapper[4806]: I1125 15:16:55.955618 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-777b956f44-6v6r5" Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.018806 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a56466e-77fd-43df-b5a6-234d90b66334-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.106621 4806 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.106970 4806 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-4ebe71c6-c22e-4d9d-a3bf-252bde5cd36d" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ebe71c6-c22e-4d9d-a3bf-252bde5cd36d") on node "crc" Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.121286 4806 reconciler_common.go:293] "Volume detached for volume \"pvc-4ebe71c6-c22e-4d9d-a3bf-252bde5cd36d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ebe71c6-c22e-4d9d-a3bf-252bde5cd36d\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.146275 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-e9d6-account-create-f69l5"] Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.244257 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-a493-account-create-cnxrz"] Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.355182 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.368967 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.398195 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-26a0-account-create-vlfqj"] Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.412894 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 15:16:56 crc kubenswrapper[4806]: E1125 15:16:56.413332 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a56466e-77fd-43df-b5a6-234d90b66334" containerName="glance-log" Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.413345 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a56466e-77fd-43df-b5a6-234d90b66334" containerName="glance-log" Nov 25 15:16:56 crc kubenswrapper[4806]: E1125 15:16:56.413380 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a56466e-77fd-43df-b5a6-234d90b66334" containerName="glance-httpd" Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.413386 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a56466e-77fd-43df-b5a6-234d90b66334" containerName="glance-httpd" Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.413601 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a56466e-77fd-43df-b5a6-234d90b66334" containerName="glance-httpd" Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.413625 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a56466e-77fd-43df-b5a6-234d90b66334" containerName="glance-log" Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.414758 4806 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.418582 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.418895 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.436806 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.560139 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/314b444d-00a5-4e80-bc69-07ae78a84ad8-logs\") pod \"glance-default-internal-api-0\" (UID: \"314b444d-00a5-4e80-bc69-07ae78a84ad8\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.560623 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/314b444d-00a5-4e80-bc69-07ae78a84ad8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"314b444d-00a5-4e80-bc69-07ae78a84ad8\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.560777 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t7df\" (UniqueName: \"kubernetes.io/projected/314b444d-00a5-4e80-bc69-07ae78a84ad8-kube-api-access-2t7df\") pod \"glance-default-internal-api-0\" (UID: \"314b444d-00a5-4e80-bc69-07ae78a84ad8\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.560900 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/314b444d-00a5-4e80-bc69-07ae78a84ad8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"314b444d-00a5-4e80-bc69-07ae78a84ad8\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.560950 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/314b444d-00a5-4e80-bc69-07ae78a84ad8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"314b444d-00a5-4e80-bc69-07ae78a84ad8\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.560986 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/314b444d-00a5-4e80-bc69-07ae78a84ad8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"314b444d-00a5-4e80-bc69-07ae78a84ad8\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.561071 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/314b444d-00a5-4e80-bc69-07ae78a84ad8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"314b444d-00a5-4e80-bc69-07ae78a84ad8\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.561122 4806 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4ebe71c6-c22e-4d9d-a3bf-252bde5cd36d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ebe71c6-c22e-4d9d-a3bf-252bde5cd36d\") pod \"glance-default-internal-api-0\" (UID: \"314b444d-00a5-4e80-bc69-07ae78a84ad8\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.690573 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/314b444d-00a5-4e80-bc69-07ae78a84ad8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"314b444d-00a5-4e80-bc69-07ae78a84ad8\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.690633 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/314b444d-00a5-4e80-bc69-07ae78a84ad8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"314b444d-00a5-4e80-bc69-07ae78a84ad8\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.690665 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/314b444d-00a5-4e80-bc69-07ae78a84ad8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"314b444d-00a5-4e80-bc69-07ae78a84ad8\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.690728 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/314b444d-00a5-4e80-bc69-07ae78a84ad8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"314b444d-00a5-4e80-bc69-07ae78a84ad8\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.690773 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4ebe71c6-c22e-4d9d-a3bf-252bde5cd36d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ebe71c6-c22e-4d9d-a3bf-252bde5cd36d\") pod \"glance-default-internal-api-0\" (UID: \"314b444d-00a5-4e80-bc69-07ae78a84ad8\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.690838 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/314b444d-00a5-4e80-bc69-07ae78a84ad8-logs\") pod \"glance-default-internal-api-0\" (UID: \"314b444d-00a5-4e80-bc69-07ae78a84ad8\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.690914 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/314b444d-00a5-4e80-bc69-07ae78a84ad8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"314b444d-00a5-4e80-bc69-07ae78a84ad8\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.690988 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t7df\" (UniqueName: \"kubernetes.io/projected/314b444d-00a5-4e80-bc69-07ae78a84ad8-kube-api-access-2t7df\") pod \"glance-default-internal-api-0\" (UID: \"314b444d-00a5-4e80-bc69-07ae78a84ad8\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:16:56 crc 
kubenswrapper[4806]: I1125 15:16:56.714045 4806 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.714105 4806 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4ebe71c6-c22e-4d9d-a3bf-252bde5cd36d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ebe71c6-c22e-4d9d-a3bf-252bde5cd36d\") pod \"glance-default-internal-api-0\" (UID: \"314b444d-00a5-4e80-bc69-07ae78a84ad8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8638b1ae13d11aa578ec8268990588ab56d879a16e582695b5a3249a11d12f4b/globalmount\"" pod="openstack/glance-default-internal-api-0" Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.718768 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/314b444d-00a5-4e80-bc69-07ae78a84ad8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"314b444d-00a5-4e80-bc69-07ae78a84ad8\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.723252 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/314b444d-00a5-4e80-bc69-07ae78a84ad8-logs\") pod \"glance-default-internal-api-0\" (UID: \"314b444d-00a5-4e80-bc69-07ae78a84ad8\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.762116 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/314b444d-00a5-4e80-bc69-07ae78a84ad8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"314b444d-00a5-4e80-bc69-07ae78a84ad8\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.765882 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t7df\" (UniqueName: \"kubernetes.io/projected/314b444d-00a5-4e80-bc69-07ae78a84ad8-kube-api-access-2t7df\") pod \"glance-default-internal-api-0\" (UID: \"314b444d-00a5-4e80-bc69-07ae78a84ad8\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.766953 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/314b444d-00a5-4e80-bc69-07ae78a84ad8-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"314b444d-00a5-4e80-bc69-07ae78a84ad8\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.767917 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/314b444d-00a5-4e80-bc69-07ae78a84ad8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"314b444d-00a5-4e80-bc69-07ae78a84ad8\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.778976 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/314b444d-00a5-4e80-bc69-07ae78a84ad8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"314b444d-00a5-4e80-bc69-07ae78a84ad8\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.801927 4806 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"pvc-4ebe71c6-c22e-4d9d-a3bf-252bde5cd36d\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ebe71c6-c22e-4d9d-a3bf-252bde5cd36d\") pod \"glance-default-internal-api-0\" (UID: \"314b444d-00a5-4e80-bc69-07ae78a84ad8\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.829717 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a493-account-create-cnxrz" event={"ID":"7defc7dc-b7b6-4302-82ed-15edce4862b3","Type":"ContainerStarted","Data":"051377af5c7afb8e515f5d22abc58b2ae81c6232ca1b4fb020eeb67c4e2622e2"} Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.841536 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-e9d6-account-create-f69l5" event={"ID":"c6b52df6-253b-4082-8e20-dc729af9ce15","Type":"ContainerStarted","Data":"19e263f512d4b524aeaaaa53e3b9bd3edf504d50a8932faef99049960047aa60"} Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.847059 4806 generic.go:334] "Generic (PLEG): container finished" podID="fd64b415-9694-483d-b17d-aceffd50763a" containerID="400c011d07c55c1d8a814cdfa3278ffee3ae767ab54f9a17b167816e4ad0a723" exitCode=0 Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.847129 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-t9tkg" event={"ID":"fd64b415-9694-483d-b17d-aceffd50763a","Type":"ContainerDied","Data":"400c011d07c55c1d8a814cdfa3278ffee3ae767ab54f9a17b167816e4ad0a723"} Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.862225 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-26a0-account-create-vlfqj" event={"ID":"4e92cdcb-b78b-47cb-ba65-9167485d9795","Type":"ContainerStarted","Data":"0040f0b481289b67f4c79afb0f4d91ec1d283db001e6700da069c21898404005"} Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.868869 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.877547 4806 generic.go:334] "Generic (PLEG): container finished" podID="325b6686-f8e5-4ba8-b274-7e3508888807" containerID="2a8de8c8212ee429202474aa901025b9a3f54e94eff6b9698841f94008541e06" exitCode=0 Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.878009 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-d57xj" event={"ID":"325b6686-f8e5-4ba8-b274-7e3508888807","Type":"ContainerDied","Data":"2a8de8c8212ee429202474aa901025b9a3f54e94eff6b9698841f94008541e06"} Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.884478 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"9b9283d4-b401-4efa-b2f0-d14c8b44cf21","Type":"ContainerStarted","Data":"061432f969d196d6d3241f1e507b6c98530bec97bc8b0f2adbac7d7c3f6c3b2c"} Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.905172 4806 generic.go:334] "Generic (PLEG): container finished" podID="c7b2aa87-f218-472e-a8e8-7fe0eaf3b7cd" containerID="75b85ef04466dea5f541526dd316e51ce813b304e50c00e16f985adbf61e36a6" exitCode=0 Nov 25 15:16:56 crc kubenswrapper[4806]: I1125 15:16:56.905221 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dd45f" event={"ID":"c7b2aa87-f218-472e-a8e8-7fe0eaf3b7cd","Type":"ContainerDied","Data":"75b85ef04466dea5f541526dd316e51ce813b304e50c00e16f985adbf61e36a6"} Nov 25 15:16:57 crc kubenswrapper[4806]: I1125 15:16:57.669160 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 15:16:57 crc kubenswrapper[4806]: I1125 15:16:57.930256 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67bdc55879-l8khz" Nov 25 15:16:57 crc kubenswrapper[4806]: I1125 15:16:57.978220 4806 generic.go:334] "Generic (PLEG): container finished" podID="7defc7dc-b7b6-4302-82ed-15edce4862b3" containerID="ba3f217dfe744df9233407d1e8e42525d299c0dbea011265bbc237093d9329af" exitCode=0 Nov 25 15:16:57 crc kubenswrapper[4806]: I1125 15:16:57.978291 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a493-account-create-cnxrz" event={"ID":"7defc7dc-b7b6-4302-82ed-15edce4862b3","Type":"ContainerDied","Data":"ba3f217dfe744df9233407d1e8e42525d299c0dbea011265bbc237093d9329af"} Nov 25 15:16:57 crc kubenswrapper[4806]: I1125 15:16:57.987256 4806 generic.go:334] "Generic (PLEG): container finished" podID="c6b52df6-253b-4082-8e20-dc729af9ce15" containerID="4256cd15c189bc95acd3319070e18e3dfea95b540784c5b81e34178ca2c35ef5" exitCode=0 Nov 25 15:16:57 crc kubenswrapper[4806]: I1125 15:16:57.987355 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-e9d6-account-create-f69l5" event={"ID":"c6b52df6-253b-4082-8e20-dc729af9ce15","Type":"ContainerDied","Data":"4256cd15c189bc95acd3319070e18e3dfea95b540784c5b81e34178ca2c35ef5"} Nov 25 15:16:57 crc kubenswrapper[4806]: I1125 15:16:57.996223 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-lftzq"] Nov 25 15:16:57 crc kubenswrapper[4806]: I1125 15:16:57.996887 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-lftzq" podUID="05f719ae-33a1-44c1-9f80-2d7f644e34c2" containerName="dnsmasq-dns" containerID="cri-o://6d0662e0bd15acfc8f95073ac820cb456941b29b7161847518addc2ed0124565" gracePeriod=10 Nov 25 15:16:58 crc 
kubenswrapper[4806]: I1125 15:16:57.999992 4806 generic.go:334] "Generic (PLEG): container finished" podID="4e92cdcb-b78b-47cb-ba65-9167485d9795" containerID="849013106acde87055135120c927c30c01399c6614e433c6deeeac64f4b10fbb" exitCode=0 Nov 25 15:16:58 crc kubenswrapper[4806]: I1125 15:16:58.000060 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-26a0-account-create-vlfqj" event={"ID":"4e92cdcb-b78b-47cb-ba65-9167485d9795","Type":"ContainerDied","Data":"849013106acde87055135120c927c30c01399c6614e433c6deeeac64f4b10fbb"} Nov 25 15:16:58 crc kubenswrapper[4806]: I1125 15:16:58.003787 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"314b444d-00a5-4e80-bc69-07ae78a84ad8","Type":"ContainerStarted","Data":"c57a2a8b29dcb22e55cef3978cfecc03723b12cc1817153d88f745d42bc65e62"} Nov 25 15:16:58 crc kubenswrapper[4806]: I1125 15:16:58.009239 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"9b9283d4-b401-4efa-b2f0-d14c8b44cf21","Type":"ContainerStarted","Data":"72bcd53b1541868263c430d835347a7719546fd3980d138a847d6545e0b454b2"} Nov 25 15:16:58 crc kubenswrapper[4806]: I1125 15:16:58.009369 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-api-0" Nov 25 15:16:58 crc kubenswrapper[4806]: I1125 15:16:58.111013 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-api-0" podStartSLOduration=5.110996168 podStartE2EDuration="5.110996168s" podCreationTimestamp="2025-11-25 15:16:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:16:58.102110043 +0000 UTC m=+1450.754252464" watchObservedRunningTime="2025-11-25 15:16:58.110996168 +0000 UTC m=+1450.763138579" Nov 25 15:16:58 crc kubenswrapper[4806]: I1125 15:16:58.204803 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a56466e-77fd-43df-b5a6-234d90b66334" path="/var/lib/kubelet/pods/2a56466e-77fd-43df-b5a6-234d90b66334/volumes" Nov 25 15:16:58 crc kubenswrapper[4806]: I1125 15:16:58.763918 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-d57xj" Nov 25 15:16:58 crc kubenswrapper[4806]: I1125 15:16:58.889794 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfvbw\" (UniqueName: \"kubernetes.io/projected/325b6686-f8e5-4ba8-b274-7e3508888807-kube-api-access-kfvbw\") pod \"325b6686-f8e5-4ba8-b274-7e3508888807\" (UID: \"325b6686-f8e5-4ba8-b274-7e3508888807\") " Nov 25 15:16:58 crc kubenswrapper[4806]: I1125 15:16:58.891093 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/325b6686-f8e5-4ba8-b274-7e3508888807-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "325b6686-f8e5-4ba8-b274-7e3508888807" (UID: "325b6686-f8e5-4ba8-b274-7e3508888807"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:16:58 crc kubenswrapper[4806]: I1125 15:16:58.891587 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/325b6686-f8e5-4ba8-b274-7e3508888807-operator-scripts\") pod \"325b6686-f8e5-4ba8-b274-7e3508888807\" (UID: \"325b6686-f8e5-4ba8-b274-7e3508888807\") " Nov 25 15:16:58 crc kubenswrapper[4806]: I1125 15:16:58.892942 4806 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/325b6686-f8e5-4ba8-b274-7e3508888807-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:58 crc kubenswrapper[4806]: I1125 15:16:58.900913 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/325b6686-f8e5-4ba8-b274-7e3508888807-kube-api-access-kfvbw" (OuterVolumeSpecName: "kube-api-access-kfvbw") pod "325b6686-f8e5-4ba8-b274-7e3508888807" (UID: "325b6686-f8e5-4ba8-b274-7e3508888807"). InnerVolumeSpecName "kube-api-access-kfvbw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:16:58 crc kubenswrapper[4806]: I1125 15:16:58.915848 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-t9tkg" Nov 25 15:16:58 crc kubenswrapper[4806]: I1125 15:16:58.952621 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-lftzq" Nov 25 15:16:58 crc kubenswrapper[4806]: I1125 15:16:58.967189 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dd45f" Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:58.996037 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd64b415-9694-483d-b17d-aceffd50763a-operator-scripts\") pod \"fd64b415-9694-483d-b17d-aceffd50763a\" (UID: \"fd64b415-9694-483d-b17d-aceffd50763a\") " Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:58.996263 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m75gn\" (UniqueName: \"kubernetes.io/projected/fd64b415-9694-483d-b17d-aceffd50763a-kube-api-access-m75gn\") pod \"fd64b415-9694-483d-b17d-aceffd50763a\" (UID: \"fd64b415-9694-483d-b17d-aceffd50763a\") " Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:58.996827 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfvbw\" (UniqueName: \"kubernetes.io/projected/325b6686-f8e5-4ba8-b274-7e3508888807-kube-api-access-kfvbw\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:58.997548 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd64b415-9694-483d-b17d-aceffd50763a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fd64b415-9694-483d-b17d-aceffd50763a" (UID: "fd64b415-9694-483d-b17d-aceffd50763a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.001369 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd64b415-9694-483d-b17d-aceffd50763a-kube-api-access-m75gn" (OuterVolumeSpecName: "kube-api-access-m75gn") pod "fd64b415-9694-483d-b17d-aceffd50763a" (UID: "fd64b415-9694-483d-b17d-aceffd50763a"). 
InnerVolumeSpecName "kube-api-access-m75gn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.058854 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"314b444d-00a5-4e80-bc69-07ae78a84ad8","Type":"ContainerStarted","Data":"5c10d030127cd7e3ff522849473564efa2acbcdc332f2ea1a626abebc7431f12"} Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.072945 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dd45f" Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.072960 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dd45f" event={"ID":"c7b2aa87-f218-472e-a8e8-7fe0eaf3b7cd","Type":"ContainerDied","Data":"349135ceeec8f9aa743f3d5e47aa27d2409cc856ed292ccaf4ad6c093978da44"} Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.073006 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="349135ceeec8f9aa743f3d5e47aa27d2409cc856ed292ccaf4ad6c093978da44" Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.078761 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-t9tkg" event={"ID":"fd64b415-9694-483d-b17d-aceffd50763a","Type":"ContainerDied","Data":"cee069a1cb5e4a3144a4fa8c0da0f53c5e39a0a970b7543bab83920b24322c1a"} Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.078799 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cee069a1cb5e4a3144a4fa8c0da0f53c5e39a0a970b7543bab83920b24322c1a" Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.078852 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-t9tkg" Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.087037 4806 generic.go:334] "Generic (PLEG): container finished" podID="05f719ae-33a1-44c1-9f80-2d7f644e34c2" containerID="6d0662e0bd15acfc8f95073ac820cb456941b29b7161847518addc2ed0124565" exitCode=0 Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.087108 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-lftzq" event={"ID":"05f719ae-33a1-44c1-9f80-2d7f644e34c2","Type":"ContainerDied","Data":"6d0662e0bd15acfc8f95073ac820cb456941b29b7161847518addc2ed0124565"} Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.087140 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-lftzq" event={"ID":"05f719ae-33a1-44c1-9f80-2d7f644e34c2","Type":"ContainerDied","Data":"3443b8ede70dbd5c011bb4b59557d6f2d7b4b10096d23be2d64ef616e02b21ea"} Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.087163 4806 scope.go:117] "RemoveContainer" containerID="6d0662e0bd15acfc8f95073ac820cb456941b29b7161847518addc2ed0124565" Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.087308 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-lftzq" Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.101814 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/05f719ae-33a1-44c1-9f80-2d7f644e34c2-dns-swift-storage-0\") pod \"05f719ae-33a1-44c1-9f80-2d7f644e34c2\" (UID: \"05f719ae-33a1-44c1-9f80-2d7f644e34c2\") " Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.101901 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7b2aa87-f218-472e-a8e8-7fe0eaf3b7cd-operator-scripts\") pod \"c7b2aa87-f218-472e-a8e8-7fe0eaf3b7cd\" (UID: \"c7b2aa87-f218-472e-a8e8-7fe0eaf3b7cd\") " Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.101948 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pfjp\" (UniqueName: \"kubernetes.io/projected/05f719ae-33a1-44c1-9f80-2d7f644e34c2-kube-api-access-2pfjp\") pod \"05f719ae-33a1-44c1-9f80-2d7f644e34c2\" (UID: \"05f719ae-33a1-44c1-9f80-2d7f644e34c2\") " Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.102068 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05f719ae-33a1-44c1-9f80-2d7f644e34c2-dns-svc\") pod \"05f719ae-33a1-44c1-9f80-2d7f644e34c2\" (UID: \"05f719ae-33a1-44c1-9f80-2d7f644e34c2\") " Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.102093 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05f719ae-33a1-44c1-9f80-2d7f644e34c2-config\") pod \"05f719ae-33a1-44c1-9f80-2d7f644e34c2\" (UID: \"05f719ae-33a1-44c1-9f80-2d7f644e34c2\") " Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.102121 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwcnv\" (UniqueName: \"kubernetes.io/projected/c7b2aa87-f218-472e-a8e8-7fe0eaf3b7cd-kube-api-access-fwcnv\") pod \"c7b2aa87-f218-472e-a8e8-7fe0eaf3b7cd\" (UID: \"c7b2aa87-f218-472e-a8e8-7fe0eaf3b7cd\") " Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.102147 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05f719ae-33a1-44c1-9f80-2d7f644e34c2-ovsdbserver-nb\") pod \"05f719ae-33a1-44c1-9f80-2d7f644e34c2\" (UID: \"05f719ae-33a1-44c1-9f80-2d7f644e34c2\") " Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.102217 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05f719ae-33a1-44c1-9f80-2d7f644e34c2-ovsdbserver-sb\") pod \"05f719ae-33a1-44c1-9f80-2d7f644e34c2\" (UID: \"05f719ae-33a1-44c1-9f80-2d7f644e34c2\") " Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.102369 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7b2aa87-f218-472e-a8e8-7fe0eaf3b7cd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c7b2aa87-f218-472e-a8e8-7fe0eaf3b7cd" (UID: "c7b2aa87-f218-472e-a8e8-7fe0eaf3b7cd"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.102704 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m75gn\" (UniqueName: \"kubernetes.io/projected/fd64b415-9694-483d-b17d-aceffd50763a-kube-api-access-m75gn\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.102724 4806 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd64b415-9694-483d-b17d-aceffd50763a-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.102733 4806 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7b2aa87-f218-472e-a8e8-7fe0eaf3b7cd-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.105740 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-d57xj" Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.110535 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-d57xj" event={"ID":"325b6686-f8e5-4ba8-b274-7e3508888807","Type":"ContainerDied","Data":"53cdee4f3ffefb663799ae7bfeebf4f0fe8e3fda614859c4ce0974e6f03d8672"} Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.110589 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53cdee4f3ffefb663799ae7bfeebf4f0fe8e3fda614859c4ce0974e6f03d8672" Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.117582 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05f719ae-33a1-44c1-9f80-2d7f644e34c2-kube-api-access-2pfjp" (OuterVolumeSpecName: "kube-api-access-2pfjp") pod "05f719ae-33a1-44c1-9f80-2d7f644e34c2" (UID: "05f719ae-33a1-44c1-9f80-2d7f644e34c2"). InnerVolumeSpecName "kube-api-access-2pfjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.118218 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.118246 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.138805 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7b2aa87-f218-472e-a8e8-7fe0eaf3b7cd-kube-api-access-fwcnv" (OuterVolumeSpecName: "kube-api-access-fwcnv") pod "c7b2aa87-f218-472e-a8e8-7fe0eaf3b7cd" (UID: "c7b2aa87-f218-472e-a8e8-7fe0eaf3b7cd"). InnerVolumeSpecName "kube-api-access-fwcnv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.144454 4806 scope.go:117] "RemoveContainer" containerID="6ac8edae7269d34937b5f457e106dcd479a99f1cafb3e1a17ad3365069bb26df" Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.198197 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05f719ae-33a1-44c1-9f80-2d7f644e34c2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "05f719ae-33a1-44c1-9f80-2d7f644e34c2" (UID: "05f719ae-33a1-44c1-9f80-2d7f644e34c2"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.206363 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pfjp\" (UniqueName: \"kubernetes.io/projected/05f719ae-33a1-44c1-9f80-2d7f644e34c2-kube-api-access-2pfjp\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.206416 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwcnv\" (UniqueName: \"kubernetes.io/projected/c7b2aa87-f218-472e-a8e8-7fe0eaf3b7cd-kube-api-access-fwcnv\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.206429 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/05f719ae-33a1-44c1-9f80-2d7f644e34c2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.214985 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.216240 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05f719ae-33a1-44c1-9f80-2d7f644e34c2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "05f719ae-33a1-44c1-9f80-2d7f644e34c2" (UID: "05f719ae-33a1-44c1-9f80-2d7f644e34c2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.226066 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05f719ae-33a1-44c1-9f80-2d7f644e34c2-config" (OuterVolumeSpecName: "config") pod "05f719ae-33a1-44c1-9f80-2d7f644e34c2" (UID: "05f719ae-33a1-44c1-9f80-2d7f644e34c2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.261428 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.271154 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05f719ae-33a1-44c1-9f80-2d7f644e34c2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "05f719ae-33a1-44c1-9f80-2d7f644e34c2" (UID: "05f719ae-33a1-44c1-9f80-2d7f644e34c2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.281986 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05f719ae-33a1-44c1-9f80-2d7f644e34c2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "05f719ae-33a1-44c1-9f80-2d7f644e34c2" (UID: "05f719ae-33a1-44c1-9f80-2d7f644e34c2"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.308345 4806 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/05f719ae-33a1-44c1-9f80-2d7f644e34c2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.308371 4806 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/05f719ae-33a1-44c1-9f80-2d7f644e34c2-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.308380 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05f719ae-33a1-44c1-9f80-2d7f644e34c2-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.308391 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/05f719ae-33a1-44c1-9f80-2d7f644e34c2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.376908 4806 scope.go:117] "RemoveContainer" containerID="6d0662e0bd15acfc8f95073ac820cb456941b29b7161847518addc2ed0124565" Nov 25 15:16:59 crc kubenswrapper[4806]: E1125 15:16:59.377512 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d0662e0bd15acfc8f95073ac820cb456941b29b7161847518addc2ed0124565\": container with ID starting with 6d0662e0bd15acfc8f95073ac820cb456941b29b7161847518addc2ed0124565 not found: ID does not exist" containerID="6d0662e0bd15acfc8f95073ac820cb456941b29b7161847518addc2ed0124565" Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.377563 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d0662e0bd15acfc8f95073ac820cb456941b29b7161847518addc2ed0124565"} err="failed to get container status \"6d0662e0bd15acfc8f95073ac820cb456941b29b7161847518addc2ed0124565\": rpc error: code = NotFound desc = could not find container \"6d0662e0bd15acfc8f95073ac820cb456941b29b7161847518addc2ed0124565\": container with ID starting with 6d0662e0bd15acfc8f95073ac820cb456941b29b7161847518addc2ed0124565 not found: ID does not exist" Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.377590 4806 scope.go:117] "RemoveContainer" containerID="6ac8edae7269d34937b5f457e106dcd479a99f1cafb3e1a17ad3365069bb26df" Nov 25 15:16:59 crc kubenswrapper[4806]: E1125 15:16:59.378093 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ac8edae7269d34937b5f457e106dcd479a99f1cafb3e1a17ad3365069bb26df\": container with ID starting with 6ac8edae7269d34937b5f457e106dcd479a99f1cafb3e1a17ad3365069bb26df not found: ID does not exist" containerID="6ac8edae7269d34937b5f457e106dcd479a99f1cafb3e1a17ad3365069bb26df" Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.378117 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ac8edae7269d34937b5f457e106dcd479a99f1cafb3e1a17ad3365069bb26df"} err="failed to get container status \"6ac8edae7269d34937b5f457e106dcd479a99f1cafb3e1a17ad3365069bb26df\": rpc error: code = NotFound desc = could not find container \"6ac8edae7269d34937b5f457e106dcd479a99f1cafb3e1a17ad3365069bb26df\": container with ID starting with 6ac8edae7269d34937b5f457e106dcd479a99f1cafb3e1a17ad3365069bb26df 
not found: ID does not exist" Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.454615 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-lftzq"] Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.471497 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-lftzq"] Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.675408 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-e9d6-account-create-f69l5" Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.816002 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6b52df6-253b-4082-8e20-dc729af9ce15-operator-scripts\") pod \"c6b52df6-253b-4082-8e20-dc729af9ce15\" (UID: \"c6b52df6-253b-4082-8e20-dc729af9ce15\") " Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.816482 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9w97\" (UniqueName: \"kubernetes.io/projected/c6b52df6-253b-4082-8e20-dc729af9ce15-kube-api-access-l9w97\") pod \"c6b52df6-253b-4082-8e20-dc729af9ce15\" (UID: \"c6b52df6-253b-4082-8e20-dc729af9ce15\") " Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.820600 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6b52df6-253b-4082-8e20-dc729af9ce15-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c6b52df6-253b-4082-8e20-dc729af9ce15" (UID: "c6b52df6-253b-4082-8e20-dc729af9ce15"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.828054 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6b52df6-253b-4082-8e20-dc729af9ce15-kube-api-access-l9w97" (OuterVolumeSpecName: "kube-api-access-l9w97") pod "c6b52df6-253b-4082-8e20-dc729af9ce15" (UID: "c6b52df6-253b-4082-8e20-dc729af9ce15"). InnerVolumeSpecName "kube-api-access-l9w97". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.851284 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-26a0-account-create-vlfqj" Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.859732 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-a493-account-create-cnxrz" Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.918616 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vm4n\" (UniqueName: \"kubernetes.io/projected/4e92cdcb-b78b-47cb-ba65-9167485d9795-kube-api-access-5vm4n\") pod \"4e92cdcb-b78b-47cb-ba65-9167485d9795\" (UID: \"4e92cdcb-b78b-47cb-ba65-9167485d9795\") " Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.918835 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e92cdcb-b78b-47cb-ba65-9167485d9795-operator-scripts\") pod \"4e92cdcb-b78b-47cb-ba65-9167485d9795\" (UID: \"4e92cdcb-b78b-47cb-ba65-9167485d9795\") " Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.918901 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zctw\" (UniqueName: \"kubernetes.io/projected/7defc7dc-b7b6-4302-82ed-15edce4862b3-kube-api-access-6zctw\") pod \"7defc7dc-b7b6-4302-82ed-15edce4862b3\" (UID: \"7defc7dc-b7b6-4302-82ed-15edce4862b3\") " Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.919032 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7defc7dc-b7b6-4302-82ed-15edce4862b3-operator-scripts\") pod \"7defc7dc-b7b6-4302-82ed-15edce4862b3\" (UID: \"7defc7dc-b7b6-4302-82ed-15edce4862b3\") " Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.919553 4806 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6b52df6-253b-4082-8e20-dc729af9ce15-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.919572 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9w97\" (UniqueName: \"kubernetes.io/projected/c6b52df6-253b-4082-8e20-dc729af9ce15-kube-api-access-l9w97\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.919653 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7defc7dc-b7b6-4302-82ed-15edce4862b3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7defc7dc-b7b6-4302-82ed-15edce4862b3" (UID: "7defc7dc-b7b6-4302-82ed-15edce4862b3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.919668 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e92cdcb-b78b-47cb-ba65-9167485d9795-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4e92cdcb-b78b-47cb-ba65-9167485d9795" (UID: "4e92cdcb-b78b-47cb-ba65-9167485d9795"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.921808 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e92cdcb-b78b-47cb-ba65-9167485d9795-kube-api-access-5vm4n" (OuterVolumeSpecName: "kube-api-access-5vm4n") pod "4e92cdcb-b78b-47cb-ba65-9167485d9795" (UID: "4e92cdcb-b78b-47cb-ba65-9167485d9795"). InnerVolumeSpecName "kube-api-access-5vm4n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:16:59 crc kubenswrapper[4806]: I1125 15:16:59.925458 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7defc7dc-b7b6-4302-82ed-15edce4862b3-kube-api-access-6zctw" (OuterVolumeSpecName: "kube-api-access-6zctw") pod "7defc7dc-b7b6-4302-82ed-15edce4862b3" (UID: "7defc7dc-b7b6-4302-82ed-15edce4862b3"). InnerVolumeSpecName "kube-api-access-6zctw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:17:00 crc kubenswrapper[4806]: I1125 15:17:00.021324 4806 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7defc7dc-b7b6-4302-82ed-15edce4862b3-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:00 crc kubenswrapper[4806]: I1125 15:17:00.021369 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vm4n\" (UniqueName: \"kubernetes.io/projected/4e92cdcb-b78b-47cb-ba65-9167485d9795-kube-api-access-5vm4n\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:00 crc kubenswrapper[4806]: I1125 15:17:00.021383 4806 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4e92cdcb-b78b-47cb-ba65-9167485d9795-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:00 crc kubenswrapper[4806]: I1125 15:17:00.021393 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zctw\" (UniqueName: \"kubernetes.io/projected/7defc7dc-b7b6-4302-82ed-15edce4862b3-kube-api-access-6zctw\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:00 crc kubenswrapper[4806]: I1125 15:17:00.100793 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05f719ae-33a1-44c1-9f80-2d7f644e34c2" path="/var/lib/kubelet/pods/05f719ae-33a1-44c1-9f80-2d7f644e34c2/volumes" Nov 25 15:17:00 crc kubenswrapper[4806]: I1125 15:17:00.126185 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-a493-account-create-cnxrz" event={"ID":"7defc7dc-b7b6-4302-82ed-15edce4862b3","Type":"ContainerDied","Data":"051377af5c7afb8e515f5d22abc58b2ae81c6232ca1b4fb020eeb67c4e2622e2"} Nov 25 15:17:00 crc kubenswrapper[4806]: I1125 15:17:00.126223 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="051377af5c7afb8e515f5d22abc58b2ae81c6232ca1b4fb020eeb67c4e2622e2" Nov 25 15:17:00 crc kubenswrapper[4806]: I1125 15:17:00.126268 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-a493-account-create-cnxrz" Nov 25 15:17:00 crc kubenswrapper[4806]: I1125 15:17:00.132275 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-e9d6-account-create-f69l5" event={"ID":"c6b52df6-253b-4082-8e20-dc729af9ce15","Type":"ContainerDied","Data":"19e263f512d4b524aeaaaa53e3b9bd3edf504d50a8932faef99049960047aa60"} Nov 25 15:17:00 crc kubenswrapper[4806]: I1125 15:17:00.132382 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19e263f512d4b524aeaaaa53e3b9bd3edf504d50a8932faef99049960047aa60" Nov 25 15:17:00 crc kubenswrapper[4806]: I1125 15:17:00.132426 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-e9d6-account-create-f69l5" Nov 25 15:17:00 crc kubenswrapper[4806]: I1125 15:17:00.133973 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-26a0-account-create-vlfqj" event={"ID":"4e92cdcb-b78b-47cb-ba65-9167485d9795","Type":"ContainerDied","Data":"0040f0b481289b67f4c79afb0f4d91ec1d283db001e6700da069c21898404005"} Nov 25 15:17:00 crc kubenswrapper[4806]: I1125 15:17:00.134033 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0040f0b481289b67f4c79afb0f4d91ec1d283db001e6700da069c21898404005" Nov 25 15:17:00 crc kubenswrapper[4806]: I1125 15:17:00.134102 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-26a0-account-create-vlfqj" Nov 25 15:17:00 crc kubenswrapper[4806]: I1125 15:17:00.143174 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"314b444d-00a5-4e80-bc69-07ae78a84ad8","Type":"ContainerStarted","Data":"2a2eac0ad2415f99533ea80eb64a2cef94ac1effc5b10af789b2609b9f6da221"} Nov 25 15:17:00 crc kubenswrapper[4806]: I1125 15:17:00.143213 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 25 15:17:00 crc kubenswrapper[4806]: I1125 15:17:00.143922 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 25 15:17:00 crc kubenswrapper[4806]: I1125 15:17:00.183020 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.182994485 podStartE2EDuration="4.182994485s" podCreationTimestamp="2025-11-25 15:16:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:17:00.165609956 +0000 UTC m=+1452.817752387" watchObservedRunningTime="2025-11-25 15:17:00.182994485 +0000 UTC m=+1452.835136906" Nov 25 15:17:00 crc kubenswrapper[4806]: I1125 15:17:00.938910 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-77qk4" podUID="19d636cf-e82d-48c3-82db-321f0505c5ab" containerName="registry-server" probeResult="failure" output=< Nov 25 15:17:00 crc kubenswrapper[4806]: timeout: failed to connect service ":50051" within 1s Nov 25 15:17:00 crc kubenswrapper[4806]: > Nov 25 15:17:01 crc kubenswrapper[4806]: I1125 15:17:01.004369 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5546966469-bclkx" Nov 25 15:17:01 crc kubenswrapper[4806]: I1125 15:17:01.086552 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-777b956f44-6v6r5"] Nov 25 15:17:01 crc kubenswrapper[4806]: I1125 15:17:01.086786 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-777b956f44-6v6r5" podUID="23ba80fd-113a-4a97-bca6-2348a1aa4917" containerName="neutron-api" containerID="cri-o://1cedb05810f06eea5884c14673600d408d6e60bc9da95e0848407dc26166bd52" gracePeriod=30 Nov 25 15:17:01 crc kubenswrapper[4806]: I1125 15:17:01.086895 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-777b956f44-6v6r5" podUID="23ba80fd-113a-4a97-bca6-2348a1aa4917" containerName="neutron-httpd" containerID="cri-o://a942ad3f505747fa608ab453fe618393954bc7f8eef61b1a305b5ef9d5505032" gracePeriod=30 Nov 25 15:17:01 crc 
kubenswrapper[4806]: I1125 15:17:01.753141 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:17:01 crc kubenswrapper[4806]: I1125 15:17:01.858567 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15a33776-cbde-4c55-9a3f-dc2e2cbd7de4-combined-ca-bundle\") pod \"15a33776-cbde-4c55-9a3f-dc2e2cbd7de4\" (UID: \"15a33776-cbde-4c55-9a3f-dc2e2cbd7de4\") " Nov 25 15:17:01 crc kubenswrapper[4806]: I1125 15:17:01.858768 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15a33776-cbde-4c55-9a3f-dc2e2cbd7de4-run-httpd\") pod \"15a33776-cbde-4c55-9a3f-dc2e2cbd7de4\" (UID: \"15a33776-cbde-4c55-9a3f-dc2e2cbd7de4\") " Nov 25 15:17:01 crc kubenswrapper[4806]: I1125 15:17:01.858873 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15a33776-cbde-4c55-9a3f-dc2e2cbd7de4-sg-core-conf-yaml\") pod \"15a33776-cbde-4c55-9a3f-dc2e2cbd7de4\" (UID: \"15a33776-cbde-4c55-9a3f-dc2e2cbd7de4\") " Nov 25 15:17:01 crc kubenswrapper[4806]: I1125 15:17:01.858908 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15a33776-cbde-4c55-9a3f-dc2e2cbd7de4-config-data\") pod \"15a33776-cbde-4c55-9a3f-dc2e2cbd7de4\" (UID: \"15a33776-cbde-4c55-9a3f-dc2e2cbd7de4\") " Nov 25 15:17:01 crc kubenswrapper[4806]: I1125 15:17:01.858932 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15a33776-cbde-4c55-9a3f-dc2e2cbd7de4-log-httpd\") pod \"15a33776-cbde-4c55-9a3f-dc2e2cbd7de4\" (UID: \"15a33776-cbde-4c55-9a3f-dc2e2cbd7de4\") " Nov 25 15:17:01 crc kubenswrapper[4806]: I1125 15:17:01.858954 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15a33776-cbde-4c55-9a3f-dc2e2cbd7de4-scripts\") pod \"15a33776-cbde-4c55-9a3f-dc2e2cbd7de4\" (UID: \"15a33776-cbde-4c55-9a3f-dc2e2cbd7de4\") " Nov 25 15:17:01 crc kubenswrapper[4806]: I1125 15:17:01.859010 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsf7m\" (UniqueName: \"kubernetes.io/projected/15a33776-cbde-4c55-9a3f-dc2e2cbd7de4-kube-api-access-nsf7m\") pod \"15a33776-cbde-4c55-9a3f-dc2e2cbd7de4\" (UID: \"15a33776-cbde-4c55-9a3f-dc2e2cbd7de4\") " Nov 25 15:17:01 crc kubenswrapper[4806]: I1125 15:17:01.859720 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15a33776-cbde-4c55-9a3f-dc2e2cbd7de4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "15a33776-cbde-4c55-9a3f-dc2e2cbd7de4" (UID: "15a33776-cbde-4c55-9a3f-dc2e2cbd7de4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:17:01 crc kubenswrapper[4806]: I1125 15:17:01.859869 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15a33776-cbde-4c55-9a3f-dc2e2cbd7de4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "15a33776-cbde-4c55-9a3f-dc2e2cbd7de4" (UID: "15a33776-cbde-4c55-9a3f-dc2e2cbd7de4"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:17:01 crc kubenswrapper[4806]: I1125 15:17:01.874828 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15a33776-cbde-4c55-9a3f-dc2e2cbd7de4-kube-api-access-nsf7m" (OuterVolumeSpecName: "kube-api-access-nsf7m") pod "15a33776-cbde-4c55-9a3f-dc2e2cbd7de4" (UID: "15a33776-cbde-4c55-9a3f-dc2e2cbd7de4"). InnerVolumeSpecName "kube-api-access-nsf7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:17:01 crc kubenswrapper[4806]: I1125 15:17:01.874924 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15a33776-cbde-4c55-9a3f-dc2e2cbd7de4-scripts" (OuterVolumeSpecName: "scripts") pod "15a33776-cbde-4c55-9a3f-dc2e2cbd7de4" (UID: "15a33776-cbde-4c55-9a3f-dc2e2cbd7de4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:17:01 crc kubenswrapper[4806]: I1125 15:17:01.909498 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15a33776-cbde-4c55-9a3f-dc2e2cbd7de4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "15a33776-cbde-4c55-9a3f-dc2e2cbd7de4" (UID: "15a33776-cbde-4c55-9a3f-dc2e2cbd7de4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:17:01 crc kubenswrapper[4806]: I1125 15:17:01.961379 4806 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15a33776-cbde-4c55-9a3f-dc2e2cbd7de4-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:01 crc kubenswrapper[4806]: I1125 15:17:01.961413 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15a33776-cbde-4c55-9a3f-dc2e2cbd7de4-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:01 crc kubenswrapper[4806]: I1125 15:17:01.961425 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsf7m\" (UniqueName: \"kubernetes.io/projected/15a33776-cbde-4c55-9a3f-dc2e2cbd7de4-kube-api-access-nsf7m\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:01 crc kubenswrapper[4806]: I1125 15:17:01.961434 4806 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15a33776-cbde-4c55-9a3f-dc2e2cbd7de4-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:01 crc kubenswrapper[4806]: I1125 15:17:01.961442 4806 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15a33776-cbde-4c55-9a3f-dc2e2cbd7de4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:01 crc kubenswrapper[4806]: I1125 15:17:01.977925 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15a33776-cbde-4c55-9a3f-dc2e2cbd7de4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15a33776-cbde-4c55-9a3f-dc2e2cbd7de4" (UID: "15a33776-cbde-4c55-9a3f-dc2e2cbd7de4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:17:01 crc kubenswrapper[4806]: I1125 15:17:01.988731 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15a33776-cbde-4c55-9a3f-dc2e2cbd7de4-config-data" (OuterVolumeSpecName: "config-data") pod "15a33776-cbde-4c55-9a3f-dc2e2cbd7de4" (UID: "15a33776-cbde-4c55-9a3f-dc2e2cbd7de4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.063511 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15a33776-cbde-4c55-9a3f-dc2e2cbd7de4-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.063542 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15a33776-cbde-4c55-9a3f-dc2e2cbd7de4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.168238 4806 generic.go:334] "Generic (PLEG): container finished" podID="15a33776-cbde-4c55-9a3f-dc2e2cbd7de4" containerID="67688ea7aca6f8e7fab45c4aa700a6ec400aad324b872777f1d0a8e3dbba19d1" exitCode=0 Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.168370 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.168367 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15a33776-cbde-4c55-9a3f-dc2e2cbd7de4","Type":"ContainerDied","Data":"67688ea7aca6f8e7fab45c4aa700a6ec400aad324b872777f1d0a8e3dbba19d1"} Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.168436 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15a33776-cbde-4c55-9a3f-dc2e2cbd7de4","Type":"ContainerDied","Data":"4325d635abde4019d1c07cec8d2275ada327a20e2ccee07e83ad7c6f57900749"} Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.168457 4806 scope.go:117] "RemoveContainer" containerID="c24cb537bae22f4fdf6eb0488cba3c907629150ede187e72c22858eac7ed18ad" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.174005 4806 generic.go:334] "Generic (PLEG): container finished" podID="23ba80fd-113a-4a97-bca6-2348a1aa4917" containerID="a942ad3f505747fa608ab453fe618393954bc7f8eef61b1a305b5ef9d5505032" exitCode=0 Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.174134 4806 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.174153 4806 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.174123 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-777b956f44-6v6r5" event={"ID":"23ba80fd-113a-4a97-bca6-2348a1aa4917","Type":"ContainerDied","Data":"a942ad3f505747fa608ab453fe618393954bc7f8eef61b1a305b5ef9d5505032"} Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.195508 4806 scope.go:117] "RemoveContainer" containerID="dbd1f9a6a26587712585a1410ea494a9edf03ddb006b63afdb9a1cbeec299eb8" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.216136 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.241168 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.244208 4806 scope.go:117] "RemoveContainer" containerID="2469ac20e56f8ef2adda679c4d8ddc364bd8176d492c0c4228a2bf475688de91" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.255160 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:17:02 crc kubenswrapper[4806]: E1125 15:17:02.255790 4806 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="fd64b415-9694-483d-b17d-aceffd50763a" containerName="mariadb-database-create" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.255813 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd64b415-9694-483d-b17d-aceffd50763a" containerName="mariadb-database-create" Nov 25 15:17:02 crc kubenswrapper[4806]: E1125 15:17:02.255828 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7defc7dc-b7b6-4302-82ed-15edce4862b3" containerName="mariadb-account-create" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.255837 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="7defc7dc-b7b6-4302-82ed-15edce4862b3" containerName="mariadb-account-create" Nov 25 15:17:02 crc kubenswrapper[4806]: E1125 15:17:02.255851 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15a33776-cbde-4c55-9a3f-dc2e2cbd7de4" containerName="sg-core" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.255859 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="15a33776-cbde-4c55-9a3f-dc2e2cbd7de4" containerName="sg-core" Nov 25 15:17:02 crc kubenswrapper[4806]: E1125 15:17:02.255876 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15a33776-cbde-4c55-9a3f-dc2e2cbd7de4" containerName="ceilometer-central-agent" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.255883 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="15a33776-cbde-4c55-9a3f-dc2e2cbd7de4" containerName="ceilometer-central-agent" Nov 25 15:17:02 crc kubenswrapper[4806]: E1125 15:17:02.255894 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15a33776-cbde-4c55-9a3f-dc2e2cbd7de4" containerName="ceilometer-notification-agent" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.255902 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="15a33776-cbde-4c55-9a3f-dc2e2cbd7de4" containerName="ceilometer-notification-agent" Nov 25 15:17:02 crc kubenswrapper[4806]: E1125 15:17:02.255917 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6b52df6-253b-4082-8e20-dc729af9ce15" containerName="mariadb-account-create" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.255926 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6b52df6-253b-4082-8e20-dc729af9ce15" containerName="mariadb-account-create" Nov 25 15:17:02 crc kubenswrapper[4806]: E1125 15:17:02.255945 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="325b6686-f8e5-4ba8-b274-7e3508888807" containerName="mariadb-database-create" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.255954 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="325b6686-f8e5-4ba8-b274-7e3508888807" containerName="mariadb-database-create" Nov 25 15:17:02 crc kubenswrapper[4806]: E1125 15:17:02.255968 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e92cdcb-b78b-47cb-ba65-9167485d9795" containerName="mariadb-account-create" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.255976 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e92cdcb-b78b-47cb-ba65-9167485d9795" containerName="mariadb-account-create" Nov 25 15:17:02 crc kubenswrapper[4806]: E1125 15:17:02.255989 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05f719ae-33a1-44c1-9f80-2d7f644e34c2" containerName="init" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.255997 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="05f719ae-33a1-44c1-9f80-2d7f644e34c2" containerName="init" Nov 25 15:17:02 
crc kubenswrapper[4806]: E1125 15:17:02.256029 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05f719ae-33a1-44c1-9f80-2d7f644e34c2" containerName="dnsmasq-dns" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.256036 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="05f719ae-33a1-44c1-9f80-2d7f644e34c2" containerName="dnsmasq-dns" Nov 25 15:17:02 crc kubenswrapper[4806]: E1125 15:17:02.256050 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15a33776-cbde-4c55-9a3f-dc2e2cbd7de4" containerName="proxy-httpd" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.256057 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="15a33776-cbde-4c55-9a3f-dc2e2cbd7de4" containerName="proxy-httpd" Nov 25 15:17:02 crc kubenswrapper[4806]: E1125 15:17:02.256077 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7b2aa87-f218-472e-a8e8-7fe0eaf3b7cd" containerName="mariadb-database-create" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.256085 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7b2aa87-f218-472e-a8e8-7fe0eaf3b7cd" containerName="mariadb-database-create" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.256310 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7b2aa87-f218-472e-a8e8-7fe0eaf3b7cd" containerName="mariadb-database-create" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.256363 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="7defc7dc-b7b6-4302-82ed-15edce4862b3" containerName="mariadb-account-create" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.256384 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd64b415-9694-483d-b17d-aceffd50763a" containerName="mariadb-database-create" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.256395 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="15a33776-cbde-4c55-9a3f-dc2e2cbd7de4" containerName="ceilometer-notification-agent" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.256410 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="05f719ae-33a1-44c1-9f80-2d7f644e34c2" containerName="dnsmasq-dns" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.256421 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e92cdcb-b78b-47cb-ba65-9167485d9795" containerName="mariadb-account-create" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.256435 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6b52df6-253b-4082-8e20-dc729af9ce15" containerName="mariadb-account-create" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.256448 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="15a33776-cbde-4c55-9a3f-dc2e2cbd7de4" containerName="ceilometer-central-agent" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.256461 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="15a33776-cbde-4c55-9a3f-dc2e2cbd7de4" containerName="sg-core" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.256473 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="325b6686-f8e5-4ba8-b274-7e3508888807" containerName="mariadb-database-create" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.256481 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="15a33776-cbde-4c55-9a3f-dc2e2cbd7de4" containerName="proxy-httpd" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.258961 4806 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.262228 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.262861 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.269572 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f-log-httpd\") pod \"ceilometer-0\" (UID: \"dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f\") " pod="openstack/ceilometer-0" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.269648 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cmwc\" (UniqueName: \"kubernetes.io/projected/dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f-kube-api-access-2cmwc\") pod \"ceilometer-0\" (UID: \"dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f\") " pod="openstack/ceilometer-0" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.269709 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f-run-httpd\") pod \"ceilometer-0\" (UID: \"dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f\") " pod="openstack/ceilometer-0" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.269774 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f-scripts\") pod \"ceilometer-0\" (UID: \"dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f\") " pod="openstack/ceilometer-0" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.269806 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f-config-data\") pod \"ceilometer-0\" (UID: \"dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f\") " pod="openstack/ceilometer-0" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.269830 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f\") " pod="openstack/ceilometer-0" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.269871 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f\") " pod="openstack/ceilometer-0" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.272760 4806 scope.go:117] "RemoveContainer" containerID="67688ea7aca6f8e7fab45c4aa700a6ec400aad324b872777f1d0a8e3dbba19d1" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.273345 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.372052 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cmwc\" (UniqueName: 
\"kubernetes.io/projected/dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f-kube-api-access-2cmwc\") pod \"ceilometer-0\" (UID: \"dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f\") " pod="openstack/ceilometer-0" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.372147 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f-run-httpd\") pod \"ceilometer-0\" (UID: \"dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f\") " pod="openstack/ceilometer-0" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.372203 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f-scripts\") pod \"ceilometer-0\" (UID: \"dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f\") " pod="openstack/ceilometer-0" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.372226 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f-config-data\") pod \"ceilometer-0\" (UID: \"dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f\") " pod="openstack/ceilometer-0" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.372248 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f\") " pod="openstack/ceilometer-0" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.372294 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f\") " pod="openstack/ceilometer-0" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.372531 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f-log-httpd\") pod \"ceilometer-0\" (UID: \"dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f\") " pod="openstack/ceilometer-0" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.373049 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f-log-httpd\") pod \"ceilometer-0\" (UID: \"dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f\") " pod="openstack/ceilometer-0" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.376290 4806 scope.go:117] "RemoveContainer" containerID="c24cb537bae22f4fdf6eb0488cba3c907629150ede187e72c22858eac7ed18ad" Nov 25 15:17:02 crc kubenswrapper[4806]: E1125 15:17:02.378531 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c24cb537bae22f4fdf6eb0488cba3c907629150ede187e72c22858eac7ed18ad\": container with ID starting with c24cb537bae22f4fdf6eb0488cba3c907629150ede187e72c22858eac7ed18ad not found: ID does not exist" containerID="c24cb537bae22f4fdf6eb0488cba3c907629150ede187e72c22858eac7ed18ad" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.378584 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c24cb537bae22f4fdf6eb0488cba3c907629150ede187e72c22858eac7ed18ad"} err="failed to get container status 
\"c24cb537bae22f4fdf6eb0488cba3c907629150ede187e72c22858eac7ed18ad\": rpc error: code = NotFound desc = could not find container \"c24cb537bae22f4fdf6eb0488cba3c907629150ede187e72c22858eac7ed18ad\": container with ID starting with c24cb537bae22f4fdf6eb0488cba3c907629150ede187e72c22858eac7ed18ad not found: ID does not exist" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.378619 4806 scope.go:117] "RemoveContainer" containerID="dbd1f9a6a26587712585a1410ea494a9edf03ddb006b63afdb9a1cbeec299eb8" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.378984 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f\") " pod="openstack/ceilometer-0" Nov 25 15:17:02 crc kubenswrapper[4806]: E1125 15:17:02.380177 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbd1f9a6a26587712585a1410ea494a9edf03ddb006b63afdb9a1cbeec299eb8\": container with ID starting with dbd1f9a6a26587712585a1410ea494a9edf03ddb006b63afdb9a1cbeec299eb8 not found: ID does not exist" containerID="dbd1f9a6a26587712585a1410ea494a9edf03ddb006b63afdb9a1cbeec299eb8" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.380214 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbd1f9a6a26587712585a1410ea494a9edf03ddb006b63afdb9a1cbeec299eb8"} err="failed to get container status \"dbd1f9a6a26587712585a1410ea494a9edf03ddb006b63afdb9a1cbeec299eb8\": rpc error: code = NotFound desc = could not find container \"dbd1f9a6a26587712585a1410ea494a9edf03ddb006b63afdb9a1cbeec299eb8\": container with ID starting with dbd1f9a6a26587712585a1410ea494a9edf03ddb006b63afdb9a1cbeec299eb8 not found: ID does not exist" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.380235 4806 scope.go:117] "RemoveContainer" containerID="2469ac20e56f8ef2adda679c4d8ddc364bd8176d492c0c4228a2bf475688de91" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.380868 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f-config-data\") pod \"ceilometer-0\" (UID: \"dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f\") " pod="openstack/ceilometer-0" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.380926 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f-run-httpd\") pod \"ceilometer-0\" (UID: \"dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f\") " pod="openstack/ceilometer-0" Nov 25 15:17:02 crc kubenswrapper[4806]: E1125 15:17:02.381775 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2469ac20e56f8ef2adda679c4d8ddc364bd8176d492c0c4228a2bf475688de91\": container with ID starting with 2469ac20e56f8ef2adda679c4d8ddc364bd8176d492c0c4228a2bf475688de91 not found: ID does not exist" containerID="2469ac20e56f8ef2adda679c4d8ddc364bd8176d492c0c4228a2bf475688de91" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.381825 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2469ac20e56f8ef2adda679c4d8ddc364bd8176d492c0c4228a2bf475688de91"} err="failed to get container status 
\"2469ac20e56f8ef2adda679c4d8ddc364bd8176d492c0c4228a2bf475688de91\": rpc error: code = NotFound desc = could not find container \"2469ac20e56f8ef2adda679c4d8ddc364bd8176d492c0c4228a2bf475688de91\": container with ID starting with 2469ac20e56f8ef2adda679c4d8ddc364bd8176d492c0c4228a2bf475688de91 not found: ID does not exist" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.381844 4806 scope.go:117] "RemoveContainer" containerID="67688ea7aca6f8e7fab45c4aa700a6ec400aad324b872777f1d0a8e3dbba19d1" Nov 25 15:17:02 crc kubenswrapper[4806]: E1125 15:17:02.383694 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67688ea7aca6f8e7fab45c4aa700a6ec400aad324b872777f1d0a8e3dbba19d1\": container with ID starting with 67688ea7aca6f8e7fab45c4aa700a6ec400aad324b872777f1d0a8e3dbba19d1 not found: ID does not exist" containerID="67688ea7aca6f8e7fab45c4aa700a6ec400aad324b872777f1d0a8e3dbba19d1" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.383744 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67688ea7aca6f8e7fab45c4aa700a6ec400aad324b872777f1d0a8e3dbba19d1"} err="failed to get container status \"67688ea7aca6f8e7fab45c4aa700a6ec400aad324b872777f1d0a8e3dbba19d1\": rpc error: code = NotFound desc = could not find container \"67688ea7aca6f8e7fab45c4aa700a6ec400aad324b872777f1d0a8e3dbba19d1\": container with ID starting with 67688ea7aca6f8e7fab45c4aa700a6ec400aad324b872777f1d0a8e3dbba19d1 not found: ID does not exist" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.384883 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f-scripts\") pod \"ceilometer-0\" (UID: \"dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f\") " pod="openstack/ceilometer-0" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.384984 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f\") " pod="openstack/ceilometer-0" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.398305 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cmwc\" (UniqueName: \"kubernetes.io/projected/dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f-kube-api-access-2cmwc\") pod \"ceilometer-0\" (UID: \"dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f\") " pod="openstack/ceilometer-0" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.655534 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.758625 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 25 15:17:02 crc kubenswrapper[4806]: I1125 15:17:02.763120 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 25 15:17:03 crc kubenswrapper[4806]: I1125 15:17:03.209361 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:17:04 crc kubenswrapper[4806]: I1125 15:17:04.100046 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15a33776-cbde-4c55-9a3f-dc2e2cbd7de4" path="/var/lib/kubelet/pods/15a33776-cbde-4c55-9a3f-dc2e2cbd7de4/volumes" Nov 25 15:17:04 crc kubenswrapper[4806]: I1125 15:17:04.199858 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f","Type":"ContainerStarted","Data":"175dd9ca506eb5a8a7053b317776b9ba12de89595744e7dcc16e1f4f9b0a9bdc"} Nov 25 15:17:04 crc kubenswrapper[4806]: I1125 15:17:04.199916 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f","Type":"ContainerStarted","Data":"e69047e3f3efd8d51e6bf575b1a00fc404cf5991f9cd29c91c610ee2bdf46ed5"} Nov 25 15:17:04 crc kubenswrapper[4806]: I1125 15:17:04.340019 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9jknk"] Nov 25 15:17:04 crc kubenswrapper[4806]: I1125 15:17:04.341359 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-9jknk" Nov 25 15:17:04 crc kubenswrapper[4806]: I1125 15:17:04.357857 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 25 15:17:04 crc kubenswrapper[4806]: I1125 15:17:04.358515 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 25 15:17:04 crc kubenswrapper[4806]: I1125 15:17:04.368461 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9jknk"] Nov 25 15:17:04 crc kubenswrapper[4806]: I1125 15:17:04.376913 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-7rs57" Nov 25 15:17:04 crc kubenswrapper[4806]: I1125 15:17:04.422448 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/077d373d-365d-4520-8345-d6b636d212fd-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-9jknk\" (UID: \"077d373d-365d-4520-8345-d6b636d212fd\") " pod="openstack/nova-cell0-conductor-db-sync-9jknk" Nov 25 15:17:04 crc kubenswrapper[4806]: I1125 15:17:04.422509 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/077d373d-365d-4520-8345-d6b636d212fd-config-data\") pod \"nova-cell0-conductor-db-sync-9jknk\" (UID: \"077d373d-365d-4520-8345-d6b636d212fd\") " pod="openstack/nova-cell0-conductor-db-sync-9jknk" Nov 25 15:17:04 crc kubenswrapper[4806]: I1125 15:17:04.422572 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzqf7\" (UniqueName: 
\"kubernetes.io/projected/077d373d-365d-4520-8345-d6b636d212fd-kube-api-access-dzqf7\") pod \"nova-cell0-conductor-db-sync-9jknk\" (UID: \"077d373d-365d-4520-8345-d6b636d212fd\") " pod="openstack/nova-cell0-conductor-db-sync-9jknk" Nov 25 15:17:04 crc kubenswrapper[4806]: I1125 15:17:04.422644 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/077d373d-365d-4520-8345-d6b636d212fd-scripts\") pod \"nova-cell0-conductor-db-sync-9jknk\" (UID: \"077d373d-365d-4520-8345-d6b636d212fd\") " pod="openstack/nova-cell0-conductor-db-sync-9jknk" Nov 25 15:17:04 crc kubenswrapper[4806]: I1125 15:17:04.524581 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/077d373d-365d-4520-8345-d6b636d212fd-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-9jknk\" (UID: \"077d373d-365d-4520-8345-d6b636d212fd\") " pod="openstack/nova-cell0-conductor-db-sync-9jknk" Nov 25 15:17:04 crc kubenswrapper[4806]: I1125 15:17:04.524645 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/077d373d-365d-4520-8345-d6b636d212fd-config-data\") pod \"nova-cell0-conductor-db-sync-9jknk\" (UID: \"077d373d-365d-4520-8345-d6b636d212fd\") " pod="openstack/nova-cell0-conductor-db-sync-9jknk" Nov 25 15:17:04 crc kubenswrapper[4806]: I1125 15:17:04.524709 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzqf7\" (UniqueName: \"kubernetes.io/projected/077d373d-365d-4520-8345-d6b636d212fd-kube-api-access-dzqf7\") pod \"nova-cell0-conductor-db-sync-9jknk\" (UID: \"077d373d-365d-4520-8345-d6b636d212fd\") " pod="openstack/nova-cell0-conductor-db-sync-9jknk" Nov 25 15:17:04 crc kubenswrapper[4806]: I1125 15:17:04.524786 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/077d373d-365d-4520-8345-d6b636d212fd-scripts\") pod \"nova-cell0-conductor-db-sync-9jknk\" (UID: \"077d373d-365d-4520-8345-d6b636d212fd\") " pod="openstack/nova-cell0-conductor-db-sync-9jknk" Nov 25 15:17:04 crc kubenswrapper[4806]: I1125 15:17:04.534015 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/077d373d-365d-4520-8345-d6b636d212fd-scripts\") pod \"nova-cell0-conductor-db-sync-9jknk\" (UID: \"077d373d-365d-4520-8345-d6b636d212fd\") " pod="openstack/nova-cell0-conductor-db-sync-9jknk" Nov 25 15:17:04 crc kubenswrapper[4806]: I1125 15:17:04.556084 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/077d373d-365d-4520-8345-d6b636d212fd-config-data\") pod \"nova-cell0-conductor-db-sync-9jknk\" (UID: \"077d373d-365d-4520-8345-d6b636d212fd\") " pod="openstack/nova-cell0-conductor-db-sync-9jknk" Nov 25 15:17:04 crc kubenswrapper[4806]: I1125 15:17:04.560878 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzqf7\" (UniqueName: \"kubernetes.io/projected/077d373d-365d-4520-8345-d6b636d212fd-kube-api-access-dzqf7\") pod \"nova-cell0-conductor-db-sync-9jknk\" (UID: \"077d373d-365d-4520-8345-d6b636d212fd\") " pod="openstack/nova-cell0-conductor-db-sync-9jknk" Nov 25 15:17:04 crc kubenswrapper[4806]: I1125 15:17:04.568154 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/077d373d-365d-4520-8345-d6b636d212fd-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-9jknk\" (UID: \"077d373d-365d-4520-8345-d6b636d212fd\") " pod="openstack/nova-cell0-conductor-db-sync-9jknk" Nov 25 15:17:04 crc kubenswrapper[4806]: I1125 15:17:04.662633 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-9jknk" Nov 25 15:17:05 crc kubenswrapper[4806]: I1125 15:17:05.175782 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9jknk"] Nov 25 15:17:05 crc kubenswrapper[4806]: W1125 15:17:05.183268 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod077d373d_365d_4520_8345_d6b636d212fd.slice/crio-9d628cf24379b3c291eff8ac72a0a4bb6d1ce15fc9ebf3c1d049edc59f58c27e WatchSource:0}: Error finding container 9d628cf24379b3c291eff8ac72a0a4bb6d1ce15fc9ebf3c1d049edc59f58c27e: Status 404 returned error can't find the container with id 9d628cf24379b3c291eff8ac72a0a4bb6d1ce15fc9ebf3c1d049edc59f58c27e Nov 25 15:17:05 crc kubenswrapper[4806]: I1125 15:17:05.210406 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-9jknk" event={"ID":"077d373d-365d-4520-8345-d6b636d212fd","Type":"ContainerStarted","Data":"9d628cf24379b3c291eff8ac72a0a4bb6d1ce15fc9ebf3c1d049edc59f58c27e"} Nov 25 15:17:05 crc kubenswrapper[4806]: I1125 15:17:05.212681 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f","Type":"ContainerStarted","Data":"a127d692fa16610269e50bc9b71456b5891274589e0cffab7666c704ed85eb60"} Nov 25 15:17:06 crc kubenswrapper[4806]: I1125 15:17:06.234612 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f","Type":"ContainerStarted","Data":"8e8ac47fa73342842df6bc905fba02b18ea11efda569503147167e69d30d35dd"} Nov 25 15:17:06 crc kubenswrapper[4806]: I1125 15:17:06.869932 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 25 15:17:06 crc kubenswrapper[4806]: I1125 15:17:06.870034 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 25 15:17:06 crc kubenswrapper[4806]: I1125 15:17:06.911675 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 25 15:17:06 crc kubenswrapper[4806]: I1125 15:17:06.935459 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 25 15:17:07 crc kubenswrapper[4806]: I1125 15:17:07.258961 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f","Type":"ContainerStarted","Data":"4772e7a089afc65d61249994264da7eec42e4af4ad2811ad5677051d4fe87aa0"} Nov 25 15:17:07 crc kubenswrapper[4806]: I1125 15:17:07.259072 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 25 15:17:07 crc kubenswrapper[4806]: I1125 15:17:07.259481 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 25 15:17:07 crc kubenswrapper[4806]: I1125 15:17:07.259600 4806 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 15:17:07 crc kubenswrapper[4806]: I1125 15:17:07.283856 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.64097595 podStartE2EDuration="5.283833167s" podCreationTimestamp="2025-11-25 15:17:02 +0000 UTC" firstStartedPulling="2025-11-25 15:17:03.22718749 +0000 UTC m=+1455.879329901" lastFinishedPulling="2025-11-25 15:17:06.870044707 +0000 UTC m=+1459.522187118" observedRunningTime="2025-11-25 15:17:07.276820836 +0000 UTC m=+1459.928963257" watchObservedRunningTime="2025-11-25 15:17:07.283833167 +0000 UTC m=+1459.935975578" Nov 25 15:17:09 crc kubenswrapper[4806]: I1125 15:17:09.542170 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 25 15:17:09 crc kubenswrapper[4806]: I1125 15:17:09.542846 4806 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 15:17:09 crc kubenswrapper[4806]: I1125 15:17:09.546776 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 25 15:17:10 crc kubenswrapper[4806]: I1125 15:17:10.464265 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:17:10 crc kubenswrapper[4806]: I1125 15:17:10.464508 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f" containerName="ceilometer-central-agent" containerID="cri-o://175dd9ca506eb5a8a7053b317776b9ba12de89595744e7dcc16e1f4f9b0a9bdc" gracePeriod=30 Nov 25 15:17:10 crc kubenswrapper[4806]: I1125 15:17:10.464629 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f" containerName="proxy-httpd" containerID="cri-o://4772e7a089afc65d61249994264da7eec42e4af4ad2811ad5677051d4fe87aa0" gracePeriod=30 Nov 25 15:17:10 crc kubenswrapper[4806]: I1125 15:17:10.464669 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f" containerName="sg-core" containerID="cri-o://8e8ac47fa73342842df6bc905fba02b18ea11efda569503147167e69d30d35dd" gracePeriod=30 Nov 25 15:17:10 crc kubenswrapper[4806]: I1125 15:17:10.464701 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f" containerName="ceilometer-notification-agent" containerID="cri-o://a127d692fa16610269e50bc9b71456b5891274589e0cffab7666c704ed85eb60" gracePeriod=30 Nov 25 15:17:10 crc kubenswrapper[4806]: I1125 15:17:10.953742 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-77qk4" podUID="19d636cf-e82d-48c3-82db-321f0505c5ab" containerName="registry-server" probeResult="failure" output=< Nov 25 15:17:10 crc kubenswrapper[4806]: timeout: failed to connect service ":50051" within 1s Nov 25 15:17:10 crc kubenswrapper[4806]: > Nov 25 15:17:11 crc kubenswrapper[4806]: I1125 15:17:11.322196 4806 generic.go:334] "Generic (PLEG): container finished" podID="dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f" containerID="4772e7a089afc65d61249994264da7eec42e4af4ad2811ad5677051d4fe87aa0" exitCode=0 Nov 25 15:17:11 crc kubenswrapper[4806]: I1125 15:17:11.322233 4806 generic.go:334] "Generic (PLEG): container finished" 
podID="dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f" containerID="8e8ac47fa73342842df6bc905fba02b18ea11efda569503147167e69d30d35dd" exitCode=2 Nov 25 15:17:11 crc kubenswrapper[4806]: I1125 15:17:11.322243 4806 generic.go:334] "Generic (PLEG): container finished" podID="dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f" containerID="a127d692fa16610269e50bc9b71456b5891274589e0cffab7666c704ed85eb60" exitCode=0 Nov 25 15:17:11 crc kubenswrapper[4806]: I1125 15:17:11.322253 4806 generic.go:334] "Generic (PLEG): container finished" podID="dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f" containerID="175dd9ca506eb5a8a7053b317776b9ba12de89595744e7dcc16e1f4f9b0a9bdc" exitCode=0 Nov 25 15:17:11 crc kubenswrapper[4806]: I1125 15:17:11.322287 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f","Type":"ContainerDied","Data":"4772e7a089afc65d61249994264da7eec42e4af4ad2811ad5677051d4fe87aa0"} Nov 25 15:17:11 crc kubenswrapper[4806]: I1125 15:17:11.322347 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f","Type":"ContainerDied","Data":"8e8ac47fa73342842df6bc905fba02b18ea11efda569503147167e69d30d35dd"} Nov 25 15:17:11 crc kubenswrapper[4806]: I1125 15:17:11.322357 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f","Type":"ContainerDied","Data":"a127d692fa16610269e50bc9b71456b5891274589e0cffab7666c704ed85eb60"} Nov 25 15:17:11 crc kubenswrapper[4806]: I1125 15:17:11.322369 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f","Type":"ContainerDied","Data":"175dd9ca506eb5a8a7053b317776b9ba12de89595744e7dcc16e1f4f9b0a9bdc"} Nov 25 15:17:13 crc kubenswrapper[4806]: I1125 15:17:13.345925 4806 generic.go:334] "Generic (PLEG): container finished" podID="23ba80fd-113a-4a97-bca6-2348a1aa4917" containerID="1cedb05810f06eea5884c14673600d408d6e60bc9da95e0848407dc26166bd52" exitCode=0 Nov 25 15:17:13 crc kubenswrapper[4806]: I1125 15:17:13.346029 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-777b956f44-6v6r5" event={"ID":"23ba80fd-113a-4a97-bca6-2348a1aa4917","Type":"ContainerDied","Data":"1cedb05810f06eea5884c14673600d408d6e60bc9da95e0848407dc26166bd52"} Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.170662 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.179617 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-777b956f44-6v6r5" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.279338 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/23ba80fd-113a-4a97-bca6-2348a1aa4917-config\") pod \"23ba80fd-113a-4a97-bca6-2348a1aa4917\" (UID: \"23ba80fd-113a-4a97-bca6-2348a1aa4917\") " Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.279421 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f-scripts\") pod \"dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f\" (UID: \"dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f\") " Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.279456 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f-config-data\") pod \"dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f\" (UID: \"dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f\") " Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.279508 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23ba80fd-113a-4a97-bca6-2348a1aa4917-combined-ca-bundle\") pod \"23ba80fd-113a-4a97-bca6-2348a1aa4917\" (UID: \"23ba80fd-113a-4a97-bca6-2348a1aa4917\") " Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.279574 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cmwc\" (UniqueName: \"kubernetes.io/projected/dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f-kube-api-access-2cmwc\") pod \"dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f\" (UID: \"dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f\") " Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.279834 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f-log-httpd\") pod \"dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f\" (UID: \"dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f\") " Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.280091 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8b8t9\" (UniqueName: \"kubernetes.io/projected/23ba80fd-113a-4a97-bca6-2348a1aa4917-kube-api-access-8b8t9\") pod \"23ba80fd-113a-4a97-bca6-2348a1aa4917\" (UID: \"23ba80fd-113a-4a97-bca6-2348a1aa4917\") " Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.280238 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/23ba80fd-113a-4a97-bca6-2348a1aa4917-ovndb-tls-certs\") pod \"23ba80fd-113a-4a97-bca6-2348a1aa4917\" (UID: \"23ba80fd-113a-4a97-bca6-2348a1aa4917\") " Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.280308 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f-sg-core-conf-yaml\") pod \"dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f\" (UID: \"dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f\") " Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.280445 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f" (UID: 
"dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.280482 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/23ba80fd-113a-4a97-bca6-2348a1aa4917-httpd-config\") pod \"23ba80fd-113a-4a97-bca6-2348a1aa4917\" (UID: \"23ba80fd-113a-4a97-bca6-2348a1aa4917\") " Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.280563 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f-combined-ca-bundle\") pod \"dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f\" (UID: \"dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f\") " Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.280602 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f-run-httpd\") pod \"dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f\" (UID: \"dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f\") " Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.281240 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f" (UID: "dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.282147 4806 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.282170 4806 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.286418 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f-kube-api-access-2cmwc" (OuterVolumeSpecName: "kube-api-access-2cmwc") pod "dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f" (UID: "dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f"). InnerVolumeSpecName "kube-api-access-2cmwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.287486 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23ba80fd-113a-4a97-bca6-2348a1aa4917-kube-api-access-8b8t9" (OuterVolumeSpecName: "kube-api-access-8b8t9") pod "23ba80fd-113a-4a97-bca6-2348a1aa4917" (UID: "23ba80fd-113a-4a97-bca6-2348a1aa4917"). InnerVolumeSpecName "kube-api-access-8b8t9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.287648 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f-scripts" (OuterVolumeSpecName: "scripts") pod "dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f" (UID: "dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.288477 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23ba80fd-113a-4a97-bca6-2348a1aa4917-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "23ba80fd-113a-4a97-bca6-2348a1aa4917" (UID: "23ba80fd-113a-4a97-bca6-2348a1aa4917"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.316878 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f" (UID: "dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.337435 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23ba80fd-113a-4a97-bca6-2348a1aa4917-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "23ba80fd-113a-4a97-bca6-2348a1aa4917" (UID: "23ba80fd-113a-4a97-bca6-2348a1aa4917"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.342920 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23ba80fd-113a-4a97-bca6-2348a1aa4917-config" (OuterVolumeSpecName: "config") pod "23ba80fd-113a-4a97-bca6-2348a1aa4917" (UID: "23ba80fd-113a-4a97-bca6-2348a1aa4917"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.374525 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f","Type":"ContainerDied","Data":"e69047e3f3efd8d51e6bf575b1a00fc404cf5991f9cd29c91c610ee2bdf46ed5"} Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.374618 4806 scope.go:117] "RemoveContainer" containerID="4772e7a089afc65d61249994264da7eec42e4af4ad2811ad5677051d4fe87aa0" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.374724 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.379001 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-777b956f44-6v6r5" event={"ID":"23ba80fd-113a-4a97-bca6-2348a1aa4917","Type":"ContainerDied","Data":"a3c178a3c5961b4ed247877b789c7a2716482c1cec3bf4c2039a1e60de34eb1e"} Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.379063 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-777b956f44-6v6r5" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.380214 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f" (UID: "dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.384804 4806 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.384846 4806 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/23ba80fd-113a-4a97-bca6-2348a1aa4917-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.384859 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.384872 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/23ba80fd-113a-4a97-bca6-2348a1aa4917-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.384886 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.384896 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23ba80fd-113a-4a97-bca6-2348a1aa4917-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.384907 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cmwc\" (UniqueName: \"kubernetes.io/projected/dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f-kube-api-access-2cmwc\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.384922 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8b8t9\" (UniqueName: \"kubernetes.io/projected/23ba80fd-113a-4a97-bca6-2348a1aa4917-kube-api-access-8b8t9\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.396561 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23ba80fd-113a-4a97-bca6-2348a1aa4917-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "23ba80fd-113a-4a97-bca6-2348a1aa4917" (UID: "23ba80fd-113a-4a97-bca6-2348a1aa4917"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.405916 4806 scope.go:117] "RemoveContainer" containerID="8e8ac47fa73342842df6bc905fba02b18ea11efda569503147167e69d30d35dd" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.413482 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f-config-data" (OuterVolumeSpecName: "config-data") pod "dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f" (UID: "dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.432642 4806 scope.go:117] "RemoveContainer" containerID="a127d692fa16610269e50bc9b71456b5891274589e0cffab7666c704ed85eb60" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.465327 4806 scope.go:117] "RemoveContainer" containerID="175dd9ca506eb5a8a7053b317776b9ba12de89595744e7dcc16e1f4f9b0a9bdc" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.487074 4806 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/23ba80fd-113a-4a97-bca6-2348a1aa4917-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.487108 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.708202 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.717135 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.726240 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-777b956f44-6v6r5"] Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.734816 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-777b956f44-6v6r5"] Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.746288 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:17:15 crc kubenswrapper[4806]: E1125 15:17:15.746766 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f" containerName="ceilometer-central-agent" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.746961 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f" containerName="ceilometer-central-agent" Nov 25 15:17:15 crc kubenswrapper[4806]: E1125 15:17:15.747001 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23ba80fd-113a-4a97-bca6-2348a1aa4917" containerName="neutron-api" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.747011 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="23ba80fd-113a-4a97-bca6-2348a1aa4917" containerName="neutron-api" Nov 25 15:17:15 crc kubenswrapper[4806]: E1125 15:17:15.747031 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f" containerName="sg-core" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.747039 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f" containerName="sg-core" Nov 25 15:17:15 crc kubenswrapper[4806]: E1125 15:17:15.747049 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f" containerName="proxy-httpd" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.747058 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f" containerName="proxy-httpd" Nov 25 15:17:15 crc kubenswrapper[4806]: E1125 15:17:15.747079 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23ba80fd-113a-4a97-bca6-2348a1aa4917" containerName="neutron-httpd" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.747087 4806 
state_mem.go:107] "Deleted CPUSet assignment" podUID="23ba80fd-113a-4a97-bca6-2348a1aa4917" containerName="neutron-httpd" Nov 25 15:17:15 crc kubenswrapper[4806]: E1125 15:17:15.747106 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f" containerName="ceilometer-notification-agent" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.747113 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f" containerName="ceilometer-notification-agent" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.747368 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="23ba80fd-113a-4a97-bca6-2348a1aa4917" containerName="neutron-api" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.747385 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="23ba80fd-113a-4a97-bca6-2348a1aa4917" containerName="neutron-httpd" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.747399 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f" containerName="ceilometer-notification-agent" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.747411 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f" containerName="proxy-httpd" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.747437 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f" containerName="sg-core" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.747451 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f" containerName="ceilometer-central-agent" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.749501 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.751843 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.752494 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.759820 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.792863 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/411ed211-dc78-448f-8088-0822409b2a9f-config-data\") pod \"ceilometer-0\" (UID: \"411ed211-dc78-448f-8088-0822409b2a9f\") " pod="openstack/ceilometer-0" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.792914 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/411ed211-dc78-448f-8088-0822409b2a9f-log-httpd\") pod \"ceilometer-0\" (UID: \"411ed211-dc78-448f-8088-0822409b2a9f\") " pod="openstack/ceilometer-0" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.792969 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/411ed211-dc78-448f-8088-0822409b2a9f-run-httpd\") pod \"ceilometer-0\" (UID: \"411ed211-dc78-448f-8088-0822409b2a9f\") " pod="openstack/ceilometer-0" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.793083 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/411ed211-dc78-448f-8088-0822409b2a9f-scripts\") pod \"ceilometer-0\" (UID: \"411ed211-dc78-448f-8088-0822409b2a9f\") " pod="openstack/ceilometer-0" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.793125 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/411ed211-dc78-448f-8088-0822409b2a9f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"411ed211-dc78-448f-8088-0822409b2a9f\") " pod="openstack/ceilometer-0" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.793178 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qh4bm\" (UniqueName: \"kubernetes.io/projected/411ed211-dc78-448f-8088-0822409b2a9f-kube-api-access-qh4bm\") pod \"ceilometer-0\" (UID: \"411ed211-dc78-448f-8088-0822409b2a9f\") " pod="openstack/ceilometer-0" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.793287 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/411ed211-dc78-448f-8088-0822409b2a9f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"411ed211-dc78-448f-8088-0822409b2a9f\") " pod="openstack/ceilometer-0" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.895492 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/411ed211-dc78-448f-8088-0822409b2a9f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"411ed211-dc78-448f-8088-0822409b2a9f\") " pod="openstack/ceilometer-0" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 
15:17:15.896057 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/411ed211-dc78-448f-8088-0822409b2a9f-config-data\") pod \"ceilometer-0\" (UID: \"411ed211-dc78-448f-8088-0822409b2a9f\") " pod="openstack/ceilometer-0" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.896205 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/411ed211-dc78-448f-8088-0822409b2a9f-log-httpd\") pod \"ceilometer-0\" (UID: \"411ed211-dc78-448f-8088-0822409b2a9f\") " pod="openstack/ceilometer-0" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.896602 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/411ed211-dc78-448f-8088-0822409b2a9f-run-httpd\") pod \"ceilometer-0\" (UID: \"411ed211-dc78-448f-8088-0822409b2a9f\") " pod="openstack/ceilometer-0" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.896860 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/411ed211-dc78-448f-8088-0822409b2a9f-scripts\") pod \"ceilometer-0\" (UID: \"411ed211-dc78-448f-8088-0822409b2a9f\") " pod="openstack/ceilometer-0" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.897067 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/411ed211-dc78-448f-8088-0822409b2a9f-log-httpd\") pod \"ceilometer-0\" (UID: \"411ed211-dc78-448f-8088-0822409b2a9f\") " pod="openstack/ceilometer-0" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.897308 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/411ed211-dc78-448f-8088-0822409b2a9f-run-httpd\") pod \"ceilometer-0\" (UID: \"411ed211-dc78-448f-8088-0822409b2a9f\") " pod="openstack/ceilometer-0" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.897097 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/411ed211-dc78-448f-8088-0822409b2a9f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"411ed211-dc78-448f-8088-0822409b2a9f\") " pod="openstack/ceilometer-0" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.898145 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qh4bm\" (UniqueName: \"kubernetes.io/projected/411ed211-dc78-448f-8088-0822409b2a9f-kube-api-access-qh4bm\") pod \"ceilometer-0\" (UID: \"411ed211-dc78-448f-8088-0822409b2a9f\") " pod="openstack/ceilometer-0" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.901629 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/411ed211-dc78-448f-8088-0822409b2a9f-config-data\") pod \"ceilometer-0\" (UID: \"411ed211-dc78-448f-8088-0822409b2a9f\") " pod="openstack/ceilometer-0" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.901708 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/411ed211-dc78-448f-8088-0822409b2a9f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"411ed211-dc78-448f-8088-0822409b2a9f\") " pod="openstack/ceilometer-0" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.902335 4806 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/411ed211-dc78-448f-8088-0822409b2a9f-scripts\") pod \"ceilometer-0\" (UID: \"411ed211-dc78-448f-8088-0822409b2a9f\") " pod="openstack/ceilometer-0" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.902545 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/411ed211-dc78-448f-8088-0822409b2a9f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"411ed211-dc78-448f-8088-0822409b2a9f\") " pod="openstack/ceilometer-0" Nov 25 15:17:15 crc kubenswrapper[4806]: I1125 15:17:15.922666 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qh4bm\" (UniqueName: \"kubernetes.io/projected/411ed211-dc78-448f-8088-0822409b2a9f-kube-api-access-qh4bm\") pod \"ceilometer-0\" (UID: \"411ed211-dc78-448f-8088-0822409b2a9f\") " pod="openstack/ceilometer-0" Nov 25 15:17:16 crc kubenswrapper[4806]: I1125 15:17:16.069709 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:17:16 crc kubenswrapper[4806]: I1125 15:17:16.102272 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23ba80fd-113a-4a97-bca6-2348a1aa4917" path="/var/lib/kubelet/pods/23ba80fd-113a-4a97-bca6-2348a1aa4917/volumes" Nov 25 15:17:16 crc kubenswrapper[4806]: I1125 15:17:16.102957 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f" path="/var/lib/kubelet/pods/dd4c56d0-5775-4c3f-ab6d-55043bfd9d0f/volumes" Nov 25 15:17:16 crc kubenswrapper[4806]: I1125 15:17:16.823851 4806 scope.go:117] "RemoveContainer" containerID="a942ad3f505747fa608ab453fe618393954bc7f8eef61b1a305b5ef9d5505032" Nov 25 15:17:16 crc kubenswrapper[4806]: I1125 15:17:16.940346 4806 scope.go:117] "RemoveContainer" containerID="1cedb05810f06eea5884c14673600d408d6e60bc9da95e0848407dc26166bd52" Nov 25 15:17:17 crc kubenswrapper[4806]: I1125 15:17:17.255663 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="359539be-7a7d-48d3-8738-83765f897fa4" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.167:9292/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 25 15:17:17 crc kubenswrapper[4806]: I1125 15:17:17.260647 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="359539be-7a7d-48d3-8738-83765f897fa4" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.167:9292/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 25 15:17:17 crc kubenswrapper[4806]: W1125 15:17:17.444461 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod411ed211_dc78_448f_8088_0822409b2a9f.slice/crio-8192650e341a460ace7dccd09a174c14b77fc97802f54bcbb1388b64e6cf5e1b WatchSource:0}: Error finding container 8192650e341a460ace7dccd09a174c14b77fc97802f54bcbb1388b64e6cf5e1b: Status 404 returned error can't find the container with id 8192650e341a460ace7dccd09a174c14b77fc97802f54bcbb1388b64e6cf5e1b Nov 25 15:17:17 crc kubenswrapper[4806]: I1125 15:17:17.446502 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:17:18 crc kubenswrapper[4806]: I1125 
15:17:18.427345 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"411ed211-dc78-448f-8088-0822409b2a9f","Type":"ContainerStarted","Data":"8192650e341a460ace7dccd09a174c14b77fc97802f54bcbb1388b64e6cf5e1b"} Nov 25 15:17:18 crc kubenswrapper[4806]: I1125 15:17:18.429102 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-9jknk" event={"ID":"077d373d-365d-4520-8345-d6b636d212fd","Type":"ContainerStarted","Data":"db04c7ca2ad0df7c98b812b1531ef2caeaa1884ea73fc8e07fc98d3c06e0e5d0"} Nov 25 15:17:18 crc kubenswrapper[4806]: I1125 15:17:18.951944 4806 patch_prober.go:28] interesting pod/machine-config-daemon-kclf8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:17:18 crc kubenswrapper[4806]: I1125 15:17:18.952282 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:17:19 crc kubenswrapper[4806]: I1125 15:17:19.472562 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-9jknk" podStartSLOduration=3.707643954 podStartE2EDuration="15.472540368s" podCreationTimestamp="2025-11-25 15:17:04 +0000 UTC" firstStartedPulling="2025-11-25 15:17:05.184879407 +0000 UTC m=+1457.837021818" lastFinishedPulling="2025-11-25 15:17:16.949775821 +0000 UTC m=+1469.601918232" observedRunningTime="2025-11-25 15:17:19.460779001 +0000 UTC m=+1472.112921412" watchObservedRunningTime="2025-11-25 15:17:19.472540368 +0000 UTC m=+1472.124682789" Nov 25 15:17:20 crc kubenswrapper[4806]: I1125 15:17:20.449360 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"411ed211-dc78-448f-8088-0822409b2a9f","Type":"ContainerStarted","Data":"e0ce3269037591e10d6869e6e44817b68dbbc987f97d29b2f8eb6228e3d4a90b"} Nov 25 15:17:20 crc kubenswrapper[4806]: I1125 15:17:20.937394 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-77qk4" podUID="19d636cf-e82d-48c3-82db-321f0505c5ab" containerName="registry-server" probeResult="failure" output=< Nov 25 15:17:20 crc kubenswrapper[4806]: timeout: failed to connect service ":50051" within 1s Nov 25 15:17:20 crc kubenswrapper[4806]: > Nov 25 15:17:25 crc kubenswrapper[4806]: I1125 15:17:25.499075 4806 generic.go:334] "Generic (PLEG): container finished" podID="3de7f512-f839-4abf-9ffa-e7d70ba8eac2" containerID="905d71c1d05052a99a6229a1b8e71d25e32171baa3d1c2c50937c40bdfd49a66" exitCode=137 Nov 25 15:17:25 crc kubenswrapper[4806]: I1125 15:17:25.499266 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"3de7f512-f839-4abf-9ffa-e7d70ba8eac2","Type":"ContainerDied","Data":"905d71c1d05052a99a6229a1b8e71d25e32171baa3d1c2c50937c40bdfd49a66"} Nov 25 15:17:26 crc kubenswrapper[4806]: I1125 15:17:26.511506 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"411ed211-dc78-448f-8088-0822409b2a9f","Type":"ContainerStarted","Data":"85de57fc172bc306686c8b4114787d3023e4287b54f0f961edebb1d03e08d1f9"} Nov 25 15:17:27 crc 
kubenswrapper[4806]: I1125 15:17:27.180944 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Nov 25 15:17:27 crc kubenswrapper[4806]: I1125 15:17:27.279045 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/3de7f512-f839-4abf-9ffa-e7d70ba8eac2-certs\") pod \"3de7f512-f839-4abf-9ffa-e7d70ba8eac2\" (UID: \"3de7f512-f839-4abf-9ffa-e7d70ba8eac2\") " Nov 25 15:17:27 crc kubenswrapper[4806]: I1125 15:17:27.279453 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de7f512-f839-4abf-9ffa-e7d70ba8eac2-combined-ca-bundle\") pod \"3de7f512-f839-4abf-9ffa-e7d70ba8eac2\" (UID: \"3de7f512-f839-4abf-9ffa-e7d70ba8eac2\") " Nov 25 15:17:27 crc kubenswrapper[4806]: I1125 15:17:27.279609 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q68r6\" (UniqueName: \"kubernetes.io/projected/3de7f512-f839-4abf-9ffa-e7d70ba8eac2-kube-api-access-q68r6\") pod \"3de7f512-f839-4abf-9ffa-e7d70ba8eac2\" (UID: \"3de7f512-f839-4abf-9ffa-e7d70ba8eac2\") " Nov 25 15:17:27 crc kubenswrapper[4806]: I1125 15:17:27.279687 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3de7f512-f839-4abf-9ffa-e7d70ba8eac2-config-data-custom\") pod \"3de7f512-f839-4abf-9ffa-e7d70ba8eac2\" (UID: \"3de7f512-f839-4abf-9ffa-e7d70ba8eac2\") " Nov 25 15:17:27 crc kubenswrapper[4806]: I1125 15:17:27.280976 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3de7f512-f839-4abf-9ffa-e7d70ba8eac2-config-data\") pod \"3de7f512-f839-4abf-9ffa-e7d70ba8eac2\" (UID: \"3de7f512-f839-4abf-9ffa-e7d70ba8eac2\") " Nov 25 15:17:27 crc kubenswrapper[4806]: I1125 15:17:27.281124 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3de7f512-f839-4abf-9ffa-e7d70ba8eac2-scripts\") pod \"3de7f512-f839-4abf-9ffa-e7d70ba8eac2\" (UID: \"3de7f512-f839-4abf-9ffa-e7d70ba8eac2\") " Nov 25 15:17:27 crc kubenswrapper[4806]: I1125 15:17:27.287091 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3de7f512-f839-4abf-9ffa-e7d70ba8eac2-kube-api-access-q68r6" (OuterVolumeSpecName: "kube-api-access-q68r6") pod "3de7f512-f839-4abf-9ffa-e7d70ba8eac2" (UID: "3de7f512-f839-4abf-9ffa-e7d70ba8eac2"). InnerVolumeSpecName "kube-api-access-q68r6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:17:27 crc kubenswrapper[4806]: I1125 15:17:27.288638 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3de7f512-f839-4abf-9ffa-e7d70ba8eac2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3de7f512-f839-4abf-9ffa-e7d70ba8eac2" (UID: "3de7f512-f839-4abf-9ffa-e7d70ba8eac2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:17:27 crc kubenswrapper[4806]: I1125 15:17:27.291664 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3de7f512-f839-4abf-9ffa-e7d70ba8eac2-certs" (OuterVolumeSpecName: "certs") pod "3de7f512-f839-4abf-9ffa-e7d70ba8eac2" (UID: "3de7f512-f839-4abf-9ffa-e7d70ba8eac2"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:17:27 crc kubenswrapper[4806]: I1125 15:17:27.301499 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3de7f512-f839-4abf-9ffa-e7d70ba8eac2-scripts" (OuterVolumeSpecName: "scripts") pod "3de7f512-f839-4abf-9ffa-e7d70ba8eac2" (UID: "3de7f512-f839-4abf-9ffa-e7d70ba8eac2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:17:27 crc kubenswrapper[4806]: I1125 15:17:27.315544 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3de7f512-f839-4abf-9ffa-e7d70ba8eac2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3de7f512-f839-4abf-9ffa-e7d70ba8eac2" (UID: "3de7f512-f839-4abf-9ffa-e7d70ba8eac2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:17:27 crc kubenswrapper[4806]: I1125 15:17:27.327534 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3de7f512-f839-4abf-9ffa-e7d70ba8eac2-config-data" (OuterVolumeSpecName: "config-data") pod "3de7f512-f839-4abf-9ffa-e7d70ba8eac2" (UID: "3de7f512-f839-4abf-9ffa-e7d70ba8eac2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:17:27 crc kubenswrapper[4806]: I1125 15:17:27.384514 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3de7f512-f839-4abf-9ffa-e7d70ba8eac2-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:27 crc kubenswrapper[4806]: I1125 15:17:27.384553 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3de7f512-f839-4abf-9ffa-e7d70ba8eac2-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:27 crc kubenswrapper[4806]: I1125 15:17:27.384561 4806 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/3de7f512-f839-4abf-9ffa-e7d70ba8eac2-certs\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:27 crc kubenswrapper[4806]: I1125 15:17:27.384570 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de7f512-f839-4abf-9ffa-e7d70ba8eac2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:27 crc kubenswrapper[4806]: I1125 15:17:27.384581 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q68r6\" (UniqueName: \"kubernetes.io/projected/3de7f512-f839-4abf-9ffa-e7d70ba8eac2-kube-api-access-q68r6\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:27 crc kubenswrapper[4806]: I1125 15:17:27.384589 4806 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3de7f512-f839-4abf-9ffa-e7d70ba8eac2-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:27 crc kubenswrapper[4806]: I1125 15:17:27.525275 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"3de7f512-f839-4abf-9ffa-e7d70ba8eac2","Type":"ContainerDied","Data":"db8148e96c4d359180db0f393a711a4c4ab5e0aab3783b783561542a346a5db6"} Nov 25 15:17:27 crc kubenswrapper[4806]: I1125 15:17:27.525352 4806 scope.go:117] "RemoveContainer" containerID="905d71c1d05052a99a6229a1b8e71d25e32171baa3d1c2c50937c40bdfd49a66" Nov 25 15:17:27 crc kubenswrapper[4806]: I1125 15:17:27.525372 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-proc-0" Nov 25 15:17:27 crc kubenswrapper[4806]: I1125 15:17:27.571969 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"] Nov 25 15:17:27 crc kubenswrapper[4806]: I1125 15:17:27.583689 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-proc-0"] Nov 25 15:17:27 crc kubenswrapper[4806]: I1125 15:17:27.601174 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-proc-0"] Nov 25 15:17:27 crc kubenswrapper[4806]: E1125 15:17:27.601709 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3de7f512-f839-4abf-9ffa-e7d70ba8eac2" containerName="cloudkitty-proc" Nov 25 15:17:27 crc kubenswrapper[4806]: I1125 15:17:27.601734 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="3de7f512-f839-4abf-9ffa-e7d70ba8eac2" containerName="cloudkitty-proc" Nov 25 15:17:27 crc kubenswrapper[4806]: I1125 15:17:27.602033 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="3de7f512-f839-4abf-9ffa-e7d70ba8eac2" containerName="cloudkitty-proc" Nov 25 15:17:27 crc kubenswrapper[4806]: I1125 15:17:27.603162 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Nov 25 15:17:27 crc kubenswrapper[4806]: I1125 15:17:27.607545 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-proc-config-data" Nov 25 15:17:27 crc kubenswrapper[4806]: I1125 15:17:27.616418 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Nov 25 15:17:27 crc kubenswrapper[4806]: I1125 15:17:27.794192 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/7f3d1e2e-c63c-4c46-828b-189248646880-certs\") pod \"cloudkitty-proc-0\" (UID: \"7f3d1e2e-c63c-4c46-828b-189248646880\") " pod="openstack/cloudkitty-proc-0" Nov 25 15:17:27 crc kubenswrapper[4806]: I1125 15:17:27.794275 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7f3d1e2e-c63c-4c46-828b-189248646880-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"7f3d1e2e-c63c-4c46-828b-189248646880\") " pod="openstack/cloudkitty-proc-0" Nov 25 15:17:27 crc kubenswrapper[4806]: I1125 15:17:27.794318 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f3d1e2e-c63c-4c46-828b-189248646880-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"7f3d1e2e-c63c-4c46-828b-189248646880\") " pod="openstack/cloudkitty-proc-0" Nov 25 15:17:27 crc kubenswrapper[4806]: I1125 15:17:27.794411 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f3d1e2e-c63c-4c46-828b-189248646880-config-data\") pod \"cloudkitty-proc-0\" (UID: \"7f3d1e2e-c63c-4c46-828b-189248646880\") " pod="openstack/cloudkitty-proc-0" Nov 25 15:17:27 crc kubenswrapper[4806]: I1125 15:17:27.794430 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f3d1e2e-c63c-4c46-828b-189248646880-scripts\") pod \"cloudkitty-proc-0\" (UID: \"7f3d1e2e-c63c-4c46-828b-189248646880\") " pod="openstack/cloudkitty-proc-0" Nov 25 15:17:27 crc kubenswrapper[4806]: 
I1125 15:17:27.794452 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dwnr\" (UniqueName: \"kubernetes.io/projected/7f3d1e2e-c63c-4c46-828b-189248646880-kube-api-access-4dwnr\") pod \"cloudkitty-proc-0\" (UID: \"7f3d1e2e-c63c-4c46-828b-189248646880\") " pod="openstack/cloudkitty-proc-0" Nov 25 15:17:27 crc kubenswrapper[4806]: I1125 15:17:27.897267 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/7f3d1e2e-c63c-4c46-828b-189248646880-certs\") pod \"cloudkitty-proc-0\" (UID: \"7f3d1e2e-c63c-4c46-828b-189248646880\") " pod="openstack/cloudkitty-proc-0" Nov 25 15:17:27 crc kubenswrapper[4806]: I1125 15:17:27.897379 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7f3d1e2e-c63c-4c46-828b-189248646880-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"7f3d1e2e-c63c-4c46-828b-189248646880\") " pod="openstack/cloudkitty-proc-0" Nov 25 15:17:27 crc kubenswrapper[4806]: I1125 15:17:27.897432 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f3d1e2e-c63c-4c46-828b-189248646880-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"7f3d1e2e-c63c-4c46-828b-189248646880\") " pod="openstack/cloudkitty-proc-0" Nov 25 15:17:27 crc kubenswrapper[4806]: I1125 15:17:27.897492 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f3d1e2e-c63c-4c46-828b-189248646880-config-data\") pod \"cloudkitty-proc-0\" (UID: \"7f3d1e2e-c63c-4c46-828b-189248646880\") " pod="openstack/cloudkitty-proc-0" Nov 25 15:17:27 crc kubenswrapper[4806]: I1125 15:17:27.897516 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f3d1e2e-c63c-4c46-828b-189248646880-scripts\") pod \"cloudkitty-proc-0\" (UID: \"7f3d1e2e-c63c-4c46-828b-189248646880\") " pod="openstack/cloudkitty-proc-0" Nov 25 15:17:27 crc kubenswrapper[4806]: I1125 15:17:27.897547 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dwnr\" (UniqueName: \"kubernetes.io/projected/7f3d1e2e-c63c-4c46-828b-189248646880-kube-api-access-4dwnr\") pod \"cloudkitty-proc-0\" (UID: \"7f3d1e2e-c63c-4c46-828b-189248646880\") " pod="openstack/cloudkitty-proc-0" Nov 25 15:17:27 crc kubenswrapper[4806]: I1125 15:17:27.902405 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f3d1e2e-c63c-4c46-828b-189248646880-scripts\") pod \"cloudkitty-proc-0\" (UID: \"7f3d1e2e-c63c-4c46-828b-189248646880\") " pod="openstack/cloudkitty-proc-0" Nov 25 15:17:27 crc kubenswrapper[4806]: I1125 15:17:27.903552 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/7f3d1e2e-c63c-4c46-828b-189248646880-certs\") pod \"cloudkitty-proc-0\" (UID: \"7f3d1e2e-c63c-4c46-828b-189248646880\") " pod="openstack/cloudkitty-proc-0" Nov 25 15:17:27 crc kubenswrapper[4806]: I1125 15:17:27.904797 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7f3d1e2e-c63c-4c46-828b-189248646880-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"7f3d1e2e-c63c-4c46-828b-189248646880\") " 
pod="openstack/cloudkitty-proc-0" Nov 25 15:17:27 crc kubenswrapper[4806]: I1125 15:17:27.904955 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f3d1e2e-c63c-4c46-828b-189248646880-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"7f3d1e2e-c63c-4c46-828b-189248646880\") " pod="openstack/cloudkitty-proc-0" Nov 25 15:17:27 crc kubenswrapper[4806]: I1125 15:17:27.918399 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f3d1e2e-c63c-4c46-828b-189248646880-config-data\") pod \"cloudkitty-proc-0\" (UID: \"7f3d1e2e-c63c-4c46-828b-189248646880\") " pod="openstack/cloudkitty-proc-0" Nov 25 15:17:27 crc kubenswrapper[4806]: I1125 15:17:27.924124 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dwnr\" (UniqueName: \"kubernetes.io/projected/7f3d1e2e-c63c-4c46-828b-189248646880-kube-api-access-4dwnr\") pod \"cloudkitty-proc-0\" (UID: \"7f3d1e2e-c63c-4c46-828b-189248646880\") " pod="openstack/cloudkitty-proc-0" Nov 25 15:17:27 crc kubenswrapper[4806]: I1125 15:17:27.932960 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Nov 25 15:17:28 crc kubenswrapper[4806]: I1125 15:17:28.105684 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3de7f512-f839-4abf-9ffa-e7d70ba8eac2" path="/var/lib/kubelet/pods/3de7f512-f839-4abf-9ffa-e7d70ba8eac2/volumes" Nov 25 15:17:28 crc kubenswrapper[4806]: I1125 15:17:28.410739 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Nov 25 15:17:28 crc kubenswrapper[4806]: W1125 15:17:28.417041 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f3d1e2e_c63c_4c46_828b_189248646880.slice/crio-955a95f955b77f47e437680d1fed8efd98528d13f3c2d84ed08334017b2c8620 WatchSource:0}: Error finding container 955a95f955b77f47e437680d1fed8efd98528d13f3c2d84ed08334017b2c8620: Status 404 returned error can't find the container with id 955a95f955b77f47e437680d1fed8efd98528d13f3c2d84ed08334017b2c8620 Nov 25 15:17:28 crc kubenswrapper[4806]: I1125 15:17:28.539398 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"7f3d1e2e-c63c-4c46-828b-189248646880","Type":"ContainerStarted","Data":"955a95f955b77f47e437680d1fed8efd98528d13f3c2d84ed08334017b2c8620"} Nov 25 15:17:29 crc kubenswrapper[4806]: I1125 15:17:29.551980 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"7f3d1e2e-c63c-4c46-828b-189248646880","Type":"ContainerStarted","Data":"0cf98467b5a1106cd4c7ee203f7c43333d3037059f39aab1efc736b31aadce30"} Nov 25 15:17:29 crc kubenswrapper[4806]: I1125 15:17:29.577794 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-proc-0" podStartSLOduration=2.577768934 podStartE2EDuration="2.577768934s" podCreationTimestamp="2025-11-25 15:17:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:17:29.573028538 +0000 UTC m=+1482.225170949" watchObservedRunningTime="2025-11-25 15:17:29.577768934 +0000 UTC m=+1482.229911345" Nov 25 15:17:29 crc kubenswrapper[4806]: I1125 15:17:29.945380 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-77qk4" Nov 25 15:17:30 crc kubenswrapper[4806]: I1125 15:17:30.012808 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-77qk4" Nov 25 15:17:30 crc kubenswrapper[4806]: I1125 15:17:30.166156 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-77qk4"] Nov 25 15:17:30 crc kubenswrapper[4806]: I1125 15:17:30.192667 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fzfmm"] Nov 25 15:17:30 crc kubenswrapper[4806]: I1125 15:17:30.192960 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fzfmm" podUID="1ffc6ef5-d449-49bf-a92d-094be80c3999" containerName="registry-server" containerID="cri-o://798122962be8d4d1273575fc7176c4f813944f52092032a39a82179578b10f18" gracePeriod=2 Nov 25 15:17:30 crc kubenswrapper[4806]: E1125 15:17:30.334604 4806 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 798122962be8d4d1273575fc7176c4f813944f52092032a39a82179578b10f18 is running failed: container process not found" containerID="798122962be8d4d1273575fc7176c4f813944f52092032a39a82179578b10f18" cmd=["grpc_health_probe","-addr=:50051"] Nov 25 15:17:30 crc kubenswrapper[4806]: E1125 15:17:30.339610 4806 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 798122962be8d4d1273575fc7176c4f813944f52092032a39a82179578b10f18 is running failed: container process not found" containerID="798122962be8d4d1273575fc7176c4f813944f52092032a39a82179578b10f18" cmd=["grpc_health_probe","-addr=:50051"] Nov 25 15:17:30 crc kubenswrapper[4806]: E1125 15:17:30.341298 4806 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 798122962be8d4d1273575fc7176c4f813944f52092032a39a82179578b10f18 is running failed: container process not found" containerID="798122962be8d4d1273575fc7176c4f813944f52092032a39a82179578b10f18" cmd=["grpc_health_probe","-addr=:50051"] Nov 25 15:17:30 crc kubenswrapper[4806]: E1125 15:17:30.341385 4806 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 798122962be8d4d1273575fc7176c4f813944f52092032a39a82179578b10f18 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-fzfmm" podUID="1ffc6ef5-d449-49bf-a92d-094be80c3999" containerName="registry-server" Nov 25 15:17:30 crc kubenswrapper[4806]: I1125 15:17:30.602255 4806 generic.go:334] "Generic (PLEG): container finished" podID="1ffc6ef5-d449-49bf-a92d-094be80c3999" containerID="798122962be8d4d1273575fc7176c4f813944f52092032a39a82179578b10f18" exitCode=0 Nov 25 15:17:30 crc kubenswrapper[4806]: I1125 15:17:30.602378 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzfmm" event={"ID":"1ffc6ef5-d449-49bf-a92d-094be80c3999","Type":"ContainerDied","Data":"798122962be8d4d1273575fc7176c4f813944f52092032a39a82179578b10f18"} Nov 25 15:17:30 crc kubenswrapper[4806]: I1125 15:17:30.610395 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"411ed211-dc78-448f-8088-0822409b2a9f","Type":"ContainerStarted","Data":"3929444d1b8333e848af9e94fe588245f4aa729c7f6d6f6e1193848213e011aa"} Nov 25 15:17:30 crc kubenswrapper[4806]: I1125 15:17:30.927264 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fzfmm" Nov 25 15:17:31 crc kubenswrapper[4806]: I1125 15:17:31.004608 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smtwl\" (UniqueName: \"kubernetes.io/projected/1ffc6ef5-d449-49bf-a92d-094be80c3999-kube-api-access-smtwl\") pod \"1ffc6ef5-d449-49bf-a92d-094be80c3999\" (UID: \"1ffc6ef5-d449-49bf-a92d-094be80c3999\") " Nov 25 15:17:31 crc kubenswrapper[4806]: I1125 15:17:31.004964 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ffc6ef5-d449-49bf-a92d-094be80c3999-utilities\") pod \"1ffc6ef5-d449-49bf-a92d-094be80c3999\" (UID: \"1ffc6ef5-d449-49bf-a92d-094be80c3999\") " Nov 25 15:17:31 crc kubenswrapper[4806]: I1125 15:17:31.005069 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ffc6ef5-d449-49bf-a92d-094be80c3999-catalog-content\") pod \"1ffc6ef5-d449-49bf-a92d-094be80c3999\" (UID: \"1ffc6ef5-d449-49bf-a92d-094be80c3999\") " Nov 25 15:17:31 crc kubenswrapper[4806]: I1125 15:17:31.005929 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ffc6ef5-d449-49bf-a92d-094be80c3999-utilities" (OuterVolumeSpecName: "utilities") pod "1ffc6ef5-d449-49bf-a92d-094be80c3999" (UID: "1ffc6ef5-d449-49bf-a92d-094be80c3999"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:17:31 crc kubenswrapper[4806]: I1125 15:17:31.021656 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ffc6ef5-d449-49bf-a92d-094be80c3999-kube-api-access-smtwl" (OuterVolumeSpecName: "kube-api-access-smtwl") pod "1ffc6ef5-d449-49bf-a92d-094be80c3999" (UID: "1ffc6ef5-d449-49bf-a92d-094be80c3999"). InnerVolumeSpecName "kube-api-access-smtwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:17:31 crc kubenswrapper[4806]: I1125 15:17:31.107690 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ffc6ef5-d449-49bf-a92d-094be80c3999-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:31 crc kubenswrapper[4806]: I1125 15:17:31.107744 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smtwl\" (UniqueName: \"kubernetes.io/projected/1ffc6ef5-d449-49bf-a92d-094be80c3999-kube-api-access-smtwl\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:31 crc kubenswrapper[4806]: I1125 15:17:31.146830 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ffc6ef5-d449-49bf-a92d-094be80c3999-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ffc6ef5-d449-49bf-a92d-094be80c3999" (UID: "1ffc6ef5-d449-49bf-a92d-094be80c3999"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:17:31 crc kubenswrapper[4806]: I1125 15:17:31.209824 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ffc6ef5-d449-49bf-a92d-094be80c3999-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:31 crc kubenswrapper[4806]: I1125 15:17:31.623386 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"411ed211-dc78-448f-8088-0822409b2a9f","Type":"ContainerStarted","Data":"a5925e260613febff201f82f663ba114cdf32a1878eb0a3cbceefacbfabe99e9"} Nov 25 15:17:31 crc kubenswrapper[4806]: I1125 15:17:31.623519 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 15:17:31 crc kubenswrapper[4806]: I1125 15:17:31.626566 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fzfmm" Nov 25 15:17:31 crc kubenswrapper[4806]: I1125 15:17:31.626985 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fzfmm" event={"ID":"1ffc6ef5-d449-49bf-a92d-094be80c3999","Type":"ContainerDied","Data":"7f92b765ae52658ae8e12218d63d50dd7302d67f7e93ae26dde20b573c95a126"} Nov 25 15:17:31 crc kubenswrapper[4806]: I1125 15:17:31.627037 4806 scope.go:117] "RemoveContainer" containerID="798122962be8d4d1273575fc7176c4f813944f52092032a39a82179578b10f18" Nov 25 15:17:31 crc kubenswrapper[4806]: I1125 15:17:31.652213 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.794889815 podStartE2EDuration="16.65219182s" podCreationTimestamp="2025-11-25 15:17:15 +0000 UTC" firstStartedPulling="2025-11-25 15:17:17.447728886 +0000 UTC m=+1470.099871297" lastFinishedPulling="2025-11-25 15:17:31.305030891 +0000 UTC m=+1483.957173302" observedRunningTime="2025-11-25 15:17:31.645807927 +0000 UTC m=+1484.297950348" watchObservedRunningTime="2025-11-25 15:17:31.65219182 +0000 UTC m=+1484.304334241" Nov 25 15:17:31 crc kubenswrapper[4806]: I1125 15:17:31.655543 4806 scope.go:117] "RemoveContainer" containerID="fed56d81993f0c874c40e4a2d8e0c87209587ed0100524fea753bc3b66aacc6d" Nov 25 15:17:31 crc kubenswrapper[4806]: I1125 15:17:31.686857 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fzfmm"] Nov 25 15:17:31 crc kubenswrapper[4806]: I1125 15:17:31.695605 4806 scope.go:117] "RemoveContainer" containerID="b75bd279d671b9c780849b278e146db9c374d1b4735ec03dd44f86d93b13b172" Nov 25 15:17:31 crc kubenswrapper[4806]: I1125 15:17:31.709277 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fzfmm"] Nov 25 15:17:32 crc kubenswrapper[4806]: I1125 15:17:32.111723 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ffc6ef5-d449-49bf-a92d-094be80c3999" path="/var/lib/kubelet/pods/1ffc6ef5-d449-49bf-a92d-094be80c3999/volumes" Nov 25 15:17:34 crc kubenswrapper[4806]: I1125 15:17:34.215545 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cloudkitty-api-0" podUID="9b9283d4-b401-4efa-b2f0-d14c8b44cf21" containerName="cloudkitty-api" probeResult="failure" output="Get \"https://10.217.0.193:8889/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 15:17:34 crc kubenswrapper[4806]: I1125 15:17:34.215605 4806 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/cloudkitty-api-0" podUID="9b9283d4-b401-4efa-b2f0-d14c8b44cf21" containerName="cloudkitty-api" probeResult="failure" output="Get \"https://10.217.0.193:8889/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 15:17:36 crc kubenswrapper[4806]: I1125 15:17:36.329705 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-api-0" Nov 25 15:17:38 crc kubenswrapper[4806]: I1125 15:17:38.444794 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:17:38 crc kubenswrapper[4806]: I1125 15:17:38.445683 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="411ed211-dc78-448f-8088-0822409b2a9f" containerName="ceilometer-central-agent" containerID="cri-o://e0ce3269037591e10d6869e6e44817b68dbbc987f97d29b2f8eb6228e3d4a90b" gracePeriod=30 Nov 25 15:17:38 crc kubenswrapper[4806]: I1125 15:17:38.445726 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="411ed211-dc78-448f-8088-0822409b2a9f" containerName="sg-core" containerID="cri-o://3929444d1b8333e848af9e94fe588245f4aa729c7f6d6f6e1193848213e011aa" gracePeriod=30 Nov 25 15:17:38 crc kubenswrapper[4806]: I1125 15:17:38.445767 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="411ed211-dc78-448f-8088-0822409b2a9f" containerName="ceilometer-notification-agent" containerID="cri-o://85de57fc172bc306686c8b4114787d3023e4287b54f0f961edebb1d03e08d1f9" gracePeriod=30 Nov 25 15:17:38 crc kubenswrapper[4806]: I1125 15:17:38.445812 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="411ed211-dc78-448f-8088-0822409b2a9f" containerName="proxy-httpd" containerID="cri-o://a5925e260613febff201f82f663ba114cdf32a1878eb0a3cbceefacbfabe99e9" gracePeriod=30 Nov 25 15:17:38 crc kubenswrapper[4806]: I1125 15:17:38.733189 4806 generic.go:334] "Generic (PLEG): container finished" podID="411ed211-dc78-448f-8088-0822409b2a9f" containerID="a5925e260613febff201f82f663ba114cdf32a1878eb0a3cbceefacbfabe99e9" exitCode=0 Nov 25 15:17:38 crc kubenswrapper[4806]: I1125 15:17:38.733474 4806 generic.go:334] "Generic (PLEG): container finished" podID="411ed211-dc78-448f-8088-0822409b2a9f" containerID="3929444d1b8333e848af9e94fe588245f4aa729c7f6d6f6e1193848213e011aa" exitCode=2 Nov 25 15:17:38 crc kubenswrapper[4806]: I1125 15:17:38.733267 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"411ed211-dc78-448f-8088-0822409b2a9f","Type":"ContainerDied","Data":"a5925e260613febff201f82f663ba114cdf32a1878eb0a3cbceefacbfabe99e9"} Nov 25 15:17:38 crc kubenswrapper[4806]: I1125 15:17:38.733511 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"411ed211-dc78-448f-8088-0822409b2a9f","Type":"ContainerDied","Data":"3929444d1b8333e848af9e94fe588245f4aa729c7f6d6f6e1193848213e011aa"} Nov 25 15:17:39 crc kubenswrapper[4806]: I1125 15:17:39.749267 4806 generic.go:334] "Generic (PLEG): container finished" podID="411ed211-dc78-448f-8088-0822409b2a9f" containerID="85de57fc172bc306686c8b4114787d3023e4287b54f0f961edebb1d03e08d1f9" exitCode=0 Nov 25 15:17:39 crc kubenswrapper[4806]: I1125 15:17:39.749356 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"411ed211-dc78-448f-8088-0822409b2a9f","Type":"ContainerDied","Data":"85de57fc172bc306686c8b4114787d3023e4287b54f0f961edebb1d03e08d1f9"} Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.438043 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.530023 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qh4bm\" (UniqueName: \"kubernetes.io/projected/411ed211-dc78-448f-8088-0822409b2a9f-kube-api-access-qh4bm\") pod \"411ed211-dc78-448f-8088-0822409b2a9f\" (UID: \"411ed211-dc78-448f-8088-0822409b2a9f\") " Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.530084 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/411ed211-dc78-448f-8088-0822409b2a9f-run-httpd\") pod \"411ed211-dc78-448f-8088-0822409b2a9f\" (UID: \"411ed211-dc78-448f-8088-0822409b2a9f\") " Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.530115 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/411ed211-dc78-448f-8088-0822409b2a9f-scripts\") pod \"411ed211-dc78-448f-8088-0822409b2a9f\" (UID: \"411ed211-dc78-448f-8088-0822409b2a9f\") " Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.530166 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/411ed211-dc78-448f-8088-0822409b2a9f-sg-core-conf-yaml\") pod \"411ed211-dc78-448f-8088-0822409b2a9f\" (UID: \"411ed211-dc78-448f-8088-0822409b2a9f\") " Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.530210 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/411ed211-dc78-448f-8088-0822409b2a9f-config-data\") pod \"411ed211-dc78-448f-8088-0822409b2a9f\" (UID: \"411ed211-dc78-448f-8088-0822409b2a9f\") " Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.530271 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/411ed211-dc78-448f-8088-0822409b2a9f-combined-ca-bundle\") pod \"411ed211-dc78-448f-8088-0822409b2a9f\" (UID: \"411ed211-dc78-448f-8088-0822409b2a9f\") " Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.530377 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/411ed211-dc78-448f-8088-0822409b2a9f-log-httpd\") pod \"411ed211-dc78-448f-8088-0822409b2a9f\" (UID: \"411ed211-dc78-448f-8088-0822409b2a9f\") " Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.530598 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/411ed211-dc78-448f-8088-0822409b2a9f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "411ed211-dc78-448f-8088-0822409b2a9f" (UID: "411ed211-dc78-448f-8088-0822409b2a9f"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.530983 4806 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/411ed211-dc78-448f-8088-0822409b2a9f-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.531350 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/411ed211-dc78-448f-8088-0822409b2a9f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "411ed211-dc78-448f-8088-0822409b2a9f" (UID: "411ed211-dc78-448f-8088-0822409b2a9f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.540526 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/411ed211-dc78-448f-8088-0822409b2a9f-kube-api-access-qh4bm" (OuterVolumeSpecName: "kube-api-access-qh4bm") pod "411ed211-dc78-448f-8088-0822409b2a9f" (UID: "411ed211-dc78-448f-8088-0822409b2a9f"). InnerVolumeSpecName "kube-api-access-qh4bm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.552615 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/411ed211-dc78-448f-8088-0822409b2a9f-scripts" (OuterVolumeSpecName: "scripts") pod "411ed211-dc78-448f-8088-0822409b2a9f" (UID: "411ed211-dc78-448f-8088-0822409b2a9f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.579655 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/411ed211-dc78-448f-8088-0822409b2a9f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "411ed211-dc78-448f-8088-0822409b2a9f" (UID: "411ed211-dc78-448f-8088-0822409b2a9f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.633690 4806 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/411ed211-dc78-448f-8088-0822409b2a9f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.633758 4806 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/411ed211-dc78-448f-8088-0822409b2a9f-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.633771 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qh4bm\" (UniqueName: \"kubernetes.io/projected/411ed211-dc78-448f-8088-0822409b2a9f-kube-api-access-qh4bm\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.633784 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/411ed211-dc78-448f-8088-0822409b2a9f-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.643173 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/411ed211-dc78-448f-8088-0822409b2a9f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "411ed211-dc78-448f-8088-0822409b2a9f" (UID: "411ed211-dc78-448f-8088-0822409b2a9f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.650891 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/411ed211-dc78-448f-8088-0822409b2a9f-config-data" (OuterVolumeSpecName: "config-data") pod "411ed211-dc78-448f-8088-0822409b2a9f" (UID: "411ed211-dc78-448f-8088-0822409b2a9f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.735822 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/411ed211-dc78-448f-8088-0822409b2a9f-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.735873 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/411ed211-dc78-448f-8088-0822409b2a9f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.810917 4806 generic.go:334] "Generic (PLEG): container finished" podID="411ed211-dc78-448f-8088-0822409b2a9f" containerID="e0ce3269037591e10d6869e6e44817b68dbbc987f97d29b2f8eb6228e3d4a90b" exitCode=0 Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.810973 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"411ed211-dc78-448f-8088-0822409b2a9f","Type":"ContainerDied","Data":"e0ce3269037591e10d6869e6e44817b68dbbc987f97d29b2f8eb6228e3d4a90b"} Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.811021 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.811056 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"411ed211-dc78-448f-8088-0822409b2a9f","Type":"ContainerDied","Data":"8192650e341a460ace7dccd09a174c14b77fc97802f54bcbb1388b64e6cf5e1b"} Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.811081 4806 scope.go:117] "RemoveContainer" containerID="a5925e260613febff201f82f663ba114cdf32a1878eb0a3cbceefacbfabe99e9" Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.837712 4806 scope.go:117] "RemoveContainer" containerID="3929444d1b8333e848af9e94fe588245f4aa729c7f6d6f6e1193848213e011aa" Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.862085 4806 scope.go:117] "RemoveContainer" containerID="85de57fc172bc306686c8b4114787d3023e4287b54f0f961edebb1d03e08d1f9" Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.866486 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.876269 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.896795 4806 scope.go:117] "RemoveContainer" containerID="e0ce3269037591e10d6869e6e44817b68dbbc987f97d29b2f8eb6228e3d4a90b" Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.896836 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:17:43 crc kubenswrapper[4806]: E1125 15:17:43.897310 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="411ed211-dc78-448f-8088-0822409b2a9f" containerName="sg-core" Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.897722 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="411ed211-dc78-448f-8088-0822409b2a9f" 
containerName="sg-core" Nov 25 15:17:43 crc kubenswrapper[4806]: E1125 15:17:43.897741 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="411ed211-dc78-448f-8088-0822409b2a9f" containerName="ceilometer-central-agent" Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.897748 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="411ed211-dc78-448f-8088-0822409b2a9f" containerName="ceilometer-central-agent" Nov 25 15:17:43 crc kubenswrapper[4806]: E1125 15:17:43.897777 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ffc6ef5-d449-49bf-a92d-094be80c3999" containerName="extract-utilities" Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.897786 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ffc6ef5-d449-49bf-a92d-094be80c3999" containerName="extract-utilities" Nov 25 15:17:43 crc kubenswrapper[4806]: E1125 15:17:43.897793 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ffc6ef5-d449-49bf-a92d-094be80c3999" containerName="registry-server" Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.897799 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ffc6ef5-d449-49bf-a92d-094be80c3999" containerName="registry-server" Nov 25 15:17:43 crc kubenswrapper[4806]: E1125 15:17:43.897811 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="411ed211-dc78-448f-8088-0822409b2a9f" containerName="ceilometer-notification-agent" Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.897817 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="411ed211-dc78-448f-8088-0822409b2a9f" containerName="ceilometer-notification-agent" Nov 25 15:17:43 crc kubenswrapper[4806]: E1125 15:17:43.897829 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ffc6ef5-d449-49bf-a92d-094be80c3999" containerName="extract-content" Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.897834 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ffc6ef5-d449-49bf-a92d-094be80c3999" containerName="extract-content" Nov 25 15:17:43 crc kubenswrapper[4806]: E1125 15:17:43.900455 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="411ed211-dc78-448f-8088-0822409b2a9f" containerName="proxy-httpd" Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.900476 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="411ed211-dc78-448f-8088-0822409b2a9f" containerName="proxy-httpd" Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.900821 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="411ed211-dc78-448f-8088-0822409b2a9f" containerName="sg-core" Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.900867 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="411ed211-dc78-448f-8088-0822409b2a9f" containerName="ceilometer-central-agent" Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.900885 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ffc6ef5-d449-49bf-a92d-094be80c3999" containerName="registry-server" Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.900893 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="411ed211-dc78-448f-8088-0822409b2a9f" containerName="proxy-httpd" Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.900910 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="411ed211-dc78-448f-8088-0822409b2a9f" containerName="ceilometer-notification-agent" Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.902995 4806 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.908310 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.908684 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.939077 4806 scope.go:117] "RemoveContainer" containerID="a5925e260613febff201f82f663ba114cdf32a1878eb0a3cbceefacbfabe99e9" Nov 25 15:17:43 crc kubenswrapper[4806]: E1125 15:17:43.941352 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5925e260613febff201f82f663ba114cdf32a1878eb0a3cbceefacbfabe99e9\": container with ID starting with a5925e260613febff201f82f663ba114cdf32a1878eb0a3cbceefacbfabe99e9 not found: ID does not exist" containerID="a5925e260613febff201f82f663ba114cdf32a1878eb0a3cbceefacbfabe99e9" Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.941399 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5925e260613febff201f82f663ba114cdf32a1878eb0a3cbceefacbfabe99e9"} err="failed to get container status \"a5925e260613febff201f82f663ba114cdf32a1878eb0a3cbceefacbfabe99e9\": rpc error: code = NotFound desc = could not find container \"a5925e260613febff201f82f663ba114cdf32a1878eb0a3cbceefacbfabe99e9\": container with ID starting with a5925e260613febff201f82f663ba114cdf32a1878eb0a3cbceefacbfabe99e9 not found: ID does not exist" Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.941431 4806 scope.go:117] "RemoveContainer" containerID="3929444d1b8333e848af9e94fe588245f4aa729c7f6d6f6e1193848213e011aa" Nov 25 15:17:43 crc kubenswrapper[4806]: E1125 15:17:43.942844 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3929444d1b8333e848af9e94fe588245f4aa729c7f6d6f6e1193848213e011aa\": container with ID starting with 3929444d1b8333e848af9e94fe588245f4aa729c7f6d6f6e1193848213e011aa not found: ID does not exist" containerID="3929444d1b8333e848af9e94fe588245f4aa729c7f6d6f6e1193848213e011aa" Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.942893 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3929444d1b8333e848af9e94fe588245f4aa729c7f6d6f6e1193848213e011aa"} err="failed to get container status \"3929444d1b8333e848af9e94fe588245f4aa729c7f6d6f6e1193848213e011aa\": rpc error: code = NotFound desc = could not find container \"3929444d1b8333e848af9e94fe588245f4aa729c7f6d6f6e1193848213e011aa\": container with ID starting with 3929444d1b8333e848af9e94fe588245f4aa729c7f6d6f6e1193848213e011aa not found: ID does not exist" Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.942932 4806 scope.go:117] "RemoveContainer" containerID="85de57fc172bc306686c8b4114787d3023e4287b54f0f961edebb1d03e08d1f9" Nov 25 15:17:43 crc kubenswrapper[4806]: E1125 15:17:43.943355 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85de57fc172bc306686c8b4114787d3023e4287b54f0f961edebb1d03e08d1f9\": container with ID starting with 85de57fc172bc306686c8b4114787d3023e4287b54f0f961edebb1d03e08d1f9 not found: ID does not exist" containerID="85de57fc172bc306686c8b4114787d3023e4287b54f0f961edebb1d03e08d1f9" Nov 25 15:17:43 
crc kubenswrapper[4806]: I1125 15:17:43.943385 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85de57fc172bc306686c8b4114787d3023e4287b54f0f961edebb1d03e08d1f9"} err="failed to get container status \"85de57fc172bc306686c8b4114787d3023e4287b54f0f961edebb1d03e08d1f9\": rpc error: code = NotFound desc = could not find container \"85de57fc172bc306686c8b4114787d3023e4287b54f0f961edebb1d03e08d1f9\": container with ID starting with 85de57fc172bc306686c8b4114787d3023e4287b54f0f961edebb1d03e08d1f9 not found: ID does not exist" Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.943403 4806 scope.go:117] "RemoveContainer" containerID="e0ce3269037591e10d6869e6e44817b68dbbc987f97d29b2f8eb6228e3d4a90b" Nov 25 15:17:43 crc kubenswrapper[4806]: E1125 15:17:43.943668 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0ce3269037591e10d6869e6e44817b68dbbc987f97d29b2f8eb6228e3d4a90b\": container with ID starting with e0ce3269037591e10d6869e6e44817b68dbbc987f97d29b2f8eb6228e3d4a90b not found: ID does not exist" containerID="e0ce3269037591e10d6869e6e44817b68dbbc987f97d29b2f8eb6228e3d4a90b" Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.943690 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0ce3269037591e10d6869e6e44817b68dbbc987f97d29b2f8eb6228e3d4a90b"} err="failed to get container status \"e0ce3269037591e10d6869e6e44817b68dbbc987f97d29b2f8eb6228e3d4a90b\": rpc error: code = NotFound desc = could not find container \"e0ce3269037591e10d6869e6e44817b68dbbc987f97d29b2f8eb6228e3d4a90b\": container with ID starting with e0ce3269037591e10d6869e6e44817b68dbbc987f97d29b2f8eb6228e3d4a90b not found: ID does not exist" Nov 25 15:17:43 crc kubenswrapper[4806]: I1125 15:17:43.944566 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:17:44 crc kubenswrapper[4806]: I1125 15:17:44.043115 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa1fb8ba-bc56-42e7-8efa-3caf37784c8f-log-httpd\") pod \"ceilometer-0\" (UID: \"fa1fb8ba-bc56-42e7-8efa-3caf37784c8f\") " pod="openstack/ceilometer-0" Nov 25 15:17:44 crc kubenswrapper[4806]: I1125 15:17:44.043168 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa1fb8ba-bc56-42e7-8efa-3caf37784c8f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fa1fb8ba-bc56-42e7-8efa-3caf37784c8f\") " pod="openstack/ceilometer-0" Nov 25 15:17:44 crc kubenswrapper[4806]: I1125 15:17:44.043472 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa1fb8ba-bc56-42e7-8efa-3caf37784c8f-run-httpd\") pod \"ceilometer-0\" (UID: \"fa1fb8ba-bc56-42e7-8efa-3caf37784c8f\") " pod="openstack/ceilometer-0" Nov 25 15:17:44 crc kubenswrapper[4806]: I1125 15:17:44.043551 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa1fb8ba-bc56-42e7-8efa-3caf37784c8f-scripts\") pod \"ceilometer-0\" (UID: \"fa1fb8ba-bc56-42e7-8efa-3caf37784c8f\") " pod="openstack/ceilometer-0" Nov 25 15:17:44 crc kubenswrapper[4806]: I1125 15:17:44.043642 4806 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpfsd\" (UniqueName: \"kubernetes.io/projected/fa1fb8ba-bc56-42e7-8efa-3caf37784c8f-kube-api-access-zpfsd\") pod \"ceilometer-0\" (UID: \"fa1fb8ba-bc56-42e7-8efa-3caf37784c8f\") " pod="openstack/ceilometer-0" Nov 25 15:17:44 crc kubenswrapper[4806]: I1125 15:17:44.043751 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fa1fb8ba-bc56-42e7-8efa-3caf37784c8f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fa1fb8ba-bc56-42e7-8efa-3caf37784c8f\") " pod="openstack/ceilometer-0" Nov 25 15:17:44 crc kubenswrapper[4806]: I1125 15:17:44.043811 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa1fb8ba-bc56-42e7-8efa-3caf37784c8f-config-data\") pod \"ceilometer-0\" (UID: \"fa1fb8ba-bc56-42e7-8efa-3caf37784c8f\") " pod="openstack/ceilometer-0" Nov 25 15:17:44 crc kubenswrapper[4806]: I1125 15:17:44.104031 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="411ed211-dc78-448f-8088-0822409b2a9f" path="/var/lib/kubelet/pods/411ed211-dc78-448f-8088-0822409b2a9f/volumes" Nov 25 15:17:44 crc kubenswrapper[4806]: I1125 15:17:44.145958 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa1fb8ba-bc56-42e7-8efa-3caf37784c8f-run-httpd\") pod \"ceilometer-0\" (UID: \"fa1fb8ba-bc56-42e7-8efa-3caf37784c8f\") " pod="openstack/ceilometer-0" Nov 25 15:17:44 crc kubenswrapper[4806]: I1125 15:17:44.146010 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa1fb8ba-bc56-42e7-8efa-3caf37784c8f-scripts\") pod \"ceilometer-0\" (UID: \"fa1fb8ba-bc56-42e7-8efa-3caf37784c8f\") " pod="openstack/ceilometer-0" Nov 25 15:17:44 crc kubenswrapper[4806]: I1125 15:17:44.146043 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpfsd\" (UniqueName: \"kubernetes.io/projected/fa1fb8ba-bc56-42e7-8efa-3caf37784c8f-kube-api-access-zpfsd\") pod \"ceilometer-0\" (UID: \"fa1fb8ba-bc56-42e7-8efa-3caf37784c8f\") " pod="openstack/ceilometer-0" Nov 25 15:17:44 crc kubenswrapper[4806]: I1125 15:17:44.146081 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fa1fb8ba-bc56-42e7-8efa-3caf37784c8f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fa1fb8ba-bc56-42e7-8efa-3caf37784c8f\") " pod="openstack/ceilometer-0" Nov 25 15:17:44 crc kubenswrapper[4806]: I1125 15:17:44.146127 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa1fb8ba-bc56-42e7-8efa-3caf37784c8f-config-data\") pod \"ceilometer-0\" (UID: \"fa1fb8ba-bc56-42e7-8efa-3caf37784c8f\") " pod="openstack/ceilometer-0" Nov 25 15:17:44 crc kubenswrapper[4806]: I1125 15:17:44.146170 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa1fb8ba-bc56-42e7-8efa-3caf37784c8f-log-httpd\") pod \"ceilometer-0\" (UID: \"fa1fb8ba-bc56-42e7-8efa-3caf37784c8f\") " pod="openstack/ceilometer-0" Nov 25 15:17:44 crc kubenswrapper[4806]: I1125 15:17:44.146190 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa1fb8ba-bc56-42e7-8efa-3caf37784c8f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fa1fb8ba-bc56-42e7-8efa-3caf37784c8f\") " pod="openstack/ceilometer-0" Nov 25 15:17:44 crc kubenswrapper[4806]: I1125 15:17:44.147810 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa1fb8ba-bc56-42e7-8efa-3caf37784c8f-run-httpd\") pod \"ceilometer-0\" (UID: \"fa1fb8ba-bc56-42e7-8efa-3caf37784c8f\") " pod="openstack/ceilometer-0" Nov 25 15:17:44 crc kubenswrapper[4806]: I1125 15:17:44.148536 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa1fb8ba-bc56-42e7-8efa-3caf37784c8f-log-httpd\") pod \"ceilometer-0\" (UID: \"fa1fb8ba-bc56-42e7-8efa-3caf37784c8f\") " pod="openstack/ceilometer-0" Nov 25 15:17:44 crc kubenswrapper[4806]: I1125 15:17:44.150204 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa1fb8ba-bc56-42e7-8efa-3caf37784c8f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fa1fb8ba-bc56-42e7-8efa-3caf37784c8f\") " pod="openstack/ceilometer-0" Nov 25 15:17:44 crc kubenswrapper[4806]: I1125 15:17:44.153651 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa1fb8ba-bc56-42e7-8efa-3caf37784c8f-config-data\") pod \"ceilometer-0\" (UID: \"fa1fb8ba-bc56-42e7-8efa-3caf37784c8f\") " pod="openstack/ceilometer-0" Nov 25 15:17:44 crc kubenswrapper[4806]: I1125 15:17:44.153792 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa1fb8ba-bc56-42e7-8efa-3caf37784c8f-scripts\") pod \"ceilometer-0\" (UID: \"fa1fb8ba-bc56-42e7-8efa-3caf37784c8f\") " pod="openstack/ceilometer-0" Nov 25 15:17:44 crc kubenswrapper[4806]: I1125 15:17:44.154457 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fa1fb8ba-bc56-42e7-8efa-3caf37784c8f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fa1fb8ba-bc56-42e7-8efa-3caf37784c8f\") " pod="openstack/ceilometer-0" Nov 25 15:17:44 crc kubenswrapper[4806]: I1125 15:17:44.177042 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpfsd\" (UniqueName: \"kubernetes.io/projected/fa1fb8ba-bc56-42e7-8efa-3caf37784c8f-kube-api-access-zpfsd\") pod \"ceilometer-0\" (UID: \"fa1fb8ba-bc56-42e7-8efa-3caf37784c8f\") " pod="openstack/ceilometer-0" Nov 25 15:17:44 crc kubenswrapper[4806]: I1125 15:17:44.239738 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:17:44 crc kubenswrapper[4806]: I1125 15:17:44.716261 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:17:44 crc kubenswrapper[4806]: W1125 15:17:44.719420 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa1fb8ba_bc56_42e7_8efa_3caf37784c8f.slice/crio-6cc7ce53474c28eb3e28a8391735d09661723e48bb3fec6dae24364c9d85ddae WatchSource:0}: Error finding container 6cc7ce53474c28eb3e28a8391735d09661723e48bb3fec6dae24364c9d85ddae: Status 404 returned error can't find the container with id 6cc7ce53474c28eb3e28a8391735d09661723e48bb3fec6dae24364c9d85ddae Nov 25 15:17:44 crc kubenswrapper[4806]: I1125 15:17:44.821926 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fa1fb8ba-bc56-42e7-8efa-3caf37784c8f","Type":"ContainerStarted","Data":"6cc7ce53474c28eb3e28a8391735d09661723e48bb3fec6dae24364c9d85ddae"} Nov 25 15:17:45 crc kubenswrapper[4806]: I1125 15:17:45.835332 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fa1fb8ba-bc56-42e7-8efa-3caf37784c8f","Type":"ContainerStarted","Data":"7af674775fbcc2a8d57d7adae882c91b14c9ef52b330d8f387ff61b1380c8913"} Nov 25 15:17:46 crc kubenswrapper[4806]: I1125 15:17:46.850335 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fa1fb8ba-bc56-42e7-8efa-3caf37784c8f","Type":"ContainerStarted","Data":"e6406ff971d1adca3fd15dec5d6a15c57838e96fca8cd1db81f956eadce857ce"} Nov 25 15:17:47 crc kubenswrapper[4806]: I1125 15:17:47.860657 4806 generic.go:334] "Generic (PLEG): container finished" podID="077d373d-365d-4520-8345-d6b636d212fd" containerID="db04c7ca2ad0df7c98b812b1531ef2caeaa1884ea73fc8e07fc98d3c06e0e5d0" exitCode=0 Nov 25 15:17:47 crc kubenswrapper[4806]: I1125 15:17:47.860786 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-9jknk" event={"ID":"077d373d-365d-4520-8345-d6b636d212fd","Type":"ContainerDied","Data":"db04c7ca2ad0df7c98b812b1531ef2caeaa1884ea73fc8e07fc98d3c06e0e5d0"} Nov 25 15:17:47 crc kubenswrapper[4806]: I1125 15:17:47.863974 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fa1fb8ba-bc56-42e7-8efa-3caf37784c8f","Type":"ContainerStarted","Data":"0a587abb354d154ccd1c7be46a4a958ef36828c6702d65f3f2275091ace9f013"} Nov 25 15:17:48 crc kubenswrapper[4806]: I1125 15:17:48.877283 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fa1fb8ba-bc56-42e7-8efa-3caf37784c8f","Type":"ContainerStarted","Data":"fa7d6923be1a003c17b1865ed6b9c51c49958cbfad7ac5311061052305d8557b"} Nov 25 15:17:48 crc kubenswrapper[4806]: I1125 15:17:48.918240 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.416127118 podStartE2EDuration="5.918132795s" podCreationTimestamp="2025-11-25 15:17:43 +0000 UTC" firstStartedPulling="2025-11-25 15:17:44.723085607 +0000 UTC m=+1497.375228008" lastFinishedPulling="2025-11-25 15:17:48.225091274 +0000 UTC m=+1500.877233685" observedRunningTime="2025-11-25 15:17:48.915616223 +0000 UTC m=+1501.567758664" watchObservedRunningTime="2025-11-25 15:17:48.918132795 +0000 UTC m=+1501.570275206" Nov 25 15:17:48 crc kubenswrapper[4806]: I1125 15:17:48.934737 4806 patch_prober.go:28] interesting pod/machine-config-daemon-kclf8 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:17:48 crc kubenswrapper[4806]: I1125 15:17:48.934808 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:17:48 crc kubenswrapper[4806]: I1125 15:17:48.934880 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" Nov 25 15:17:48 crc kubenswrapper[4806]: I1125 15:17:48.937435 4806 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e869f8a9a3bee9d5f6a66c81937d296e815282493a93356c044af918f3b7bdf1"} pod="openshift-machine-config-operator/machine-config-daemon-kclf8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 15:17:48 crc kubenswrapper[4806]: I1125 15:17:48.937560 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" containerID="cri-o://e869f8a9a3bee9d5f6a66c81937d296e815282493a93356c044af918f3b7bdf1" gracePeriod=600 Nov 25 15:17:49 crc kubenswrapper[4806]: I1125 15:17:49.371071 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-9jknk" Nov 25 15:17:49 crc kubenswrapper[4806]: I1125 15:17:49.456037 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/077d373d-365d-4520-8345-d6b636d212fd-config-data\") pod \"077d373d-365d-4520-8345-d6b636d212fd\" (UID: \"077d373d-365d-4520-8345-d6b636d212fd\") " Nov 25 15:17:49 crc kubenswrapper[4806]: I1125 15:17:49.456094 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/077d373d-365d-4520-8345-d6b636d212fd-scripts\") pod \"077d373d-365d-4520-8345-d6b636d212fd\" (UID: \"077d373d-365d-4520-8345-d6b636d212fd\") " Nov 25 15:17:49 crc kubenswrapper[4806]: I1125 15:17:49.456359 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzqf7\" (UniqueName: \"kubernetes.io/projected/077d373d-365d-4520-8345-d6b636d212fd-kube-api-access-dzqf7\") pod \"077d373d-365d-4520-8345-d6b636d212fd\" (UID: \"077d373d-365d-4520-8345-d6b636d212fd\") " Nov 25 15:17:49 crc kubenswrapper[4806]: I1125 15:17:49.456489 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/077d373d-365d-4520-8345-d6b636d212fd-combined-ca-bundle\") pod \"077d373d-365d-4520-8345-d6b636d212fd\" (UID: \"077d373d-365d-4520-8345-d6b636d212fd\") " Nov 25 15:17:49 crc kubenswrapper[4806]: I1125 15:17:49.474575 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/077d373d-365d-4520-8345-d6b636d212fd-scripts" (OuterVolumeSpecName: "scripts") pod "077d373d-365d-4520-8345-d6b636d212fd" (UID: 
"077d373d-365d-4520-8345-d6b636d212fd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:17:49 crc kubenswrapper[4806]: I1125 15:17:49.474793 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/077d373d-365d-4520-8345-d6b636d212fd-kube-api-access-dzqf7" (OuterVolumeSpecName: "kube-api-access-dzqf7") pod "077d373d-365d-4520-8345-d6b636d212fd" (UID: "077d373d-365d-4520-8345-d6b636d212fd"). InnerVolumeSpecName "kube-api-access-dzqf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:17:49 crc kubenswrapper[4806]: I1125 15:17:49.499258 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/077d373d-365d-4520-8345-d6b636d212fd-config-data" (OuterVolumeSpecName: "config-data") pod "077d373d-365d-4520-8345-d6b636d212fd" (UID: "077d373d-365d-4520-8345-d6b636d212fd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:17:49 crc kubenswrapper[4806]: I1125 15:17:49.505712 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/077d373d-365d-4520-8345-d6b636d212fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "077d373d-365d-4520-8345-d6b636d212fd" (UID: "077d373d-365d-4520-8345-d6b636d212fd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:17:49 crc kubenswrapper[4806]: I1125 15:17:49.558859 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzqf7\" (UniqueName: \"kubernetes.io/projected/077d373d-365d-4520-8345-d6b636d212fd-kube-api-access-dzqf7\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:49 crc kubenswrapper[4806]: I1125 15:17:49.558891 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/077d373d-365d-4520-8345-d6b636d212fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:49 crc kubenswrapper[4806]: I1125 15:17:49.558901 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/077d373d-365d-4520-8345-d6b636d212fd-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:49 crc kubenswrapper[4806]: I1125 15:17:49.558909 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/077d373d-365d-4520-8345-d6b636d212fd-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:49 crc kubenswrapper[4806]: I1125 15:17:49.900777 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-9jknk" Nov 25 15:17:49 crc kubenswrapper[4806]: I1125 15:17:49.900770 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-9jknk" event={"ID":"077d373d-365d-4520-8345-d6b636d212fd","Type":"ContainerDied","Data":"9d628cf24379b3c291eff8ac72a0a4bb6d1ce15fc9ebf3c1d049edc59f58c27e"} Nov 25 15:17:49 crc kubenswrapper[4806]: I1125 15:17:49.900852 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d628cf24379b3c291eff8ac72a0a4bb6d1ce15fc9ebf3c1d049edc59f58c27e" Nov 25 15:17:49 crc kubenswrapper[4806]: I1125 15:17:49.914087 4806 generic.go:334] "Generic (PLEG): container finished" podID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerID="e869f8a9a3bee9d5f6a66c81937d296e815282493a93356c044af918f3b7bdf1" exitCode=0 Nov 25 15:17:49 crc kubenswrapper[4806]: I1125 15:17:49.915756 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" event={"ID":"39baff20-1e9a-48b1-8872-155c5ad5931d","Type":"ContainerDied","Data":"e869f8a9a3bee9d5f6a66c81937d296e815282493a93356c044af918f3b7bdf1"} Nov 25 15:17:49 crc kubenswrapper[4806]: I1125 15:17:49.915829 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 15:17:49 crc kubenswrapper[4806]: I1125 15:17:49.915846 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" event={"ID":"39baff20-1e9a-48b1-8872-155c5ad5931d","Type":"ContainerStarted","Data":"ecc3d828107059f876e2f284e3f9b578d143aeaad7a17d069f81cf6860e7fd12"} Nov 25 15:17:49 crc kubenswrapper[4806]: I1125 15:17:49.915863 4806 scope.go:117] "RemoveContainer" containerID="75eea6826a6ffacea752085907b10e49f430f92ba1940f02d0b4f30e4a305fc4" Nov 25 15:17:49 crc kubenswrapper[4806]: I1125 15:17:49.984803 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 25 15:17:49 crc kubenswrapper[4806]: E1125 15:17:49.985615 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="077d373d-365d-4520-8345-d6b636d212fd" containerName="nova-cell0-conductor-db-sync" Nov 25 15:17:49 crc kubenswrapper[4806]: I1125 15:17:49.985640 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="077d373d-365d-4520-8345-d6b636d212fd" containerName="nova-cell0-conductor-db-sync" Nov 25 15:17:49 crc kubenswrapper[4806]: I1125 15:17:49.985869 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="077d373d-365d-4520-8345-d6b636d212fd" containerName="nova-cell0-conductor-db-sync" Nov 25 15:17:49 crc kubenswrapper[4806]: I1125 15:17:49.986630 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 25 15:17:49 crc kubenswrapper[4806]: I1125 15:17:49.989212 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-7rs57" Nov 25 15:17:49 crc kubenswrapper[4806]: I1125 15:17:49.990358 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 25 15:17:49 crc kubenswrapper[4806]: I1125 15:17:49.997092 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 25 15:17:50 crc kubenswrapper[4806]: I1125 15:17:50.078623 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e27c6b8-d0b8-43a7-a3ee-2f3703315a7b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2e27c6b8-d0b8-43a7-a3ee-2f3703315a7b\") " pod="openstack/nova-cell0-conductor-0" Nov 25 15:17:50 crc kubenswrapper[4806]: I1125 15:17:50.078721 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e27c6b8-d0b8-43a7-a3ee-2f3703315a7b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2e27c6b8-d0b8-43a7-a3ee-2f3703315a7b\") " pod="openstack/nova-cell0-conductor-0" Nov 25 15:17:50 crc kubenswrapper[4806]: I1125 15:17:50.078800 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92shc\" (UniqueName: \"kubernetes.io/projected/2e27c6b8-d0b8-43a7-a3ee-2f3703315a7b-kube-api-access-92shc\") pod \"nova-cell0-conductor-0\" (UID: \"2e27c6b8-d0b8-43a7-a3ee-2f3703315a7b\") " pod="openstack/nova-cell0-conductor-0" Nov 25 15:17:50 crc kubenswrapper[4806]: I1125 15:17:50.180731 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e27c6b8-d0b8-43a7-a3ee-2f3703315a7b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"2e27c6b8-d0b8-43a7-a3ee-2f3703315a7b\") " pod="openstack/nova-cell0-conductor-0" Nov 25 15:17:50 crc kubenswrapper[4806]: I1125 15:17:50.180911 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92shc\" (UniqueName: \"kubernetes.io/projected/2e27c6b8-d0b8-43a7-a3ee-2f3703315a7b-kube-api-access-92shc\") pod \"nova-cell0-conductor-0\" (UID: \"2e27c6b8-d0b8-43a7-a3ee-2f3703315a7b\") " pod="openstack/nova-cell0-conductor-0" Nov 25 15:17:50 crc kubenswrapper[4806]: I1125 15:17:50.181062 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e27c6b8-d0b8-43a7-a3ee-2f3703315a7b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2e27c6b8-d0b8-43a7-a3ee-2f3703315a7b\") " pod="openstack/nova-cell0-conductor-0" Nov 25 15:17:50 crc kubenswrapper[4806]: I1125 15:17:50.185798 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e27c6b8-d0b8-43a7-a3ee-2f3703315a7b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"2e27c6b8-d0b8-43a7-a3ee-2f3703315a7b\") " pod="openstack/nova-cell0-conductor-0" Nov 25 15:17:50 crc kubenswrapper[4806]: I1125 15:17:50.192046 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e27c6b8-d0b8-43a7-a3ee-2f3703315a7b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"2e27c6b8-d0b8-43a7-a3ee-2f3703315a7b\") " pod="openstack/nova-cell0-conductor-0" Nov 25 15:17:50 crc kubenswrapper[4806]: I1125 15:17:50.198698 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92shc\" (UniqueName: \"kubernetes.io/projected/2e27c6b8-d0b8-43a7-a3ee-2f3703315a7b-kube-api-access-92shc\") pod \"nova-cell0-conductor-0\" (UID: \"2e27c6b8-d0b8-43a7-a3ee-2f3703315a7b\") " pod="openstack/nova-cell0-conductor-0" Nov 25 15:17:50 crc kubenswrapper[4806]: I1125 15:17:50.312375 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 25 15:17:50 crc kubenswrapper[4806]: I1125 15:17:50.811634 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 25 15:17:50 crc kubenswrapper[4806]: W1125 15:17:50.812769 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e27c6b8_d0b8_43a7_a3ee_2f3703315a7b.slice/crio-d3e2abe1c77000d3cedf762b92a034779c7b517b72e03e71bb2bfde15692cc4e WatchSource:0}: Error finding container d3e2abe1c77000d3cedf762b92a034779c7b517b72e03e71bb2bfde15692cc4e: Status 404 returned error can't find the container with id d3e2abe1c77000d3cedf762b92a034779c7b517b72e03e71bb2bfde15692cc4e Nov 25 15:17:50 crc kubenswrapper[4806]: I1125 15:17:50.930868 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2e27c6b8-d0b8-43a7-a3ee-2f3703315a7b","Type":"ContainerStarted","Data":"d3e2abe1c77000d3cedf762b92a034779c7b517b72e03e71bb2bfde15692cc4e"} Nov 25 15:17:51 crc kubenswrapper[4806]: I1125 15:17:51.955446 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"2e27c6b8-d0b8-43a7-a3ee-2f3703315a7b","Type":"ContainerStarted","Data":"0fa92dfd079327606f5ddc211641d12abcd2fc0ccb73504ad8e29224b4aa478f"} Nov 25 15:17:51 crc kubenswrapper[4806]: I1125 15:17:51.955777 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 25 15:17:51 crc kubenswrapper[4806]: I1125 15:17:51.981610 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.981587212 podStartE2EDuration="2.981587212s" podCreationTimestamp="2025-11-25 15:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:17:51.971362279 +0000 UTC m=+1504.623504690" watchObservedRunningTime="2025-11-25 15:17:51.981587212 +0000 UTC m=+1504.633729633" Nov 25 15:18:00 crc kubenswrapper[4806]: I1125 15:18:00.347025 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 25 15:18:00 crc kubenswrapper[4806]: I1125 15:18:00.807499 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-rccfb"] Nov 25 15:18:00 crc kubenswrapper[4806]: I1125 15:18:00.808975 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-rccfb" Nov 25 15:18:00 crc kubenswrapper[4806]: I1125 15:18:00.817930 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 25 15:18:00 crc kubenswrapper[4806]: I1125 15:18:00.818158 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 25 15:18:00 crc kubenswrapper[4806]: I1125 15:18:00.820743 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-rccfb"] Nov 25 15:18:00 crc kubenswrapper[4806]: I1125 15:18:00.907964 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9aabac61-808c-46a6-9cc1-e021cb244241-scripts\") pod \"nova-cell0-cell-mapping-rccfb\" (UID: \"9aabac61-808c-46a6-9cc1-e021cb244241\") " pod="openstack/nova-cell0-cell-mapping-rccfb" Nov 25 15:18:00 crc kubenswrapper[4806]: I1125 15:18:00.908047 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9aabac61-808c-46a6-9cc1-e021cb244241-config-data\") pod \"nova-cell0-cell-mapping-rccfb\" (UID: \"9aabac61-808c-46a6-9cc1-e021cb244241\") " pod="openstack/nova-cell0-cell-mapping-rccfb" Nov 25 15:18:00 crc kubenswrapper[4806]: I1125 15:18:00.908173 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tglj9\" (UniqueName: \"kubernetes.io/projected/9aabac61-808c-46a6-9cc1-e021cb244241-kube-api-access-tglj9\") pod \"nova-cell0-cell-mapping-rccfb\" (UID: \"9aabac61-808c-46a6-9cc1-e021cb244241\") " pod="openstack/nova-cell0-cell-mapping-rccfb" Nov 25 15:18:00 crc kubenswrapper[4806]: I1125 15:18:00.908351 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9aabac61-808c-46a6-9cc1-e021cb244241-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-rccfb\" (UID: \"9aabac61-808c-46a6-9cc1-e021cb244241\") " pod="openstack/nova-cell0-cell-mapping-rccfb" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.009841 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9aabac61-808c-46a6-9cc1-e021cb244241-scripts\") pod \"nova-cell0-cell-mapping-rccfb\" (UID: \"9aabac61-808c-46a6-9cc1-e021cb244241\") " pod="openstack/nova-cell0-cell-mapping-rccfb" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.009935 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9aabac61-808c-46a6-9cc1-e021cb244241-config-data\") pod \"nova-cell0-cell-mapping-rccfb\" (UID: \"9aabac61-808c-46a6-9cc1-e021cb244241\") " pod="openstack/nova-cell0-cell-mapping-rccfb" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.010014 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tglj9\" (UniqueName: \"kubernetes.io/projected/9aabac61-808c-46a6-9cc1-e021cb244241-kube-api-access-tglj9\") pod \"nova-cell0-cell-mapping-rccfb\" (UID: \"9aabac61-808c-46a6-9cc1-e021cb244241\") " pod="openstack/nova-cell0-cell-mapping-rccfb" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.010133 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9aabac61-808c-46a6-9cc1-e021cb244241-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-rccfb\" (UID: \"9aabac61-808c-46a6-9cc1-e021cb244241\") " pod="openstack/nova-cell0-cell-mapping-rccfb" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.017298 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9aabac61-808c-46a6-9cc1-e021cb244241-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-rccfb\" (UID: \"9aabac61-808c-46a6-9cc1-e021cb244241\") " pod="openstack/nova-cell0-cell-mapping-rccfb" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.017521 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9aabac61-808c-46a6-9cc1-e021cb244241-config-data\") pod \"nova-cell0-cell-mapping-rccfb\" (UID: \"9aabac61-808c-46a6-9cc1-e021cb244241\") " pod="openstack/nova-cell0-cell-mapping-rccfb" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.029936 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9aabac61-808c-46a6-9cc1-e021cb244241-scripts\") pod \"nova-cell0-cell-mapping-rccfb\" (UID: \"9aabac61-808c-46a6-9cc1-e021cb244241\") " pod="openstack/nova-cell0-cell-mapping-rccfb" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.061075 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tglj9\" (UniqueName: \"kubernetes.io/projected/9aabac61-808c-46a6-9cc1-e021cb244241-kube-api-access-tglj9\") pod \"nova-cell0-cell-mapping-rccfb\" (UID: \"9aabac61-808c-46a6-9cc1-e021cb244241\") " pod="openstack/nova-cell0-cell-mapping-rccfb" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.130026 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-rccfb" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.148629 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.154855 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.162592 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.203698 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.205518 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.208511 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.235938 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.254913 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.322024 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/991017af-a60a-4e0b-97ea-be0e196b6742-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"991017af-a60a-4e0b-97ea-be0e196b6742\") " pod="openstack/nova-api-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.322102 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpscf\" (UniqueName: \"kubernetes.io/projected/991017af-a60a-4e0b-97ea-be0e196b6742-kube-api-access-gpscf\") pod \"nova-api-0\" (UID: \"991017af-a60a-4e0b-97ea-be0e196b6742\") " pod="openstack/nova-api-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.322130 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/991017af-a60a-4e0b-97ea-be0e196b6742-config-data\") pod \"nova-api-0\" (UID: \"991017af-a60a-4e0b-97ea-be0e196b6742\") " pod="openstack/nova-api-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.322155 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/827f0f62-0f25-4c2c-9b0b-b0233cecc48e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"827f0f62-0f25-4c2c-9b0b-b0233cecc48e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.322217 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/827f0f62-0f25-4c2c-9b0b-b0233cecc48e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"827f0f62-0f25-4c2c-9b0b-b0233cecc48e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.322253 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/991017af-a60a-4e0b-97ea-be0e196b6742-logs\") pod \"nova-api-0\" (UID: \"991017af-a60a-4e0b-97ea-be0e196b6742\") " pod="openstack/nova-api-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.322292 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76ccs\" (UniqueName: \"kubernetes.io/projected/827f0f62-0f25-4c2c-9b0b-b0233cecc48e-kube-api-access-76ccs\") pod \"nova-cell1-novncproxy-0\" (UID: \"827f0f62-0f25-4c2c-9b0b-b0233cecc48e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.335882 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.337793 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.342662 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.361851 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.364209 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.374624 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.377166 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.418446 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.424773 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/991017af-a60a-4e0b-97ea-be0e196b6742-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"991017af-a60a-4e0b-97ea-be0e196b6742\") " pod="openstack/nova-api-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.424848 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpscf\" (UniqueName: \"kubernetes.io/projected/991017af-a60a-4e0b-97ea-be0e196b6742-kube-api-access-gpscf\") pod \"nova-api-0\" (UID: \"991017af-a60a-4e0b-97ea-be0e196b6742\") " pod="openstack/nova-api-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.424874 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/991017af-a60a-4e0b-97ea-be0e196b6742-config-data\") pod \"nova-api-0\" (UID: \"991017af-a60a-4e0b-97ea-be0e196b6742\") " pod="openstack/nova-api-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.424901 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/827f0f62-0f25-4c2c-9b0b-b0233cecc48e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"827f0f62-0f25-4c2c-9b0b-b0233cecc48e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.424970 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/827f0f62-0f25-4c2c-9b0b-b0233cecc48e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"827f0f62-0f25-4c2c-9b0b-b0233cecc48e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.425002 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/991017af-a60a-4e0b-97ea-be0e196b6742-logs\") pod \"nova-api-0\" (UID: \"991017af-a60a-4e0b-97ea-be0e196b6742\") " pod="openstack/nova-api-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.425041 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76ccs\" (UniqueName: \"kubernetes.io/projected/827f0f62-0f25-4c2c-9b0b-b0233cecc48e-kube-api-access-76ccs\") pod \"nova-cell1-novncproxy-0\" (UID: \"827f0f62-0f25-4c2c-9b0b-b0233cecc48e\") " 
pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.428650 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/991017af-a60a-4e0b-97ea-be0e196b6742-logs\") pod \"nova-api-0\" (UID: \"991017af-a60a-4e0b-97ea-be0e196b6742\") " pod="openstack/nova-api-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.432281 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78cd565959-hcqg2"] Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.437685 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/827f0f62-0f25-4c2c-9b0b-b0233cecc48e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"827f0f62-0f25-4c2c-9b0b-b0233cecc48e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.440384 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78cd565959-hcqg2" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.440792 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/991017af-a60a-4e0b-97ea-be0e196b6742-config-data\") pod \"nova-api-0\" (UID: \"991017af-a60a-4e0b-97ea-be0e196b6742\") " pod="openstack/nova-api-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.459312 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-hcqg2"] Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.474898 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/991017af-a60a-4e0b-97ea-be0e196b6742-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"991017af-a60a-4e0b-97ea-be0e196b6742\") " pod="openstack/nova-api-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.479245 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/827f0f62-0f25-4c2c-9b0b-b0233cecc48e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"827f0f62-0f25-4c2c-9b0b-b0233cecc48e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.480447 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76ccs\" (UniqueName: \"kubernetes.io/projected/827f0f62-0f25-4c2c-9b0b-b0233cecc48e-kube-api-access-76ccs\") pod \"nova-cell1-novncproxy-0\" (UID: \"827f0f62-0f25-4c2c-9b0b-b0233cecc48e\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.485104 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpscf\" (UniqueName: \"kubernetes.io/projected/991017af-a60a-4e0b-97ea-be0e196b6742-kube-api-access-gpscf\") pod \"nova-api-0\" (UID: \"991017af-a60a-4e0b-97ea-be0e196b6742\") " pod="openstack/nova-api-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.527114 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36d70a3c-4782-4b4a-a8da-89cfff59cf41-logs\") pod \"nova-metadata-0\" (UID: \"36d70a3c-4782-4b4a-a8da-89cfff59cf41\") " pod="openstack/nova-metadata-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.528500 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-4fnrs\" (UniqueName: \"kubernetes.io/projected/36d70a3c-4782-4b4a-a8da-89cfff59cf41-kube-api-access-4fnrs\") pod \"nova-metadata-0\" (UID: \"36d70a3c-4782-4b4a-a8da-89cfff59cf41\") " pod="openstack/nova-metadata-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.528539 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf2d8\" (UniqueName: \"kubernetes.io/projected/e4e29fcd-c82a-4d32-ab2d-a115423a7e9a-kube-api-access-wf2d8\") pod \"nova-scheduler-0\" (UID: \"e4e29fcd-c82a-4d32-ab2d-a115423a7e9a\") " pod="openstack/nova-scheduler-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.528626 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36d70a3c-4782-4b4a-a8da-89cfff59cf41-config-data\") pod \"nova-metadata-0\" (UID: \"36d70a3c-4782-4b4a-a8da-89cfff59cf41\") " pod="openstack/nova-metadata-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.528661 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4e29fcd-c82a-4d32-ab2d-a115423a7e9a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e4e29fcd-c82a-4d32-ab2d-a115423a7e9a\") " pod="openstack/nova-scheduler-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.528824 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36d70a3c-4782-4b4a-a8da-89cfff59cf41-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"36d70a3c-4782-4b4a-a8da-89cfff59cf41\") " pod="openstack/nova-metadata-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.529208 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4e29fcd-c82a-4d32-ab2d-a115423a7e9a-config-data\") pod \"nova-scheduler-0\" (UID: \"e4e29fcd-c82a-4d32-ab2d-a115423a7e9a\") " pod="openstack/nova-scheduler-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.608800 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.630911 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e5ee1a03-d818-4e64-84d4-a742cbb51c50-dns-swift-storage-0\") pod \"dnsmasq-dns-78cd565959-hcqg2\" (UID: \"e5ee1a03-d818-4e64-84d4-a742cbb51c50\") " pod="openstack/dnsmasq-dns-78cd565959-hcqg2" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.630963 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e5ee1a03-d818-4e64-84d4-a742cbb51c50-ovsdbserver-nb\") pod \"dnsmasq-dns-78cd565959-hcqg2\" (UID: \"e5ee1a03-d818-4e64-84d4-a742cbb51c50\") " pod="openstack/dnsmasq-dns-78cd565959-hcqg2" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.631024 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36d70a3c-4782-4b4a-a8da-89cfff59cf41-config-data\") pod \"nova-metadata-0\" (UID: \"36d70a3c-4782-4b4a-a8da-89cfff59cf41\") " pod="openstack/nova-metadata-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.631073 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4e29fcd-c82a-4d32-ab2d-a115423a7e9a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e4e29fcd-c82a-4d32-ab2d-a115423a7e9a\") " pod="openstack/nova-scheduler-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.631118 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5ee1a03-d818-4e64-84d4-a742cbb51c50-ovsdbserver-sb\") pod \"dnsmasq-dns-78cd565959-hcqg2\" (UID: \"e5ee1a03-d818-4e64-84d4-a742cbb51c50\") " pod="openstack/dnsmasq-dns-78cd565959-hcqg2" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.631221 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36d70a3c-4782-4b4a-a8da-89cfff59cf41-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"36d70a3c-4782-4b4a-a8da-89cfff59cf41\") " pod="openstack/nova-metadata-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.631252 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5ee1a03-d818-4e64-84d4-a742cbb51c50-config\") pod \"dnsmasq-dns-78cd565959-hcqg2\" (UID: \"e5ee1a03-d818-4e64-84d4-a742cbb51c50\") " pod="openstack/dnsmasq-dns-78cd565959-hcqg2" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.631356 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4e29fcd-c82a-4d32-ab2d-a115423a7e9a-config-data\") pod \"nova-scheduler-0\" (UID: \"e4e29fcd-c82a-4d32-ab2d-a115423a7e9a\") " pod="openstack/nova-scheduler-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.631415 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36d70a3c-4782-4b4a-a8da-89cfff59cf41-logs\") pod \"nova-metadata-0\" (UID: \"36d70a3c-4782-4b4a-a8da-89cfff59cf41\") " pod="openstack/nova-metadata-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.631442 4806 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx2jj\" (UniqueName: \"kubernetes.io/projected/e5ee1a03-d818-4e64-84d4-a742cbb51c50-kube-api-access-rx2jj\") pod \"dnsmasq-dns-78cd565959-hcqg2\" (UID: \"e5ee1a03-d818-4e64-84d4-a742cbb51c50\") " pod="openstack/dnsmasq-dns-78cd565959-hcqg2" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.631480 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5ee1a03-d818-4e64-84d4-a742cbb51c50-dns-svc\") pod \"dnsmasq-dns-78cd565959-hcqg2\" (UID: \"e5ee1a03-d818-4e64-84d4-a742cbb51c50\") " pod="openstack/dnsmasq-dns-78cd565959-hcqg2" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.631517 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fnrs\" (UniqueName: \"kubernetes.io/projected/36d70a3c-4782-4b4a-a8da-89cfff59cf41-kube-api-access-4fnrs\") pod \"nova-metadata-0\" (UID: \"36d70a3c-4782-4b4a-a8da-89cfff59cf41\") " pod="openstack/nova-metadata-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.631543 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf2d8\" (UniqueName: \"kubernetes.io/projected/e4e29fcd-c82a-4d32-ab2d-a115423a7e9a-kube-api-access-wf2d8\") pod \"nova-scheduler-0\" (UID: \"e4e29fcd-c82a-4d32-ab2d-a115423a7e9a\") " pod="openstack/nova-scheduler-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.634508 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36d70a3c-4782-4b4a-a8da-89cfff59cf41-logs\") pod \"nova-metadata-0\" (UID: \"36d70a3c-4782-4b4a-a8da-89cfff59cf41\") " pod="openstack/nova-metadata-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.635565 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36d70a3c-4782-4b4a-a8da-89cfff59cf41-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"36d70a3c-4782-4b4a-a8da-89cfff59cf41\") " pod="openstack/nova-metadata-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.635956 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.638551 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4e29fcd-c82a-4d32-ab2d-a115423a7e9a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e4e29fcd-c82a-4d32-ab2d-a115423a7e9a\") " pod="openstack/nova-scheduler-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.639993 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36d70a3c-4782-4b4a-a8da-89cfff59cf41-config-data\") pod \"nova-metadata-0\" (UID: \"36d70a3c-4782-4b4a-a8da-89cfff59cf41\") " pod="openstack/nova-metadata-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.640011 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4e29fcd-c82a-4d32-ab2d-a115423a7e9a-config-data\") pod \"nova-scheduler-0\" (UID: \"e4e29fcd-c82a-4d32-ab2d-a115423a7e9a\") " pod="openstack/nova-scheduler-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.656909 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf2d8\" (UniqueName: \"kubernetes.io/projected/e4e29fcd-c82a-4d32-ab2d-a115423a7e9a-kube-api-access-wf2d8\") pod \"nova-scheduler-0\" (UID: \"e4e29fcd-c82a-4d32-ab2d-a115423a7e9a\") " pod="openstack/nova-scheduler-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.663428 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fnrs\" (UniqueName: \"kubernetes.io/projected/36d70a3c-4782-4b4a-a8da-89cfff59cf41-kube-api-access-4fnrs\") pod \"nova-metadata-0\" (UID: \"36d70a3c-4782-4b4a-a8da-89cfff59cf41\") " pod="openstack/nova-metadata-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.667999 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.706565 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.733435 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rx2jj\" (UniqueName: \"kubernetes.io/projected/e5ee1a03-d818-4e64-84d4-a742cbb51c50-kube-api-access-rx2jj\") pod \"dnsmasq-dns-78cd565959-hcqg2\" (UID: \"e5ee1a03-d818-4e64-84d4-a742cbb51c50\") " pod="openstack/dnsmasq-dns-78cd565959-hcqg2" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.733493 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5ee1a03-d818-4e64-84d4-a742cbb51c50-dns-svc\") pod \"dnsmasq-dns-78cd565959-hcqg2\" (UID: \"e5ee1a03-d818-4e64-84d4-a742cbb51c50\") " pod="openstack/dnsmasq-dns-78cd565959-hcqg2" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.733545 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e5ee1a03-d818-4e64-84d4-a742cbb51c50-dns-swift-storage-0\") pod \"dnsmasq-dns-78cd565959-hcqg2\" (UID: \"e5ee1a03-d818-4e64-84d4-a742cbb51c50\") " pod="openstack/dnsmasq-dns-78cd565959-hcqg2" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.733568 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e5ee1a03-d818-4e64-84d4-a742cbb51c50-ovsdbserver-nb\") pod \"dnsmasq-dns-78cd565959-hcqg2\" (UID: \"e5ee1a03-d818-4e64-84d4-a742cbb51c50\") " pod="openstack/dnsmasq-dns-78cd565959-hcqg2" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.733648 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5ee1a03-d818-4e64-84d4-a742cbb51c50-ovsdbserver-sb\") pod \"dnsmasq-dns-78cd565959-hcqg2\" (UID: \"e5ee1a03-d818-4e64-84d4-a742cbb51c50\") " pod="openstack/dnsmasq-dns-78cd565959-hcqg2" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.733731 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5ee1a03-d818-4e64-84d4-a742cbb51c50-config\") pod \"dnsmasq-dns-78cd565959-hcqg2\" (UID: \"e5ee1a03-d818-4e64-84d4-a742cbb51c50\") " pod="openstack/dnsmasq-dns-78cd565959-hcqg2" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.734964 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e5ee1a03-d818-4e64-84d4-a742cbb51c50-ovsdbserver-nb\") pod \"dnsmasq-dns-78cd565959-hcqg2\" (UID: \"e5ee1a03-d818-4e64-84d4-a742cbb51c50\") " pod="openstack/dnsmasq-dns-78cd565959-hcqg2" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.735025 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e5ee1a03-d818-4e64-84d4-a742cbb51c50-dns-swift-storage-0\") pod \"dnsmasq-dns-78cd565959-hcqg2\" (UID: \"e5ee1a03-d818-4e64-84d4-a742cbb51c50\") " pod="openstack/dnsmasq-dns-78cd565959-hcqg2" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.735177 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5ee1a03-d818-4e64-84d4-a742cbb51c50-ovsdbserver-sb\") pod \"dnsmasq-dns-78cd565959-hcqg2\" (UID: \"e5ee1a03-d818-4e64-84d4-a742cbb51c50\") " pod="openstack/dnsmasq-dns-78cd565959-hcqg2" Nov 25 15:18:01 
crc kubenswrapper[4806]: I1125 15:18:01.735180 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5ee1a03-d818-4e64-84d4-a742cbb51c50-dns-svc\") pod \"dnsmasq-dns-78cd565959-hcqg2\" (UID: \"e5ee1a03-d818-4e64-84d4-a742cbb51c50\") " pod="openstack/dnsmasq-dns-78cd565959-hcqg2" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.735245 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5ee1a03-d818-4e64-84d4-a742cbb51c50-config\") pod \"dnsmasq-dns-78cd565959-hcqg2\" (UID: \"e5ee1a03-d818-4e64-84d4-a742cbb51c50\") " pod="openstack/dnsmasq-dns-78cd565959-hcqg2" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.767597 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rx2jj\" (UniqueName: \"kubernetes.io/projected/e5ee1a03-d818-4e64-84d4-a742cbb51c50-kube-api-access-rx2jj\") pod \"dnsmasq-dns-78cd565959-hcqg2\" (UID: \"e5ee1a03-d818-4e64-84d4-a742cbb51c50\") " pod="openstack/dnsmasq-dns-78cd565959-hcqg2" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.810596 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78cd565959-hcqg2" Nov 25 15:18:01 crc kubenswrapper[4806]: I1125 15:18:01.881019 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-rccfb"] Nov 25 15:18:02 crc kubenswrapper[4806]: I1125 15:18:02.076669 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9lkf4"] Nov 25 15:18:02 crc kubenswrapper[4806]: I1125 15:18:02.082992 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9lkf4" Nov 25 15:18:02 crc kubenswrapper[4806]: I1125 15:18:02.086583 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 25 15:18:02 crc kubenswrapper[4806]: I1125 15:18:02.086737 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 25 15:18:02 crc kubenswrapper[4806]: I1125 15:18:02.133615 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-rccfb" event={"ID":"9aabac61-808c-46a6-9cc1-e021cb244241","Type":"ContainerStarted","Data":"432da7efeff0ef107d7468993c8c919f8dc7ec307357bc0304acdcb356f71d2d"} Nov 25 15:18:02 crc kubenswrapper[4806]: I1125 15:18:02.133659 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9lkf4"] Nov 25 15:18:02 crc kubenswrapper[4806]: I1125 15:18:02.261734 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff83e435-76c3-4d0e-8887-a3c5fc1ea65c-config-data\") pod \"nova-cell1-conductor-db-sync-9lkf4\" (UID: \"ff83e435-76c3-4d0e-8887-a3c5fc1ea65c\") " pod="openstack/nova-cell1-conductor-db-sync-9lkf4" Nov 25 15:18:02 crc kubenswrapper[4806]: I1125 15:18:02.262083 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff83e435-76c3-4d0e-8887-a3c5fc1ea65c-scripts\") pod \"nova-cell1-conductor-db-sync-9lkf4\" (UID: \"ff83e435-76c3-4d0e-8887-a3c5fc1ea65c\") " pod="openstack/nova-cell1-conductor-db-sync-9lkf4" Nov 25 15:18:02 crc kubenswrapper[4806]: I1125 15:18:02.262380 4806 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2rz8\" (UniqueName: \"kubernetes.io/projected/ff83e435-76c3-4d0e-8887-a3c5fc1ea65c-kube-api-access-p2rz8\") pod \"nova-cell1-conductor-db-sync-9lkf4\" (UID: \"ff83e435-76c3-4d0e-8887-a3c5fc1ea65c\") " pod="openstack/nova-cell1-conductor-db-sync-9lkf4" Nov 25 15:18:02 crc kubenswrapper[4806]: I1125 15:18:02.262455 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff83e435-76c3-4d0e-8887-a3c5fc1ea65c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9lkf4\" (UID: \"ff83e435-76c3-4d0e-8887-a3c5fc1ea65c\") " pod="openstack/nova-cell1-conductor-db-sync-9lkf4" Nov 25 15:18:02 crc kubenswrapper[4806]: I1125 15:18:02.272822 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 15:18:02 crc kubenswrapper[4806]: I1125 15:18:02.367765 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2rz8\" (UniqueName: \"kubernetes.io/projected/ff83e435-76c3-4d0e-8887-a3c5fc1ea65c-kube-api-access-p2rz8\") pod \"nova-cell1-conductor-db-sync-9lkf4\" (UID: \"ff83e435-76c3-4d0e-8887-a3c5fc1ea65c\") " pod="openstack/nova-cell1-conductor-db-sync-9lkf4" Nov 25 15:18:02 crc kubenswrapper[4806]: I1125 15:18:02.367873 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff83e435-76c3-4d0e-8887-a3c5fc1ea65c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9lkf4\" (UID: \"ff83e435-76c3-4d0e-8887-a3c5fc1ea65c\") " pod="openstack/nova-cell1-conductor-db-sync-9lkf4" Nov 25 15:18:02 crc kubenswrapper[4806]: I1125 15:18:02.368064 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff83e435-76c3-4d0e-8887-a3c5fc1ea65c-config-data\") pod \"nova-cell1-conductor-db-sync-9lkf4\" (UID: \"ff83e435-76c3-4d0e-8887-a3c5fc1ea65c\") " pod="openstack/nova-cell1-conductor-db-sync-9lkf4" Nov 25 15:18:02 crc kubenswrapper[4806]: I1125 15:18:02.368144 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff83e435-76c3-4d0e-8887-a3c5fc1ea65c-scripts\") pod \"nova-cell1-conductor-db-sync-9lkf4\" (UID: \"ff83e435-76c3-4d0e-8887-a3c5fc1ea65c\") " pod="openstack/nova-cell1-conductor-db-sync-9lkf4" Nov 25 15:18:02 crc kubenswrapper[4806]: I1125 15:18:02.374731 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff83e435-76c3-4d0e-8887-a3c5fc1ea65c-scripts\") pod \"nova-cell1-conductor-db-sync-9lkf4\" (UID: \"ff83e435-76c3-4d0e-8887-a3c5fc1ea65c\") " pod="openstack/nova-cell1-conductor-db-sync-9lkf4" Nov 25 15:18:02 crc kubenswrapper[4806]: I1125 15:18:02.381275 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff83e435-76c3-4d0e-8887-a3c5fc1ea65c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-9lkf4\" (UID: \"ff83e435-76c3-4d0e-8887-a3c5fc1ea65c\") " pod="openstack/nova-cell1-conductor-db-sync-9lkf4" Nov 25 15:18:02 crc kubenswrapper[4806]: I1125 15:18:02.387936 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff83e435-76c3-4d0e-8887-a3c5fc1ea65c-config-data\") pod 
\"nova-cell1-conductor-db-sync-9lkf4\" (UID: \"ff83e435-76c3-4d0e-8887-a3c5fc1ea65c\") " pod="openstack/nova-cell1-conductor-db-sync-9lkf4" Nov 25 15:18:02 crc kubenswrapper[4806]: I1125 15:18:02.391010 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2rz8\" (UniqueName: \"kubernetes.io/projected/ff83e435-76c3-4d0e-8887-a3c5fc1ea65c-kube-api-access-p2rz8\") pod \"nova-cell1-conductor-db-sync-9lkf4\" (UID: \"ff83e435-76c3-4d0e-8887-a3c5fc1ea65c\") " pod="openstack/nova-cell1-conductor-db-sync-9lkf4" Nov 25 15:18:02 crc kubenswrapper[4806]: I1125 15:18:02.437138 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9lkf4" Nov 25 15:18:02 crc kubenswrapper[4806]: I1125 15:18:02.494072 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 15:18:02 crc kubenswrapper[4806]: W1125 15:18:02.515118 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36d70a3c_4782_4b4a_a8da_89cfff59cf41.slice/crio-20728b6653f7ff7ac15f5dd72d35890d361cf0c506529564b9c1fdc977d5ffe8 WatchSource:0}: Error finding container 20728b6653f7ff7ac15f5dd72d35890d361cf0c506529564b9c1fdc977d5ffe8: Status 404 returned error can't find the container with id 20728b6653f7ff7ac15f5dd72d35890d361cf0c506529564b9c1fdc977d5ffe8 Nov 25 15:18:02 crc kubenswrapper[4806]: I1125 15:18:02.515706 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 15:18:02 crc kubenswrapper[4806]: I1125 15:18:02.823201 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 15:18:02 crc kubenswrapper[4806]: I1125 15:18:02.874760 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-hcqg2"] Nov 25 15:18:03 crc kubenswrapper[4806]: I1125 15:18:03.192013 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9lkf4"] Nov 25 15:18:03 crc kubenswrapper[4806]: I1125 15:18:03.212629 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-rccfb" event={"ID":"9aabac61-808c-46a6-9cc1-e021cb244241","Type":"ContainerStarted","Data":"8c5c302b90e501f8da855eb275d7729b3f90dedee5d5951e19c86fdc61b99866"} Nov 25 15:18:03 crc kubenswrapper[4806]: W1125 15:18:03.214760 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff83e435_76c3_4d0e_8887_a3c5fc1ea65c.slice/crio-ba7f381b0ab8d083a6948f5a9927e8229498347e3866eedc7225beec284d67d4 WatchSource:0}: Error finding container ba7f381b0ab8d083a6948f5a9927e8229498347e3866eedc7225beec284d67d4: Status 404 returned error can't find the container with id ba7f381b0ab8d083a6948f5a9927e8229498347e3866eedc7225beec284d67d4 Nov 25 15:18:03 crc kubenswrapper[4806]: I1125 15:18:03.218714 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"36d70a3c-4782-4b4a-a8da-89cfff59cf41","Type":"ContainerStarted","Data":"20728b6653f7ff7ac15f5dd72d35890d361cf0c506529564b9c1fdc977d5ffe8"} Nov 25 15:18:03 crc kubenswrapper[4806]: I1125 15:18:03.228239 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-rccfb" podStartSLOduration=3.228222449 podStartE2EDuration="3.228222449s" podCreationTimestamp="2025-11-25 15:18:00 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:18:03.225269665 +0000 UTC m=+1515.877412076" watchObservedRunningTime="2025-11-25 15:18:03.228222449 +0000 UTC m=+1515.880364870" Nov 25 15:18:03 crc kubenswrapper[4806]: I1125 15:18:03.235179 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"827f0f62-0f25-4c2c-9b0b-b0233cecc48e","Type":"ContainerStarted","Data":"f22b8f11f6bafd7b837620a76ff76da752e4bd8b694af58b96b2b79e9c94b929"} Nov 25 15:18:03 crc kubenswrapper[4806]: I1125 15:18:03.239422 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e4e29fcd-c82a-4d32-ab2d-a115423a7e9a","Type":"ContainerStarted","Data":"f795836b3f7e2e1d5cac31ad4c6b9173db9176dae49e0d3c89fb4f770b398cc7"} Nov 25 15:18:03 crc kubenswrapper[4806]: I1125 15:18:03.248678 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-hcqg2" event={"ID":"e5ee1a03-d818-4e64-84d4-a742cbb51c50","Type":"ContainerStarted","Data":"eada171e1e48479ef9ff931798b75a10fab9d79680d414d894fe025687b542ad"} Nov 25 15:18:03 crc kubenswrapper[4806]: I1125 15:18:03.255126 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"991017af-a60a-4e0b-97ea-be0e196b6742","Type":"ContainerStarted","Data":"8807d939d090201eaca5732b3ea337780c4aa5f67c54df90e38e6c2f32016afe"} Nov 25 15:18:04 crc kubenswrapper[4806]: I1125 15:18:04.278815 4806 generic.go:334] "Generic (PLEG): container finished" podID="e5ee1a03-d818-4e64-84d4-a742cbb51c50" containerID="682701d0db2b15f949f19751787840443b9e053f3f775f2ee00da94f8bb493f2" exitCode=0 Nov 25 15:18:04 crc kubenswrapper[4806]: I1125 15:18:04.278926 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-hcqg2" event={"ID":"e5ee1a03-d818-4e64-84d4-a742cbb51c50","Type":"ContainerDied","Data":"682701d0db2b15f949f19751787840443b9e053f3f775f2ee00da94f8bb493f2"} Nov 25 15:18:04 crc kubenswrapper[4806]: I1125 15:18:04.289532 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9lkf4" event={"ID":"ff83e435-76c3-4d0e-8887-a3c5fc1ea65c","Type":"ContainerStarted","Data":"3581d6c2acccaef0de95a7122d2e608df6ae0a81a11ff2636f1f7ce978937ac9"} Nov 25 15:18:04 crc kubenswrapper[4806]: I1125 15:18:04.289640 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9lkf4" event={"ID":"ff83e435-76c3-4d0e-8887-a3c5fc1ea65c","Type":"ContainerStarted","Data":"ba7f381b0ab8d083a6948f5a9927e8229498347e3866eedc7225beec284d67d4"} Nov 25 15:18:04 crc kubenswrapper[4806]: I1125 15:18:04.332778 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-9lkf4" podStartSLOduration=2.331545039 podStartE2EDuration="2.331545039s" podCreationTimestamp="2025-11-25 15:18:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:18:04.320895294 +0000 UTC m=+1516.973037715" watchObservedRunningTime="2025-11-25 15:18:04.331545039 +0000 UTC m=+1516.983687450" Nov 25 15:18:05 crc kubenswrapper[4806]: I1125 15:18:05.058020 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 15:18:05 crc kubenswrapper[4806]: I1125 15:18:05.073062 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-metadata-0"] Nov 25 15:18:07 crc kubenswrapper[4806]: I1125 15:18:07.322932 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"36d70a3c-4782-4b4a-a8da-89cfff59cf41","Type":"ContainerStarted","Data":"f0432f1aad9274a36760c8e88ade17e9aa79449723fb51c4959722204db12ad4"} Nov 25 15:18:07 crc kubenswrapper[4806]: I1125 15:18:07.323624 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"36d70a3c-4782-4b4a-a8da-89cfff59cf41","Type":"ContainerStarted","Data":"2679f0098c99135e253faefe284b114a25ed628af8971f7f93b3f803f4c2fcc1"} Nov 25 15:18:07 crc kubenswrapper[4806]: I1125 15:18:07.323033 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="36d70a3c-4782-4b4a-a8da-89cfff59cf41" containerName="nova-metadata-log" containerID="cri-o://2679f0098c99135e253faefe284b114a25ed628af8971f7f93b3f803f4c2fcc1" gracePeriod=30 Nov 25 15:18:07 crc kubenswrapper[4806]: I1125 15:18:07.323718 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="36d70a3c-4782-4b4a-a8da-89cfff59cf41" containerName="nova-metadata-metadata" containerID="cri-o://f0432f1aad9274a36760c8e88ade17e9aa79449723fb51c4959722204db12ad4" gracePeriod=30 Nov 25 15:18:07 crc kubenswrapper[4806]: I1125 15:18:07.330280 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"827f0f62-0f25-4c2c-9b0b-b0233cecc48e","Type":"ContainerStarted","Data":"b947ddc8cce612c9e97dd1a056538aa65cc81fbdcf53f5d04d73fecc46802437"} Nov 25 15:18:07 crc kubenswrapper[4806]: I1125 15:18:07.330445 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="827f0f62-0f25-4c2c-9b0b-b0233cecc48e" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://b947ddc8cce612c9e97dd1a056538aa65cc81fbdcf53f5d04d73fecc46802437" gracePeriod=30 Nov 25 15:18:07 crc kubenswrapper[4806]: I1125 15:18:07.333947 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e4e29fcd-c82a-4d32-ab2d-a115423a7e9a","Type":"ContainerStarted","Data":"dfca2f3053c56682e13b15887698dca9ed016fa7d520b0d98a88d2b379fbb492"} Nov 25 15:18:07 crc kubenswrapper[4806]: I1125 15:18:07.336761 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-hcqg2" event={"ID":"e5ee1a03-d818-4e64-84d4-a742cbb51c50","Type":"ContainerStarted","Data":"7c185c509fb62faef23709ffdf315342020b04abe733d2d1b719d898488b3973"} Nov 25 15:18:07 crc kubenswrapper[4806]: I1125 15:18:07.336919 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78cd565959-hcqg2" Nov 25 15:18:07 crc kubenswrapper[4806]: I1125 15:18:07.339010 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"991017af-a60a-4e0b-97ea-be0e196b6742","Type":"ContainerStarted","Data":"8021863560c5f7e246a3ee2feada5957325641a9273d253c13971fdb8fbde77e"} Nov 25 15:18:07 crc kubenswrapper[4806]: I1125 15:18:07.339047 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"991017af-a60a-4e0b-97ea-be0e196b6742","Type":"ContainerStarted","Data":"c6a491398ed58a718bf86110a018b8e86cdd981d687f524d47de02431d23fd7f"} Nov 25 15:18:07 crc kubenswrapper[4806]: I1125 15:18:07.359415 4806 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.317730027 podStartE2EDuration="6.359397695s" podCreationTimestamp="2025-11-25 15:18:01 +0000 UTC" firstStartedPulling="2025-11-25 15:18:02.519143319 +0000 UTC m=+1515.171285730" lastFinishedPulling="2025-11-25 15:18:06.560810977 +0000 UTC m=+1519.212953398" observedRunningTime="2025-11-25 15:18:07.353712722 +0000 UTC m=+1520.005855153" watchObservedRunningTime="2025-11-25 15:18:07.359397695 +0000 UTC m=+1520.011540106" Nov 25 15:18:07 crc kubenswrapper[4806]: I1125 15:18:07.384159 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.097757737 podStartE2EDuration="6.384141915s" podCreationTimestamp="2025-11-25 15:18:01 +0000 UTC" firstStartedPulling="2025-11-25 15:18:02.274293295 +0000 UTC m=+1514.926435706" lastFinishedPulling="2025-11-25 15:18:06.560677473 +0000 UTC m=+1519.212819884" observedRunningTime="2025-11-25 15:18:07.377409452 +0000 UTC m=+1520.029551863" watchObservedRunningTime="2025-11-25 15:18:07.384141915 +0000 UTC m=+1520.036284326" Nov 25 15:18:07 crc kubenswrapper[4806]: I1125 15:18:07.406817 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.358188968 podStartE2EDuration="6.406795775s" podCreationTimestamp="2025-11-25 15:18:01 +0000 UTC" firstStartedPulling="2025-11-25 15:18:02.505698723 +0000 UTC m=+1515.157841134" lastFinishedPulling="2025-11-25 15:18:06.55430552 +0000 UTC m=+1519.206447941" observedRunningTime="2025-11-25 15:18:07.395965974 +0000 UTC m=+1520.048108385" watchObservedRunningTime="2025-11-25 15:18:07.406795775 +0000 UTC m=+1520.058938186" Nov 25 15:18:07 crc kubenswrapper[4806]: I1125 15:18:07.427249 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78cd565959-hcqg2" podStartSLOduration=6.427234241 podStartE2EDuration="6.427234241s" podCreationTimestamp="2025-11-25 15:18:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:18:07.42510296 +0000 UTC m=+1520.077245381" watchObservedRunningTime="2025-11-25 15:18:07.427234241 +0000 UTC m=+1520.079376642" Nov 25 15:18:07 crc kubenswrapper[4806]: I1125 15:18:07.456964 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.744766956 podStartE2EDuration="6.456941443s" podCreationTimestamp="2025-11-25 15:18:01 +0000 UTC" firstStartedPulling="2025-11-25 15:18:02.848524537 +0000 UTC m=+1515.500666948" lastFinishedPulling="2025-11-25 15:18:06.560699024 +0000 UTC m=+1519.212841435" observedRunningTime="2025-11-25 15:18:07.447689468 +0000 UTC m=+1520.099831889" watchObservedRunningTime="2025-11-25 15:18:07.456941443 +0000 UTC m=+1520.109083854" Nov 25 15:18:08 crc kubenswrapper[4806]: I1125 15:18:08.352028 4806 generic.go:334] "Generic (PLEG): container finished" podID="36d70a3c-4782-4b4a-a8da-89cfff59cf41" containerID="2679f0098c99135e253faefe284b114a25ed628af8971f7f93b3f803f4c2fcc1" exitCode=143 Nov 25 15:18:08 crc kubenswrapper[4806]: I1125 15:18:08.353221 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"36d70a3c-4782-4b4a-a8da-89cfff59cf41","Type":"ContainerDied","Data":"2679f0098c99135e253faefe284b114a25ed628af8971f7f93b3f803f4c2fcc1"} Nov 25 15:18:11 crc kubenswrapper[4806]: I1125 15:18:11.609057 4806 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 15:18:11 crc kubenswrapper[4806]: I1125 15:18:11.610912 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 15:18:11 crc kubenswrapper[4806]: I1125 15:18:11.637280 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:18:11 crc kubenswrapper[4806]: I1125 15:18:11.668266 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 15:18:11 crc kubenswrapper[4806]: I1125 15:18:11.668355 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 15:18:11 crc kubenswrapper[4806]: I1125 15:18:11.707369 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 25 15:18:11 crc kubenswrapper[4806]: I1125 15:18:11.707430 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 25 15:18:11 crc kubenswrapper[4806]: I1125 15:18:11.746217 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 25 15:18:12 crc kubenswrapper[4806]: I1125 15:18:12.411080 4806 generic.go:334] "Generic (PLEG): container finished" podID="9aabac61-808c-46a6-9cc1-e021cb244241" containerID="8c5c302b90e501f8da855eb275d7729b3f90dedee5d5951e19c86fdc61b99866" exitCode=0 Nov 25 15:18:12 crc kubenswrapper[4806]: I1125 15:18:12.411183 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-rccfb" event={"ID":"9aabac61-808c-46a6-9cc1-e021cb244241","Type":"ContainerDied","Data":"8c5c302b90e501f8da855eb275d7729b3f90dedee5d5951e19c86fdc61b99866"} Nov 25 15:18:12 crc kubenswrapper[4806]: I1125 15:18:12.446549 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 25 15:18:12 crc kubenswrapper[4806]: I1125 15:18:12.691707 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="991017af-a60a-4e0b-97ea-be0e196b6742" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.208:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 15:18:12 crc kubenswrapper[4806]: I1125 15:18:12.691728 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="991017af-a60a-4e0b-97ea-be0e196b6742" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.208:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 15:18:13 crc kubenswrapper[4806]: I1125 15:18:13.425522 4806 generic.go:334] "Generic (PLEG): container finished" podID="ff83e435-76c3-4d0e-8887-a3c5fc1ea65c" containerID="3581d6c2acccaef0de95a7122d2e608df6ae0a81a11ff2636f1f7ce978937ac9" exitCode=0 Nov 25 15:18:13 crc kubenswrapper[4806]: I1125 15:18:13.427139 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9lkf4" event={"ID":"ff83e435-76c3-4d0e-8887-a3c5fc1ea65c","Type":"ContainerDied","Data":"3581d6c2acccaef0de95a7122d2e608df6ae0a81a11ff2636f1f7ce978937ac9"} Nov 25 15:18:13 crc kubenswrapper[4806]: I1125 15:18:13.906070 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-rccfb" Nov 25 15:18:14 crc kubenswrapper[4806]: I1125 15:18:14.079221 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9aabac61-808c-46a6-9cc1-e021cb244241-config-data\") pod \"9aabac61-808c-46a6-9cc1-e021cb244241\" (UID: \"9aabac61-808c-46a6-9cc1-e021cb244241\") " Nov 25 15:18:14 crc kubenswrapper[4806]: I1125 15:18:14.079641 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9aabac61-808c-46a6-9cc1-e021cb244241-combined-ca-bundle\") pod \"9aabac61-808c-46a6-9cc1-e021cb244241\" (UID: \"9aabac61-808c-46a6-9cc1-e021cb244241\") " Nov 25 15:18:14 crc kubenswrapper[4806]: I1125 15:18:14.079895 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tglj9\" (UniqueName: \"kubernetes.io/projected/9aabac61-808c-46a6-9cc1-e021cb244241-kube-api-access-tglj9\") pod \"9aabac61-808c-46a6-9cc1-e021cb244241\" (UID: \"9aabac61-808c-46a6-9cc1-e021cb244241\") " Nov 25 15:18:14 crc kubenswrapper[4806]: I1125 15:18:14.080034 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9aabac61-808c-46a6-9cc1-e021cb244241-scripts\") pod \"9aabac61-808c-46a6-9cc1-e021cb244241\" (UID: \"9aabac61-808c-46a6-9cc1-e021cb244241\") " Nov 25 15:18:14 crc kubenswrapper[4806]: I1125 15:18:14.090347 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9aabac61-808c-46a6-9cc1-e021cb244241-scripts" (OuterVolumeSpecName: "scripts") pod "9aabac61-808c-46a6-9cc1-e021cb244241" (UID: "9aabac61-808c-46a6-9cc1-e021cb244241"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:18:14 crc kubenswrapper[4806]: I1125 15:18:14.094859 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9aabac61-808c-46a6-9cc1-e021cb244241-kube-api-access-tglj9" (OuterVolumeSpecName: "kube-api-access-tglj9") pod "9aabac61-808c-46a6-9cc1-e021cb244241" (UID: "9aabac61-808c-46a6-9cc1-e021cb244241"). InnerVolumeSpecName "kube-api-access-tglj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:18:14 crc kubenswrapper[4806]: I1125 15:18:14.134569 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9aabac61-808c-46a6-9cc1-e021cb244241-config-data" (OuterVolumeSpecName: "config-data") pod "9aabac61-808c-46a6-9cc1-e021cb244241" (UID: "9aabac61-808c-46a6-9cc1-e021cb244241"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:18:14 crc kubenswrapper[4806]: I1125 15:18:14.160870 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9aabac61-808c-46a6-9cc1-e021cb244241-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9aabac61-808c-46a6-9cc1-e021cb244241" (UID: "9aabac61-808c-46a6-9cc1-e021cb244241"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:18:14 crc kubenswrapper[4806]: I1125 15:18:14.184151 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tglj9\" (UniqueName: \"kubernetes.io/projected/9aabac61-808c-46a6-9cc1-e021cb244241-kube-api-access-tglj9\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:14 crc kubenswrapper[4806]: I1125 15:18:14.184189 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9aabac61-808c-46a6-9cc1-e021cb244241-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:14 crc kubenswrapper[4806]: I1125 15:18:14.184203 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9aabac61-808c-46a6-9cc1-e021cb244241-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:14 crc kubenswrapper[4806]: I1125 15:18:14.184214 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9aabac61-808c-46a6-9cc1-e021cb244241-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:14 crc kubenswrapper[4806]: I1125 15:18:14.262507 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 25 15:18:14 crc kubenswrapper[4806]: I1125 15:18:14.436560 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-rccfb" event={"ID":"9aabac61-808c-46a6-9cc1-e021cb244241","Type":"ContainerDied","Data":"432da7efeff0ef107d7468993c8c919f8dc7ec307357bc0304acdcb356f71d2d"} Nov 25 15:18:14 crc kubenswrapper[4806]: I1125 15:18:14.436620 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="432da7efeff0ef107d7468993c8c919f8dc7ec307357bc0304acdcb356f71d2d" Nov 25 15:18:14 crc kubenswrapper[4806]: I1125 15:18:14.436575 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-rccfb" Nov 25 15:18:14 crc kubenswrapper[4806]: I1125 15:18:14.689495 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 15:18:14 crc kubenswrapper[4806]: I1125 15:18:14.689741 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="991017af-a60a-4e0b-97ea-be0e196b6742" containerName="nova-api-log" containerID="cri-o://c6a491398ed58a718bf86110a018b8e86cdd981d687f524d47de02431d23fd7f" gracePeriod=30 Nov 25 15:18:14 crc kubenswrapper[4806]: I1125 15:18:14.689914 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="991017af-a60a-4e0b-97ea-be0e196b6742" containerName="nova-api-api" containerID="cri-o://8021863560c5f7e246a3ee2feada5957325641a9273d253c13971fdb8fbde77e" gracePeriod=30 Nov 25 15:18:14 crc kubenswrapper[4806]: I1125 15:18:14.767662 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 15:18:14 crc kubenswrapper[4806]: I1125 15:18:14.767950 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="e4e29fcd-c82a-4d32-ab2d-a115423a7e9a" containerName="nova-scheduler-scheduler" containerID="cri-o://dfca2f3053c56682e13b15887698dca9ed016fa7d520b0d98a88d2b379fbb492" gracePeriod=30 Nov 25 15:18:15 crc kubenswrapper[4806]: I1125 15:18:15.063203 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9lkf4" Nov 25 15:18:15 crc kubenswrapper[4806]: I1125 15:18:15.227171 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff83e435-76c3-4d0e-8887-a3c5fc1ea65c-scripts\") pod \"ff83e435-76c3-4d0e-8887-a3c5fc1ea65c\" (UID: \"ff83e435-76c3-4d0e-8887-a3c5fc1ea65c\") " Nov 25 15:18:15 crc kubenswrapper[4806]: I1125 15:18:15.227554 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff83e435-76c3-4d0e-8887-a3c5fc1ea65c-combined-ca-bundle\") pod \"ff83e435-76c3-4d0e-8887-a3c5fc1ea65c\" (UID: \"ff83e435-76c3-4d0e-8887-a3c5fc1ea65c\") " Nov 25 15:18:15 crc kubenswrapper[4806]: I1125 15:18:15.227741 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff83e435-76c3-4d0e-8887-a3c5fc1ea65c-config-data\") pod \"ff83e435-76c3-4d0e-8887-a3c5fc1ea65c\" (UID: \"ff83e435-76c3-4d0e-8887-a3c5fc1ea65c\") " Nov 25 15:18:15 crc kubenswrapper[4806]: I1125 15:18:15.227853 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2rz8\" (UniqueName: \"kubernetes.io/projected/ff83e435-76c3-4d0e-8887-a3c5fc1ea65c-kube-api-access-p2rz8\") pod \"ff83e435-76c3-4d0e-8887-a3c5fc1ea65c\" (UID: \"ff83e435-76c3-4d0e-8887-a3c5fc1ea65c\") " Nov 25 15:18:15 crc kubenswrapper[4806]: I1125 15:18:15.235661 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff83e435-76c3-4d0e-8887-a3c5fc1ea65c-kube-api-access-p2rz8" (OuterVolumeSpecName: "kube-api-access-p2rz8") pod "ff83e435-76c3-4d0e-8887-a3c5fc1ea65c" (UID: "ff83e435-76c3-4d0e-8887-a3c5fc1ea65c"). InnerVolumeSpecName "kube-api-access-p2rz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:18:15 crc kubenswrapper[4806]: I1125 15:18:15.257557 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff83e435-76c3-4d0e-8887-a3c5fc1ea65c-scripts" (OuterVolumeSpecName: "scripts") pod "ff83e435-76c3-4d0e-8887-a3c5fc1ea65c" (UID: "ff83e435-76c3-4d0e-8887-a3c5fc1ea65c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:18:15 crc kubenswrapper[4806]: I1125 15:18:15.262779 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff83e435-76c3-4d0e-8887-a3c5fc1ea65c-config-data" (OuterVolumeSpecName: "config-data") pod "ff83e435-76c3-4d0e-8887-a3c5fc1ea65c" (UID: "ff83e435-76c3-4d0e-8887-a3c5fc1ea65c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:18:15 crc kubenswrapper[4806]: I1125 15:18:15.283730 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff83e435-76c3-4d0e-8887-a3c5fc1ea65c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff83e435-76c3-4d0e-8887-a3c5fc1ea65c" (UID: "ff83e435-76c3-4d0e-8887-a3c5fc1ea65c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:18:15 crc kubenswrapper[4806]: I1125 15:18:15.330136 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff83e435-76c3-4d0e-8887-a3c5fc1ea65c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:15 crc kubenswrapper[4806]: I1125 15:18:15.330456 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff83e435-76c3-4d0e-8887-a3c5fc1ea65c-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:15 crc kubenswrapper[4806]: I1125 15:18:15.330521 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2rz8\" (UniqueName: \"kubernetes.io/projected/ff83e435-76c3-4d0e-8887-a3c5fc1ea65c-kube-api-access-p2rz8\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:15 crc kubenswrapper[4806]: I1125 15:18:15.330586 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff83e435-76c3-4d0e-8887-a3c5fc1ea65c-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:15 crc kubenswrapper[4806]: I1125 15:18:15.447705 4806 generic.go:334] "Generic (PLEG): container finished" podID="991017af-a60a-4e0b-97ea-be0e196b6742" containerID="c6a491398ed58a718bf86110a018b8e86cdd981d687f524d47de02431d23fd7f" exitCode=143 Nov 25 15:18:15 crc kubenswrapper[4806]: I1125 15:18:15.447803 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"991017af-a60a-4e0b-97ea-be0e196b6742","Type":"ContainerDied","Data":"c6a491398ed58a718bf86110a018b8e86cdd981d687f524d47de02431d23fd7f"} Nov 25 15:18:15 crc kubenswrapper[4806]: I1125 15:18:15.449887 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-9lkf4" event={"ID":"ff83e435-76c3-4d0e-8887-a3c5fc1ea65c","Type":"ContainerDied","Data":"ba7f381b0ab8d083a6948f5a9927e8229498347e3866eedc7225beec284d67d4"} Nov 25 15:18:15 crc kubenswrapper[4806]: I1125 15:18:15.450015 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba7f381b0ab8d083a6948f5a9927e8229498347e3866eedc7225beec284d67d4" Nov 25 15:18:15 crc kubenswrapper[4806]: I1125 15:18:15.449947 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-9lkf4" Nov 25 15:18:15 crc kubenswrapper[4806]: I1125 15:18:15.537191 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 25 15:18:15 crc kubenswrapper[4806]: E1125 15:18:15.537717 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff83e435-76c3-4d0e-8887-a3c5fc1ea65c" containerName="nova-cell1-conductor-db-sync" Nov 25 15:18:15 crc kubenswrapper[4806]: I1125 15:18:15.537737 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff83e435-76c3-4d0e-8887-a3c5fc1ea65c" containerName="nova-cell1-conductor-db-sync" Nov 25 15:18:15 crc kubenswrapper[4806]: E1125 15:18:15.537759 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aabac61-808c-46a6-9cc1-e021cb244241" containerName="nova-manage" Nov 25 15:18:15 crc kubenswrapper[4806]: I1125 15:18:15.537768 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aabac61-808c-46a6-9cc1-e021cb244241" containerName="nova-manage" Nov 25 15:18:15 crc kubenswrapper[4806]: I1125 15:18:15.538018 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="9aabac61-808c-46a6-9cc1-e021cb244241" containerName="nova-manage" Nov 25 15:18:15 crc kubenswrapper[4806]: I1125 15:18:15.538045 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff83e435-76c3-4d0e-8887-a3c5fc1ea65c" containerName="nova-cell1-conductor-db-sync" Nov 25 15:18:15 crc kubenswrapper[4806]: I1125 15:18:15.539034 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 25 15:18:15 crc kubenswrapper[4806]: I1125 15:18:15.542018 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 25 15:18:15 crc kubenswrapper[4806]: I1125 15:18:15.572030 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 25 15:18:15 crc kubenswrapper[4806]: I1125 15:18:15.635863 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kb8s\" (UniqueName: \"kubernetes.io/projected/d3f3eddf-31e1-4923-b0e1-1245f37ea5b8-kube-api-access-4kb8s\") pod \"nova-cell1-conductor-0\" (UID: \"d3f3eddf-31e1-4923-b0e1-1245f37ea5b8\") " pod="openstack/nova-cell1-conductor-0" Nov 25 15:18:15 crc kubenswrapper[4806]: I1125 15:18:15.635937 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3f3eddf-31e1-4923-b0e1-1245f37ea5b8-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d3f3eddf-31e1-4923-b0e1-1245f37ea5b8\") " pod="openstack/nova-cell1-conductor-0" Nov 25 15:18:15 crc kubenswrapper[4806]: I1125 15:18:15.636117 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3f3eddf-31e1-4923-b0e1-1245f37ea5b8-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d3f3eddf-31e1-4923-b0e1-1245f37ea5b8\") " pod="openstack/nova-cell1-conductor-0" Nov 25 15:18:15 crc kubenswrapper[4806]: I1125 15:18:15.737987 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kb8s\" (UniqueName: \"kubernetes.io/projected/d3f3eddf-31e1-4923-b0e1-1245f37ea5b8-kube-api-access-4kb8s\") pod \"nova-cell1-conductor-0\" (UID: \"d3f3eddf-31e1-4923-b0e1-1245f37ea5b8\") " 
pod="openstack/nova-cell1-conductor-0" Nov 25 15:18:15 crc kubenswrapper[4806]: I1125 15:18:15.738059 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3f3eddf-31e1-4923-b0e1-1245f37ea5b8-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d3f3eddf-31e1-4923-b0e1-1245f37ea5b8\") " pod="openstack/nova-cell1-conductor-0" Nov 25 15:18:15 crc kubenswrapper[4806]: I1125 15:18:15.738197 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3f3eddf-31e1-4923-b0e1-1245f37ea5b8-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d3f3eddf-31e1-4923-b0e1-1245f37ea5b8\") " pod="openstack/nova-cell1-conductor-0" Nov 25 15:18:15 crc kubenswrapper[4806]: I1125 15:18:15.745065 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3f3eddf-31e1-4923-b0e1-1245f37ea5b8-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"d3f3eddf-31e1-4923-b0e1-1245f37ea5b8\") " pod="openstack/nova-cell1-conductor-0" Nov 25 15:18:15 crc kubenswrapper[4806]: I1125 15:18:15.748261 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3f3eddf-31e1-4923-b0e1-1245f37ea5b8-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"d3f3eddf-31e1-4923-b0e1-1245f37ea5b8\") " pod="openstack/nova-cell1-conductor-0" Nov 25 15:18:15 crc kubenswrapper[4806]: I1125 15:18:15.757086 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kb8s\" (UniqueName: \"kubernetes.io/projected/d3f3eddf-31e1-4923-b0e1-1245f37ea5b8-kube-api-access-4kb8s\") pod \"nova-cell1-conductor-0\" (UID: \"d3f3eddf-31e1-4923-b0e1-1245f37ea5b8\") " pod="openstack/nova-cell1-conductor-0" Nov 25 15:18:15 crc kubenswrapper[4806]: I1125 15:18:15.879589 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 25 15:18:16 crc kubenswrapper[4806]: I1125 15:18:16.434125 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 25 15:18:16 crc kubenswrapper[4806]: W1125 15:18:16.436943 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3f3eddf_31e1_4923_b0e1_1245f37ea5b8.slice/crio-0f914277fb4f99f517ab960c7cc78e6cf208bdcec258372095b646f735dd8146 WatchSource:0}: Error finding container 0f914277fb4f99f517ab960c7cc78e6cf208bdcec258372095b646f735dd8146: Status 404 returned error can't find the container with id 0f914277fb4f99f517ab960c7cc78e6cf208bdcec258372095b646f735dd8146 Nov 25 15:18:16 crc kubenswrapper[4806]: I1125 15:18:16.477261 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"d3f3eddf-31e1-4923-b0e1-1245f37ea5b8","Type":"ContainerStarted","Data":"0f914277fb4f99f517ab960c7cc78e6cf208bdcec258372095b646f735dd8146"} Nov 25 15:18:16 crc kubenswrapper[4806]: E1125 15:18:16.710280 4806 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dfca2f3053c56682e13b15887698dca9ed016fa7d520b0d98a88d2b379fbb492" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 25 15:18:16 crc kubenswrapper[4806]: E1125 15:18:16.712714 4806 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dfca2f3053c56682e13b15887698dca9ed016fa7d520b0d98a88d2b379fbb492" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 25 15:18:16 crc kubenswrapper[4806]: E1125 15:18:16.714391 4806 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dfca2f3053c56682e13b15887698dca9ed016fa7d520b0d98a88d2b379fbb492" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 25 15:18:16 crc kubenswrapper[4806]: E1125 15:18:16.714451 4806 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="e4e29fcd-c82a-4d32-ab2d-a115423a7e9a" containerName="nova-scheduler-scheduler" Nov 25 15:18:16 crc kubenswrapper[4806]: I1125 15:18:16.812527 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78cd565959-hcqg2" Nov 25 15:18:16 crc kubenswrapper[4806]: I1125 15:18:16.910736 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-l8khz"] Nov 25 15:18:16 crc kubenswrapper[4806]: I1125 15:18:16.910993 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67bdc55879-l8khz" podUID="79229967-32d3-4ca1-ac03-ab3364d41ca5" containerName="dnsmasq-dns" containerID="cri-o://ca8e378614cc08a95018368575692ddc2ba62111432d44f7b9c5877545aecdc3" gracePeriod=10 Nov 25 15:18:17 crc kubenswrapper[4806]: I1125 15:18:17.500296 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" 
event={"ID":"d3f3eddf-31e1-4923-b0e1-1245f37ea5b8","Type":"ContainerStarted","Data":"4aeb9e7f4287b8d352bfb709b2af71a4723537a1ee2e2e023b46b9f00dc0c1c0"} Nov 25 15:18:17 crc kubenswrapper[4806]: I1125 15:18:17.500586 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 25 15:18:17 crc kubenswrapper[4806]: I1125 15:18:17.512859 4806 generic.go:334] "Generic (PLEG): container finished" podID="79229967-32d3-4ca1-ac03-ab3364d41ca5" containerID="ca8e378614cc08a95018368575692ddc2ba62111432d44f7b9c5877545aecdc3" exitCode=0 Nov 25 15:18:17 crc kubenswrapper[4806]: I1125 15:18:17.512902 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bdc55879-l8khz" event={"ID":"79229967-32d3-4ca1-ac03-ab3364d41ca5","Type":"ContainerDied","Data":"ca8e378614cc08a95018368575692ddc2ba62111432d44f7b9c5877545aecdc3"} Nov 25 15:18:17 crc kubenswrapper[4806]: I1125 15:18:17.524599 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.524582589 podStartE2EDuration="2.524582589s" podCreationTimestamp="2025-11-25 15:18:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:18:17.523153688 +0000 UTC m=+1530.175296109" watchObservedRunningTime="2025-11-25 15:18:17.524582589 +0000 UTC m=+1530.176725000" Nov 25 15:18:17 crc kubenswrapper[4806]: I1125 15:18:17.669701 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67bdc55879-l8khz" Nov 25 15:18:17 crc kubenswrapper[4806]: I1125 15:18:17.786383 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79229967-32d3-4ca1-ac03-ab3364d41ca5-ovsdbserver-nb\") pod \"79229967-32d3-4ca1-ac03-ab3364d41ca5\" (UID: \"79229967-32d3-4ca1-ac03-ab3364d41ca5\") " Nov 25 15:18:17 crc kubenswrapper[4806]: I1125 15:18:17.786511 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79229967-32d3-4ca1-ac03-ab3364d41ca5-dns-svc\") pod \"79229967-32d3-4ca1-ac03-ab3364d41ca5\" (UID: \"79229967-32d3-4ca1-ac03-ab3364d41ca5\") " Nov 25 15:18:17 crc kubenswrapper[4806]: I1125 15:18:17.786590 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnpjk\" (UniqueName: \"kubernetes.io/projected/79229967-32d3-4ca1-ac03-ab3364d41ca5-kube-api-access-wnpjk\") pod \"79229967-32d3-4ca1-ac03-ab3364d41ca5\" (UID: \"79229967-32d3-4ca1-ac03-ab3364d41ca5\") " Nov 25 15:18:17 crc kubenswrapper[4806]: I1125 15:18:17.786614 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79229967-32d3-4ca1-ac03-ab3364d41ca5-config\") pod \"79229967-32d3-4ca1-ac03-ab3364d41ca5\" (UID: \"79229967-32d3-4ca1-ac03-ab3364d41ca5\") " Nov 25 15:18:17 crc kubenswrapper[4806]: I1125 15:18:17.786752 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79229967-32d3-4ca1-ac03-ab3364d41ca5-dns-swift-storage-0\") pod \"79229967-32d3-4ca1-ac03-ab3364d41ca5\" (UID: \"79229967-32d3-4ca1-ac03-ab3364d41ca5\") " Nov 25 15:18:17 crc kubenswrapper[4806]: I1125 15:18:17.786784 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79229967-32d3-4ca1-ac03-ab3364d41ca5-ovsdbserver-sb\") pod \"79229967-32d3-4ca1-ac03-ab3364d41ca5\" (UID: \"79229967-32d3-4ca1-ac03-ab3364d41ca5\") " Nov 25 15:18:17 crc kubenswrapper[4806]: I1125 15:18:17.794533 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79229967-32d3-4ca1-ac03-ab3364d41ca5-kube-api-access-wnpjk" (OuterVolumeSpecName: "kube-api-access-wnpjk") pod "79229967-32d3-4ca1-ac03-ab3364d41ca5" (UID: "79229967-32d3-4ca1-ac03-ab3364d41ca5"). InnerVolumeSpecName "kube-api-access-wnpjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:18:17 crc kubenswrapper[4806]: I1125 15:18:17.852179 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79229967-32d3-4ca1-ac03-ab3364d41ca5-config" (OuterVolumeSpecName: "config") pod "79229967-32d3-4ca1-ac03-ab3364d41ca5" (UID: "79229967-32d3-4ca1-ac03-ab3364d41ca5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:18:17 crc kubenswrapper[4806]: I1125 15:18:17.865739 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79229967-32d3-4ca1-ac03-ab3364d41ca5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "79229967-32d3-4ca1-ac03-ab3364d41ca5" (UID: "79229967-32d3-4ca1-ac03-ab3364d41ca5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:18:17 crc kubenswrapper[4806]: I1125 15:18:17.883936 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79229967-32d3-4ca1-ac03-ab3364d41ca5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "79229967-32d3-4ca1-ac03-ab3364d41ca5" (UID: "79229967-32d3-4ca1-ac03-ab3364d41ca5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:18:17 crc kubenswrapper[4806]: I1125 15:18:17.889951 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79229967-32d3-4ca1-ac03-ab3364d41ca5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "79229967-32d3-4ca1-ac03-ab3364d41ca5" (UID: "79229967-32d3-4ca1-ac03-ab3364d41ca5"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:18:17 crc kubenswrapper[4806]: I1125 15:18:17.890083 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79229967-32d3-4ca1-ac03-ab3364d41ca5-dns-swift-storage-0\") pod \"79229967-32d3-4ca1-ac03-ab3364d41ca5\" (UID: \"79229967-32d3-4ca1-ac03-ab3364d41ca5\") " Nov 25 15:18:17 crc kubenswrapper[4806]: I1125 15:18:17.891097 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/79229967-32d3-4ca1-ac03-ab3364d41ca5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:17 crc kubenswrapper[4806]: I1125 15:18:17.891121 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnpjk\" (UniqueName: \"kubernetes.io/projected/79229967-32d3-4ca1-ac03-ab3364d41ca5-kube-api-access-wnpjk\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:17 crc kubenswrapper[4806]: I1125 15:18:17.891135 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/79229967-32d3-4ca1-ac03-ab3364d41ca5-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:17 crc kubenswrapper[4806]: I1125 15:18:17.891148 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/79229967-32d3-4ca1-ac03-ab3364d41ca5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:17 crc kubenswrapper[4806]: W1125 15:18:17.891242 4806 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/79229967-32d3-4ca1-ac03-ab3364d41ca5/volumes/kubernetes.io~configmap/dns-swift-storage-0 Nov 25 15:18:17 crc kubenswrapper[4806]: I1125 15:18:17.891256 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79229967-32d3-4ca1-ac03-ab3364d41ca5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "79229967-32d3-4ca1-ac03-ab3364d41ca5" (UID: "79229967-32d3-4ca1-ac03-ab3364d41ca5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:18:17 crc kubenswrapper[4806]: I1125 15:18:17.899963 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79229967-32d3-4ca1-ac03-ab3364d41ca5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "79229967-32d3-4ca1-ac03-ab3364d41ca5" (UID: "79229967-32d3-4ca1-ac03-ab3364d41ca5"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:18:17 crc kubenswrapper[4806]: I1125 15:18:17.993710 4806 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/79229967-32d3-4ca1-ac03-ab3364d41ca5-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:17 crc kubenswrapper[4806]: I1125 15:18:17.993915 4806 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/79229967-32d3-4ca1-ac03-ab3364d41ca5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:18 crc kubenswrapper[4806]: I1125 15:18:18.528101 4806 generic.go:334] "Generic (PLEG): container finished" podID="e4e29fcd-c82a-4d32-ab2d-a115423a7e9a" containerID="dfca2f3053c56682e13b15887698dca9ed016fa7d520b0d98a88d2b379fbb492" exitCode=0 Nov 25 15:18:18 crc kubenswrapper[4806]: I1125 15:18:18.528187 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e4e29fcd-c82a-4d32-ab2d-a115423a7e9a","Type":"ContainerDied","Data":"dfca2f3053c56682e13b15887698dca9ed016fa7d520b0d98a88d2b379fbb492"} Nov 25 15:18:18 crc kubenswrapper[4806]: I1125 15:18:18.547117 4806 generic.go:334] "Generic (PLEG): container finished" podID="991017af-a60a-4e0b-97ea-be0e196b6742" containerID="8021863560c5f7e246a3ee2feada5957325641a9273d253c13971fdb8fbde77e" exitCode=0 Nov 25 15:18:18 crc kubenswrapper[4806]: I1125 15:18:18.547250 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"991017af-a60a-4e0b-97ea-be0e196b6742","Type":"ContainerDied","Data":"8021863560c5f7e246a3ee2feada5957325641a9273d253c13971fdb8fbde77e"} Nov 25 15:18:18 crc kubenswrapper[4806]: I1125 15:18:18.561940 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bdc55879-l8khz" event={"ID":"79229967-32d3-4ca1-ac03-ab3364d41ca5","Type":"ContainerDied","Data":"ff8a3d43c1a143f19b1e7db2b37fba4821051ad393783f6b6adbb30865ec9f78"} Nov 25 15:18:18 crc kubenswrapper[4806]: I1125 15:18:18.562010 4806 scope.go:117] "RemoveContainer" containerID="ca8e378614cc08a95018368575692ddc2ba62111432d44f7b9c5877545aecdc3" Nov 25 15:18:18 crc kubenswrapper[4806]: I1125 15:18:18.562041 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67bdc55879-l8khz" Nov 25 15:18:18 crc kubenswrapper[4806]: I1125 15:18:18.624460 4806 scope.go:117] "RemoveContainer" containerID="2583c4457fb0ae133d74533fb9aaae6df4529fce6670924887b15e4734050088" Nov 25 15:18:18 crc kubenswrapper[4806]: I1125 15:18:18.641012 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-l8khz"] Nov 25 15:18:18 crc kubenswrapper[4806]: I1125 15:18:18.675951 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-l8khz"] Nov 25 15:18:18 crc kubenswrapper[4806]: I1125 15:18:18.751033 4806 util.go:48] "No ready sandbox for pod can be found. 
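
The teardown of openstack/dnsmasq-dns-67bdc55879-l8khz above follows the usual kubelet order: a probe result, PLEG ContainerDied records, volume unmounts, RemoveContainer, then SyncLoop DELETE and REMOVE once the API object disappears. Each "SyncLoop (PLEG)" record carries an event={...} payload with exactly three JSON fields (ID, Type, Data). Below is a minimal sketch for pulling those payloads out of a captured journal on stdin; the file name pleg_events.go and all identifiers are illustrative, not kubelet code.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"regexp"
)

// PLEGEvent mirrors the three fields visible in the payloads above:
// {"ID":"<pod UID>","Type":"ContainerStarted|ContainerDied","Data":"<container or sandbox ID>"}
type PLEGEvent struct {
	ID   string `json:"ID"`
	Type string `json:"Type"`
	Data string `json:"Data"`
}

var eventRe = regexp.MustCompile(`event=(\{[^}]*\})`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		// A physical line may hold several records, so match all payloads.
		for _, m := range eventRe.FindAllStringSubmatch(sc.Text(), -1) {
			var ev PLEGEvent
			if err := json.Unmarshal([]byte(m[1]), &ev); err != nil {
				continue // not a PLEG payload; skip
			}
			fmt.Printf("%-16s pod=%s id=%.12s\n", ev.Type, ev.ID, ev.Data)
		}
	}
}

Run it as, for example, journalctl -u kubelet | go run pleg_events.go; the unit name is an assumption and may differ per host.
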
Need to start a new one" pod="openstack/nova-api-0" Nov 25 15:18:18 crc kubenswrapper[4806]: I1125 15:18:18.816627 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpscf\" (UniqueName: \"kubernetes.io/projected/991017af-a60a-4e0b-97ea-be0e196b6742-kube-api-access-gpscf\") pod \"991017af-a60a-4e0b-97ea-be0e196b6742\" (UID: \"991017af-a60a-4e0b-97ea-be0e196b6742\") " Nov 25 15:18:18 crc kubenswrapper[4806]: I1125 15:18:18.816696 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/991017af-a60a-4e0b-97ea-be0e196b6742-logs\") pod \"991017af-a60a-4e0b-97ea-be0e196b6742\" (UID: \"991017af-a60a-4e0b-97ea-be0e196b6742\") " Nov 25 15:18:18 crc kubenswrapper[4806]: I1125 15:18:18.816746 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/991017af-a60a-4e0b-97ea-be0e196b6742-combined-ca-bundle\") pod \"991017af-a60a-4e0b-97ea-be0e196b6742\" (UID: \"991017af-a60a-4e0b-97ea-be0e196b6742\") " Nov 25 15:18:18 crc kubenswrapper[4806]: I1125 15:18:18.816786 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/991017af-a60a-4e0b-97ea-be0e196b6742-config-data\") pod \"991017af-a60a-4e0b-97ea-be0e196b6742\" (UID: \"991017af-a60a-4e0b-97ea-be0e196b6742\") " Nov 25 15:18:18 crc kubenswrapper[4806]: I1125 15:18:18.817235 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/991017af-a60a-4e0b-97ea-be0e196b6742-logs" (OuterVolumeSpecName: "logs") pod "991017af-a60a-4e0b-97ea-be0e196b6742" (UID: "991017af-a60a-4e0b-97ea-be0e196b6742"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:18:18 crc kubenswrapper[4806]: I1125 15:18:18.817432 4806 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/991017af-a60a-4e0b-97ea-be0e196b6742-logs\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:18 crc kubenswrapper[4806]: I1125 15:18:18.826716 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/991017af-a60a-4e0b-97ea-be0e196b6742-kube-api-access-gpscf" (OuterVolumeSpecName: "kube-api-access-gpscf") pod "991017af-a60a-4e0b-97ea-be0e196b6742" (UID: "991017af-a60a-4e0b-97ea-be0e196b6742"). InnerVolumeSpecName "kube-api-access-gpscf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:18:18 crc kubenswrapper[4806]: I1125 15:18:18.921349 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpscf\" (UniqueName: \"kubernetes.io/projected/991017af-a60a-4e0b-97ea-be0e196b6742-kube-api-access-gpscf\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:18 crc kubenswrapper[4806]: I1125 15:18:18.927587 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/991017af-a60a-4e0b-97ea-be0e196b6742-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "991017af-a60a-4e0b-97ea-be0e196b6742" (UID: "991017af-a60a-4e0b-97ea-be0e196b6742"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:18:18 crc kubenswrapper[4806]: I1125 15:18:18.932815 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/991017af-a60a-4e0b-97ea-be0e196b6742-config-data" (OuterVolumeSpecName: "config-data") pod "991017af-a60a-4e0b-97ea-be0e196b6742" (UID: "991017af-a60a-4e0b-97ea-be0e196b6742"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.022989 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/991017af-a60a-4e0b-97ea-be0e196b6742-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.023033 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/991017af-a60a-4e0b-97ea-be0e196b6742-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.187754 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.226205 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wf2d8\" (UniqueName: \"kubernetes.io/projected/e4e29fcd-c82a-4d32-ab2d-a115423a7e9a-kube-api-access-wf2d8\") pod \"e4e29fcd-c82a-4d32-ab2d-a115423a7e9a\" (UID: \"e4e29fcd-c82a-4d32-ab2d-a115423a7e9a\") " Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.226417 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4e29fcd-c82a-4d32-ab2d-a115423a7e9a-combined-ca-bundle\") pod \"e4e29fcd-c82a-4d32-ab2d-a115423a7e9a\" (UID: \"e4e29fcd-c82a-4d32-ab2d-a115423a7e9a\") " Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.226559 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4e29fcd-c82a-4d32-ab2d-a115423a7e9a-config-data\") pod \"e4e29fcd-c82a-4d32-ab2d-a115423a7e9a\" (UID: \"e4e29fcd-c82a-4d32-ab2d-a115423a7e9a\") " Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.406095 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4e29fcd-c82a-4d32-ab2d-a115423a7e9a-kube-api-access-wf2d8" (OuterVolumeSpecName: "kube-api-access-wf2d8") pod "e4e29fcd-c82a-4d32-ab2d-a115423a7e9a" (UID: "e4e29fcd-c82a-4d32-ab2d-a115423a7e9a"). InnerVolumeSpecName "kube-api-access-wf2d8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.417773 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4e29fcd-c82a-4d32-ab2d-a115423a7e9a-config-data" (OuterVolumeSpecName: "config-data") pod "e4e29fcd-c82a-4d32-ab2d-a115423a7e9a" (UID: "e4e29fcd-c82a-4d32-ab2d-a115423a7e9a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.429693 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4e29fcd-c82a-4d32-ab2d-a115423a7e9a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e4e29fcd-c82a-4d32-ab2d-a115423a7e9a" (UID: "e4e29fcd-c82a-4d32-ab2d-a115423a7e9a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.434743 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4e29fcd-c82a-4d32-ab2d-a115423a7e9a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.434783 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4e29fcd-c82a-4d32-ab2d-a115423a7e9a-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.434792 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wf2d8\" (UniqueName: \"kubernetes.io/projected/e4e29fcd-c82a-4d32-ab2d-a115423a7e9a-kube-api-access-wf2d8\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.575349 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e4e29fcd-c82a-4d32-ab2d-a115423a7e9a","Type":"ContainerDied","Data":"f795836b3f7e2e1d5cac31ad4c6b9173db9176dae49e0d3c89fb4f770b398cc7"} Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.575399 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.575444 4806 scope.go:117] "RemoveContainer" containerID="dfca2f3053c56682e13b15887698dca9ed016fa7d520b0d98a88d2b379fbb492" Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.582591 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"991017af-a60a-4e0b-97ea-be0e196b6742","Type":"ContainerDied","Data":"8807d939d090201eaca5732b3ea337780c4aa5f67c54df90e38e6c2f32016afe"} Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.582770 4806 util.go:48] "No ready sandbox for pod can be found. 
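
Each volume above passes through three distinct records: "operationExecutor.UnmountVolume started" (reconciler_common.go:159), "UnmountVolume.TearDown succeeded" (operation_generator.go:803), and finally "Volume detached ... DevicePath \"\"" (reconciler_common.go:293); the empty_dir.go:500 warning earlier shows a second attempt finding the path already gone, which is harmless. Pairing the first and last record by UniqueName is a quick way to surface volumes that start tearing down but never detach. A sketch under the same stdin assumption as above, with illustrative names:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// The quotes inside quoted klog messages appear as \" in the journal text,
// hence the escaped-quote matching below.
var (
	startedRe  = regexp.MustCompile(`UnmountVolume started for volume \\"[^\\"]+\\" \(UniqueName: \\"([^\\"]+)\\"`)
	detachedRe = regexp.MustCompile(`Volume detached for volume \\"[^\\"]+\\" \(UniqueName: \\"([^\\"]+)\\"`)
)

func main() {
	pending := map[string]bool{} // UniqueName -> unmount started, not yet detached
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		for _, m := range startedRe.FindAllStringSubmatch(sc.Text(), -1) {
			pending[m[1]] = true
		}
		for _, m := range detachedRe.FindAllStringSubmatch(sc.Text(), -1) {
			delete(pending, m[1])
		}
	}
	for name := range pending {
		fmt.Println("never detached:", name)
	}
}
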
Need to start a new one" pod="openstack/nova-api-0" Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.628952 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.655915 4806 scope.go:117] "RemoveContainer" containerID="8021863560c5f7e246a3ee2feada5957325641a9273d253c13971fdb8fbde77e" Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.661461 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.702312 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 15:18:19 crc kubenswrapper[4806]: E1125 15:18:19.702808 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="991017af-a60a-4e0b-97ea-be0e196b6742" containerName="nova-api-log" Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.702821 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="991017af-a60a-4e0b-97ea-be0e196b6742" containerName="nova-api-log" Nov 25 15:18:19 crc kubenswrapper[4806]: E1125 15:18:19.702831 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79229967-32d3-4ca1-ac03-ab3364d41ca5" containerName="dnsmasq-dns" Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.702837 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="79229967-32d3-4ca1-ac03-ab3364d41ca5" containerName="dnsmasq-dns" Nov 25 15:18:19 crc kubenswrapper[4806]: E1125 15:18:19.702853 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79229967-32d3-4ca1-ac03-ab3364d41ca5" containerName="init" Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.702861 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="79229967-32d3-4ca1-ac03-ab3364d41ca5" containerName="init" Nov 25 15:18:19 crc kubenswrapper[4806]: E1125 15:18:19.702878 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="991017af-a60a-4e0b-97ea-be0e196b6742" containerName="nova-api-api" Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.702883 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="991017af-a60a-4e0b-97ea-be0e196b6742" containerName="nova-api-api" Nov 25 15:18:19 crc kubenswrapper[4806]: E1125 15:18:19.702896 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4e29fcd-c82a-4d32-ab2d-a115423a7e9a" containerName="nova-scheduler-scheduler" Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.702902 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4e29fcd-c82a-4d32-ab2d-a115423a7e9a" containerName="nova-scheduler-scheduler" Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.703073 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="991017af-a60a-4e0b-97ea-be0e196b6742" containerName="nova-api-api" Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.703086 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="991017af-a60a-4e0b-97ea-be0e196b6742" containerName="nova-api-log" Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.703103 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="79229967-32d3-4ca1-ac03-ab3364d41ca5" containerName="dnsmasq-dns" Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.703113 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4e29fcd-c82a-4d32-ab2d-a115423a7e9a" containerName="nova-scheduler-scheduler" Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.703886 4806 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.704458 4806 scope.go:117] "RemoveContainer" containerID="c6a491398ed58a718bf86110a018b8e86cdd981d687f524d47de02431d23fd7f" Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.705888 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.737395 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.752235 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.761088 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8644516-0502-4c72-8daf-954231e7d856-config-data\") pod \"nova-scheduler-0\" (UID: \"d8644516-0502-4c72-8daf-954231e7d856\") " pod="openstack/nova-scheduler-0" Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.761152 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8644516-0502-4c72-8daf-954231e7d856-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d8644516-0502-4c72-8daf-954231e7d856\") " pod="openstack/nova-scheduler-0" Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.761240 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtf85\" (UniqueName: \"kubernetes.io/projected/d8644516-0502-4c72-8daf-954231e7d856-kube-api-access-wtf85\") pod \"nova-scheduler-0\" (UID: \"d8644516-0502-4c72-8daf-954231e7d856\") " pod="openstack/nova-scheduler-0" Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.766694 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.779469 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.781929 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.786203 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.790348 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.872816 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtf85\" (UniqueName: \"kubernetes.io/projected/d8644516-0502-4c72-8daf-954231e7d856-kube-api-access-wtf85\") pod \"nova-scheduler-0\" (UID: \"d8644516-0502-4c72-8daf-954231e7d856\") " pod="openstack/nova-scheduler-0" Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.872907 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/569f4221-7042-41a7-a783-a975cc7a02b4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"569f4221-7042-41a7-a783-a975cc7a02b4\") " pod="openstack/nova-api-0" Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.872948 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6kbh\" (UniqueName: \"kubernetes.io/projected/569f4221-7042-41a7-a783-a975cc7a02b4-kube-api-access-j6kbh\") pod \"nova-api-0\" (UID: \"569f4221-7042-41a7-a783-a975cc7a02b4\") " pod="openstack/nova-api-0" Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.873027 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/569f4221-7042-41a7-a783-a975cc7a02b4-logs\") pod \"nova-api-0\" (UID: \"569f4221-7042-41a7-a783-a975cc7a02b4\") " pod="openstack/nova-api-0" Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.873138 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8644516-0502-4c72-8daf-954231e7d856-config-data\") pod \"nova-scheduler-0\" (UID: \"d8644516-0502-4c72-8daf-954231e7d856\") " pod="openstack/nova-scheduler-0" Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.873197 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8644516-0502-4c72-8daf-954231e7d856-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d8644516-0502-4c72-8daf-954231e7d856\") " pod="openstack/nova-scheduler-0" Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.873300 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/569f4221-7042-41a7-a783-a975cc7a02b4-config-data\") pod \"nova-api-0\" (UID: \"569f4221-7042-41a7-a783-a975cc7a02b4\") " pod="openstack/nova-api-0" Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.880143 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8644516-0502-4c72-8daf-954231e7d856-config-data\") pod \"nova-scheduler-0\" (UID: \"d8644516-0502-4c72-8daf-954231e7d856\") " pod="openstack/nova-scheduler-0" Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.881791 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8644516-0502-4c72-8daf-954231e7d856-combined-ca-bundle\") pod 
\"nova-scheduler-0\" (UID: \"d8644516-0502-4c72-8daf-954231e7d856\") " pod="openstack/nova-scheduler-0" Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.902037 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtf85\" (UniqueName: \"kubernetes.io/projected/d8644516-0502-4c72-8daf-954231e7d856-kube-api-access-wtf85\") pod \"nova-scheduler-0\" (UID: \"d8644516-0502-4c72-8daf-954231e7d856\") " pod="openstack/nova-scheduler-0" Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.975885 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/569f4221-7042-41a7-a783-a975cc7a02b4-config-data\") pod \"nova-api-0\" (UID: \"569f4221-7042-41a7-a783-a975cc7a02b4\") " pod="openstack/nova-api-0" Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.975986 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/569f4221-7042-41a7-a783-a975cc7a02b4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"569f4221-7042-41a7-a783-a975cc7a02b4\") " pod="openstack/nova-api-0" Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.976020 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6kbh\" (UniqueName: \"kubernetes.io/projected/569f4221-7042-41a7-a783-a975cc7a02b4-kube-api-access-j6kbh\") pod \"nova-api-0\" (UID: \"569f4221-7042-41a7-a783-a975cc7a02b4\") " pod="openstack/nova-api-0" Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.976076 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/569f4221-7042-41a7-a783-a975cc7a02b4-logs\") pod \"nova-api-0\" (UID: \"569f4221-7042-41a7-a783-a975cc7a02b4\") " pod="openstack/nova-api-0" Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.976641 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/569f4221-7042-41a7-a783-a975cc7a02b4-logs\") pod \"nova-api-0\" (UID: \"569f4221-7042-41a7-a783-a975cc7a02b4\") " pod="openstack/nova-api-0" Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.980250 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/569f4221-7042-41a7-a783-a975cc7a02b4-config-data\") pod \"nova-api-0\" (UID: \"569f4221-7042-41a7-a783-a975cc7a02b4\") " pod="openstack/nova-api-0" Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.982097 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/569f4221-7042-41a7-a783-a975cc7a02b4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"569f4221-7042-41a7-a783-a975cc7a02b4\") " pod="openstack/nova-api-0" Nov 25 15:18:19 crc kubenswrapper[4806]: I1125 15:18:19.997310 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6kbh\" (UniqueName: \"kubernetes.io/projected/569f4221-7042-41a7-a783-a975cc7a02b4-kube-api-access-j6kbh\") pod \"nova-api-0\" (UID: \"569f4221-7042-41a7-a783-a975cc7a02b4\") " pod="openstack/nova-api-0" Nov 25 15:18:20 crc kubenswrapper[4806]: I1125 15:18:20.034495 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 15:18:20 crc kubenswrapper[4806]: I1125 15:18:20.099419 4806 util.go:30] "No sandbox for pod can be found. 
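
Note that nova-api-0 and nova-scheduler-0 keep their names across the delete/recreate cycle but come back with new pod UIDs (991017af-... becomes 569f4221-..., e4e29fcd-... becomes d8644516-...), which is why the mount records above reference fresh UID-scoped volumes, and why the cpu_manager/memory_manager RemoveStaleState records purge the old UIDs before the new pods are admitted. A sketch that makes such cycles visible by collecting the UIDs seen per pod name (illustrative, same stdin assumption):

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"sort"
)

// Matches the pod \"<name>\" (UID: \"<uid>\") suffix of mount/unmount records.
// Teardown records name the pod by its UID; those collapse to one entry each.
var podUIDRe = regexp.MustCompile(`pod \\"([A-Za-z0-9.-]+)\\" \(UID: \\"([0-9a-f-]+)\\"\)`)

func main() {
	uids := map[string]map[string]bool{} // pod name -> set of UIDs observed
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		for _, m := range podUIDRe.FindAllStringSubmatch(sc.Text(), -1) {
			name, uid := m[1], m[2]
			if uids[name] == nil {
				uids[name] = map[string]bool{}
			}
			uids[name][uid] = true
		}
	}
	names := make([]string, 0, len(uids))
	for n := range uids {
		names = append(names, n)
	}
	sort.Strings(names)
	for _, n := range names {
		if len(uids[n]) > 1 {
			fmt.Printf("%s: %d distinct UIDs (deleted and recreated)\n", n, len(uids[n]))
		}
	}
}
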
Need to start a new one" pod="openstack/nova-api-0" Nov 25 15:18:20 crc kubenswrapper[4806]: I1125 15:18:20.102788 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79229967-32d3-4ca1-ac03-ab3364d41ca5" path="/var/lib/kubelet/pods/79229967-32d3-4ca1-ac03-ab3364d41ca5/volumes" Nov 25 15:18:20 crc kubenswrapper[4806]: I1125 15:18:20.103749 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="991017af-a60a-4e0b-97ea-be0e196b6742" path="/var/lib/kubelet/pods/991017af-a60a-4e0b-97ea-be0e196b6742/volumes" Nov 25 15:18:20 crc kubenswrapper[4806]: I1125 15:18:20.104364 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4e29fcd-c82a-4d32-ab2d-a115423a7e9a" path="/var/lib/kubelet/pods/e4e29fcd-c82a-4d32-ab2d-a115423a7e9a/volumes" Nov 25 15:18:20 crc kubenswrapper[4806]: I1125 15:18:20.497339 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 15:18:20 crc kubenswrapper[4806]: I1125 15:18:20.497885 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="fc89f2fe-23ee-4e5a-ba8f-8693fff4da51" containerName="kube-state-metrics" containerID="cri-o://cf8f0241e705081fb0c99432c03e12e4ab25b9c9d5ee3d18a6dc6d839bf2b616" gracePeriod=30 Nov 25 15:18:20 crc kubenswrapper[4806]: I1125 15:18:20.636280 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 15:18:20 crc kubenswrapper[4806]: I1125 15:18:20.669821 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 15:18:20 crc kubenswrapper[4806]: W1125 15:18:20.676482 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod569f4221_7042_41a7_a783_a975cc7a02b4.slice/crio-af2964fbeaf946799f9491e52fc473e208823abd5aff3d4b18a2d403a4d8bb59 WatchSource:0}: Error finding container af2964fbeaf946799f9491e52fc473e208823abd5aff3d4b18a2d403a4d8bb59: Status 404 returned error can't find the container with id af2964fbeaf946799f9491e52fc473e208823abd5aff3d4b18a2d403a4d8bb59 Nov 25 15:18:21 crc kubenswrapper[4806]: I1125 15:18:21.199555 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 15:18:21 crc kubenswrapper[4806]: I1125 15:18:21.309792 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpd2s\" (UniqueName: \"kubernetes.io/projected/fc89f2fe-23ee-4e5a-ba8f-8693fff4da51-kube-api-access-bpd2s\") pod \"fc89f2fe-23ee-4e5a-ba8f-8693fff4da51\" (UID: \"fc89f2fe-23ee-4e5a-ba8f-8693fff4da51\") " Nov 25 15:18:21 crc kubenswrapper[4806]: I1125 15:18:21.319273 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc89f2fe-23ee-4e5a-ba8f-8693fff4da51-kube-api-access-bpd2s" (OuterVolumeSpecName: "kube-api-access-bpd2s") pod "fc89f2fe-23ee-4e5a-ba8f-8693fff4da51" (UID: "fc89f2fe-23ee-4e5a-ba8f-8693fff4da51"). InnerVolumeSpecName "kube-api-access-bpd2s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:18:21 crc kubenswrapper[4806]: I1125 15:18:21.413217 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpd2s\" (UniqueName: \"kubernetes.io/projected/fc89f2fe-23ee-4e5a-ba8f-8693fff4da51-kube-api-access-bpd2s\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:21 crc kubenswrapper[4806]: I1125 15:18:21.622834 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d8644516-0502-4c72-8daf-954231e7d856","Type":"ContainerStarted","Data":"a9f9911b880c0492199d055a3b2b4e1f1e6b9942f77aa00eabb077bfbcc9bfc7"} Nov 25 15:18:21 crc kubenswrapper[4806]: I1125 15:18:21.622885 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d8644516-0502-4c72-8daf-954231e7d856","Type":"ContainerStarted","Data":"d82f54f87897dc6b80eead71f9e351430ad97179340af2910c46f28858cbf981"} Nov 25 15:18:21 crc kubenswrapper[4806]: I1125 15:18:21.625895 4806 generic.go:334] "Generic (PLEG): container finished" podID="fc89f2fe-23ee-4e5a-ba8f-8693fff4da51" containerID="cf8f0241e705081fb0c99432c03e12e4ab25b9c9d5ee3d18a6dc6d839bf2b616" exitCode=2 Nov 25 15:18:21 crc kubenswrapper[4806]: I1125 15:18:21.625959 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"fc89f2fe-23ee-4e5a-ba8f-8693fff4da51","Type":"ContainerDied","Data":"cf8f0241e705081fb0c99432c03e12e4ab25b9c9d5ee3d18a6dc6d839bf2b616"} Nov 25 15:18:21 crc kubenswrapper[4806]: I1125 15:18:21.625990 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"fc89f2fe-23ee-4e5a-ba8f-8693fff4da51","Type":"ContainerDied","Data":"49fb9502f25e97d668367e42308c965f982dafabd12c972dacfcb13f7717f89e"} Nov 25 15:18:21 crc kubenswrapper[4806]: I1125 15:18:21.626063 4806 scope.go:117] "RemoveContainer" containerID="cf8f0241e705081fb0c99432c03e12e4ab25b9c9d5ee3d18a6dc6d839bf2b616" Nov 25 15:18:21 crc kubenswrapper[4806]: I1125 15:18:21.626188 4806 util.go:48] "No ready sandbox for pod can be found. 
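
kube-state-metrics-0 above is killed with gracePeriod=30 and its container finishes with exitCode=2, unlike the clean exitCode=0 shutdowns earlier in this window. The "Generic (PLEG): container finished" records carry podID, containerID, and exitCode as plain key=value pairs, so non-zero exits are easy to tally. A sketch under the same assumptions, with illustrative names:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches: container finished" podID="..." containerID="..." exitCode=N
var finRe = regexp.MustCompile(`container finished" podID="([0-9a-f-]+)" containerID="([0-9a-f]+)" exitCode=(-?\d+)`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		for _, m := range finRe.FindAllStringSubmatch(sc.Text(), -1) {
			if m[3] != "0" {
				fmt.Printf("pod %s container %.12s exited %s\n", m[1], m[2], m[3])
			}
		}
	}
}
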
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 15:18:21 crc kubenswrapper[4806]: I1125 15:18:21.630490 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"569f4221-7042-41a7-a783-a975cc7a02b4","Type":"ContainerStarted","Data":"7debb5c6cee01da4b76ce376d55b4ebf95eae91063b809de37ca9f19b0c8ee5e"} Nov 25 15:18:21 crc kubenswrapper[4806]: I1125 15:18:21.630537 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"569f4221-7042-41a7-a783-a975cc7a02b4","Type":"ContainerStarted","Data":"93f33b3c2563cd455fd2c1dd33d6f04af425be2cf9ad96027c69e55c5b0ae43a"} Nov 25 15:18:21 crc kubenswrapper[4806]: I1125 15:18:21.630548 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"569f4221-7042-41a7-a783-a975cc7a02b4","Type":"ContainerStarted","Data":"af2964fbeaf946799f9491e52fc473e208823abd5aff3d4b18a2d403a4d8bb59"} Nov 25 15:18:21 crc kubenswrapper[4806]: I1125 15:18:21.673520 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.673493474 podStartE2EDuration="2.673493474s" podCreationTimestamp="2025-11-25 15:18:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:18:21.664689241 +0000 UTC m=+1534.316831652" watchObservedRunningTime="2025-11-25 15:18:21.673493474 +0000 UTC m=+1534.325635885" Nov 25 15:18:21 crc kubenswrapper[4806]: I1125 15:18:21.710623 4806 scope.go:117] "RemoveContainer" containerID="cf8f0241e705081fb0c99432c03e12e4ab25b9c9d5ee3d18a6dc6d839bf2b616" Nov 25 15:18:21 crc kubenswrapper[4806]: E1125 15:18:21.714497 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf8f0241e705081fb0c99432c03e12e4ab25b9c9d5ee3d18a6dc6d839bf2b616\": container with ID starting with cf8f0241e705081fb0c99432c03e12e4ab25b9c9d5ee3d18a6dc6d839bf2b616 not found: ID does not exist" containerID="cf8f0241e705081fb0c99432c03e12e4ab25b9c9d5ee3d18a6dc6d839bf2b616" Nov 25 15:18:21 crc kubenswrapper[4806]: I1125 15:18:21.714555 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf8f0241e705081fb0c99432c03e12e4ab25b9c9d5ee3d18a6dc6d839bf2b616"} err="failed to get container status \"cf8f0241e705081fb0c99432c03e12e4ab25b9c9d5ee3d18a6dc6d839bf2b616\": rpc error: code = NotFound desc = could not find container \"cf8f0241e705081fb0c99432c03e12e4ab25b9c9d5ee3d18a6dc6d839bf2b616\": container with ID starting with cf8f0241e705081fb0c99432c03e12e4ab25b9c9d5ee3d18a6dc6d839bf2b616 not found: ID does not exist" Nov 25 15:18:21 crc kubenswrapper[4806]: I1125 15:18:21.714609 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 15:18:21 crc kubenswrapper[4806]: I1125 15:18:21.742413 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 15:18:21 crc kubenswrapper[4806]: I1125 15:18:21.771458 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 15:18:21 crc kubenswrapper[4806]: E1125 15:18:21.772099 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc89f2fe-23ee-4e5a-ba8f-8693fff4da51" containerName="kube-state-metrics" Nov 25 15:18:21 crc kubenswrapper[4806]: I1125 15:18:21.772117 4806 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="fc89f2fe-23ee-4e5a-ba8f-8693fff4da51" containerName="kube-state-metrics" Nov 25 15:18:21 crc kubenswrapper[4806]: I1125 15:18:21.772393 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc89f2fe-23ee-4e5a-ba8f-8693fff4da51" containerName="kube-state-metrics" Nov 25 15:18:21 crc kubenswrapper[4806]: I1125 15:18:21.773464 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 15:18:21 crc kubenswrapper[4806]: I1125 15:18:21.784760 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 25 15:18:21 crc kubenswrapper[4806]: I1125 15:18:21.785020 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 25 15:18:21 crc kubenswrapper[4806]: I1125 15:18:21.873772 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.873754738 podStartE2EDuration="2.873754738s" podCreationTimestamp="2025-11-25 15:18:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:18:21.733717121 +0000 UTC m=+1534.385859532" watchObservedRunningTime="2025-11-25 15:18:21.873754738 +0000 UTC m=+1534.525897149" Nov 25 15:18:21 crc kubenswrapper[4806]: I1125 15:18:21.888556 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c050b95-eb84-4171-a52c-ee1e4614c301-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"9c050b95-eb84-4171-a52c-ee1e4614c301\") " pod="openstack/kube-state-metrics-0" Nov 25 15:18:21 crc kubenswrapper[4806]: I1125 15:18:21.888847 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/9c050b95-eb84-4171-a52c-ee1e4614c301-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"9c050b95-eb84-4171-a52c-ee1e4614c301\") " pod="openstack/kube-state-metrics-0" Nov 25 15:18:21 crc kubenswrapper[4806]: I1125 15:18:21.888988 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sk6w\" (UniqueName: \"kubernetes.io/projected/9c050b95-eb84-4171-a52c-ee1e4614c301-kube-api-access-8sk6w\") pod \"kube-state-metrics-0\" (UID: \"9c050b95-eb84-4171-a52c-ee1e4614c301\") " pod="openstack/kube-state-metrics-0" Nov 25 15:18:21 crc kubenswrapper[4806]: I1125 15:18:21.889128 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c050b95-eb84-4171-a52c-ee1e4614c301-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"9c050b95-eb84-4171-a52c-ee1e4614c301\") " pod="openstack/kube-state-metrics-0" Nov 25 15:18:21 crc kubenswrapper[4806]: I1125 15:18:21.948518 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 15:18:21 crc kubenswrapper[4806]: I1125 15:18:21.991348 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sk6w\" (UniqueName: \"kubernetes.io/projected/9c050b95-eb84-4171-a52c-ee1e4614c301-kube-api-access-8sk6w\") pod \"kube-state-metrics-0\" (UID: \"9c050b95-eb84-4171-a52c-ee1e4614c301\") " pod="openstack/kube-state-metrics-0" Nov 25 
15:18:21 crc kubenswrapper[4806]: I1125 15:18:21.991455 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c050b95-eb84-4171-a52c-ee1e4614c301-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"9c050b95-eb84-4171-a52c-ee1e4614c301\") " pod="openstack/kube-state-metrics-0" Nov 25 15:18:21 crc kubenswrapper[4806]: I1125 15:18:21.991546 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c050b95-eb84-4171-a52c-ee1e4614c301-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"9c050b95-eb84-4171-a52c-ee1e4614c301\") " pod="openstack/kube-state-metrics-0" Nov 25 15:18:21 crc kubenswrapper[4806]: I1125 15:18:21.991582 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/9c050b95-eb84-4171-a52c-ee1e4614c301-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"9c050b95-eb84-4171-a52c-ee1e4614c301\") " pod="openstack/kube-state-metrics-0" Nov 25 15:18:21 crc kubenswrapper[4806]: I1125 15:18:21.997210 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/9c050b95-eb84-4171-a52c-ee1e4614c301-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"9c050b95-eb84-4171-a52c-ee1e4614c301\") " pod="openstack/kube-state-metrics-0" Nov 25 15:18:21 crc kubenswrapper[4806]: I1125 15:18:21.997636 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c050b95-eb84-4171-a52c-ee1e4614c301-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"9c050b95-eb84-4171-a52c-ee1e4614c301\") " pod="openstack/kube-state-metrics-0" Nov 25 15:18:21 crc kubenswrapper[4806]: I1125 15:18:21.998642 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c050b95-eb84-4171-a52c-ee1e4614c301-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"9c050b95-eb84-4171-a52c-ee1e4614c301\") " pod="openstack/kube-state-metrics-0" Nov 25 15:18:22 crc kubenswrapper[4806]: I1125 15:18:22.014999 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sk6w\" (UniqueName: \"kubernetes.io/projected/9c050b95-eb84-4171-a52c-ee1e4614c301-kube-api-access-8sk6w\") pod \"kube-state-metrics-0\" (UID: \"9c050b95-eb84-4171-a52c-ee1e4614c301\") " pod="openstack/kube-state-metrics-0" Nov 25 15:18:22 crc kubenswrapper[4806]: I1125 15:18:22.108193 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc89f2fe-23ee-4e5a-ba8f-8693fff4da51" path="/var/lib/kubelet/pods/fc89f2fe-23ee-4e5a-ba8f-8693fff4da51/volumes" Nov 25 15:18:22 crc kubenswrapper[4806]: I1125 15:18:22.186846 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 15:18:22 crc kubenswrapper[4806]: I1125 15:18:22.824185 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 15:18:22 crc kubenswrapper[4806]: W1125 15:18:22.830470 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c050b95_eb84_4171_a52c_ee1e4614c301.slice/crio-f900b2b5edec7751b3bd2a0bcf32cd3ece134ac35b7465d5c7e03f8a835475a2 WatchSource:0}: Error finding container f900b2b5edec7751b3bd2a0bcf32cd3ece134ac35b7465d5c7e03f8a835475a2: Status 404 returned error can't find the container with id f900b2b5edec7751b3bd2a0bcf32cd3ece134ac35b7465d5c7e03f8a835475a2 Nov 25 15:18:23 crc kubenswrapper[4806]: I1125 15:18:23.028979 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:18:23 crc kubenswrapper[4806]: I1125 15:18:23.035513 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fa1fb8ba-bc56-42e7-8efa-3caf37784c8f" containerName="ceilometer-central-agent" containerID="cri-o://7af674775fbcc2a8d57d7adae882c91b14c9ef52b330d8f387ff61b1380c8913" gracePeriod=30 Nov 25 15:18:23 crc kubenswrapper[4806]: I1125 15:18:23.035669 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fa1fb8ba-bc56-42e7-8efa-3caf37784c8f" containerName="proxy-httpd" containerID="cri-o://fa7d6923be1a003c17b1865ed6b9c51c49958cbfad7ac5311061052305d8557b" gracePeriod=30 Nov 25 15:18:23 crc kubenswrapper[4806]: I1125 15:18:23.035724 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fa1fb8ba-bc56-42e7-8efa-3caf37784c8f" containerName="sg-core" containerID="cri-o://0a587abb354d154ccd1c7be46a4a958ef36828c6702d65f3f2275091ace9f013" gracePeriod=30 Nov 25 15:18:23 crc kubenswrapper[4806]: I1125 15:18:23.035766 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fa1fb8ba-bc56-42e7-8efa-3caf37784c8f" containerName="ceilometer-notification-agent" containerID="cri-o://e6406ff971d1adca3fd15dec5d6a15c57838e96fca8cd1db81f956eadce857ce" gracePeriod=30 Nov 25 15:18:23 crc kubenswrapper[4806]: I1125 15:18:23.657964 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9c050b95-eb84-4171-a52c-ee1e4614c301","Type":"ContainerStarted","Data":"cf644d795bc915975201c7fec89c55d56e0f456a484dc74bdd31850914009ad9"} Nov 25 15:18:23 crc kubenswrapper[4806]: I1125 15:18:23.658333 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9c050b95-eb84-4171-a52c-ee1e4614c301","Type":"ContainerStarted","Data":"f900b2b5edec7751b3bd2a0bcf32cd3ece134ac35b7465d5c7e03f8a835475a2"} Nov 25 15:18:23 crc kubenswrapper[4806]: I1125 15:18:23.659960 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 25 15:18:23 crc kubenswrapper[4806]: I1125 15:18:23.667810 4806 generic.go:334] "Generic (PLEG): container finished" podID="fa1fb8ba-bc56-42e7-8efa-3caf37784c8f" containerID="fa7d6923be1a003c17b1865ed6b9c51c49958cbfad7ac5311061052305d8557b" exitCode=0 Nov 25 15:18:23 crc kubenswrapper[4806]: I1125 15:18:23.668068 4806 generic.go:334] "Generic (PLEG): container finished" podID="fa1fb8ba-bc56-42e7-8efa-3caf37784c8f" 
containerID="0a587abb354d154ccd1c7be46a4a958ef36828c6702d65f3f2275091ace9f013" exitCode=2 Nov 25 15:18:23 crc kubenswrapper[4806]: I1125 15:18:23.668192 4806 generic.go:334] "Generic (PLEG): container finished" podID="fa1fb8ba-bc56-42e7-8efa-3caf37784c8f" containerID="7af674775fbcc2a8d57d7adae882c91b14c9ef52b330d8f387ff61b1380c8913" exitCode=0 Nov 25 15:18:23 crc kubenswrapper[4806]: I1125 15:18:23.667860 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fa1fb8ba-bc56-42e7-8efa-3caf37784c8f","Type":"ContainerDied","Data":"fa7d6923be1a003c17b1865ed6b9c51c49958cbfad7ac5311061052305d8557b"} Nov 25 15:18:23 crc kubenswrapper[4806]: I1125 15:18:23.668414 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fa1fb8ba-bc56-42e7-8efa-3caf37784c8f","Type":"ContainerDied","Data":"0a587abb354d154ccd1c7be46a4a958ef36828c6702d65f3f2275091ace9f013"} Nov 25 15:18:23 crc kubenswrapper[4806]: I1125 15:18:23.668517 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fa1fb8ba-bc56-42e7-8efa-3caf37784c8f","Type":"ContainerDied","Data":"7af674775fbcc2a8d57d7adae882c91b14c9ef52b330d8f387ff61b1380c8913"} Nov 25 15:18:23 crc kubenswrapper[4806]: I1125 15:18:23.693442 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.233980256 podStartE2EDuration="2.693423016s" podCreationTimestamp="2025-11-25 15:18:21 +0000 UTC" firstStartedPulling="2025-11-25 15:18:22.833312454 +0000 UTC m=+1535.485454885" lastFinishedPulling="2025-11-25 15:18:23.292755234 +0000 UTC m=+1535.944897645" observedRunningTime="2025-11-25 15:18:23.67541386 +0000 UTC m=+1536.327556291" watchObservedRunningTime="2025-11-25 15:18:23.693423016 +0000 UTC m=+1536.345565427" Nov 25 15:18:25 crc kubenswrapper[4806]: I1125 15:18:25.035054 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 25 15:18:25 crc kubenswrapper[4806]: I1125 15:18:25.918036 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 25 15:18:27 crc kubenswrapper[4806]: I1125 15:18:27.745643 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rhkzb"] Nov 25 15:18:27 crc kubenswrapper[4806]: I1125 15:18:27.749440 4806 util.go:30] "No sandbox for pod can be found. 
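
The startup-latency record for kube-state-metrics-0 above is the first in this window with real pull timestamps, and its numbers are self-consistent: reading the monotonic offsets (m=+...), the image-pull window is 1535.944897645 - 1535.485454885 = 0.459442760s, and podStartE2EDuration minus that window reproduces podStartSLOduration exactly (2.693423016 - 0.459442760 = 2.233980256). That relationship is an inference from this one record, not a documented contract. The arithmetic, checked in a few lines:

package main

import "fmt"

func main() {
	// Values copied from the pod_startup_latency_tracker record above.
	const (
		e2e           = 2.693423016   // podStartE2EDuration, seconds
		pullStartMono = 1535.485454885 // firstStartedPulling, m=+ offset
		pullEndMono   = 1535.944897645 // lastFinishedPulling, m=+ offset
	)
	pull := pullEndMono - pullStartMono
	fmt.Printf("pull window: %.9fs\n", pull)     // 0.459442760s
	fmt.Printf("e2e - pull:  %.9fs\n", e2e-pull) // 2.233980256s, matching podStartSLOduration
}
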
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rhkzb" Nov 25 15:18:27 crc kubenswrapper[4806]: I1125 15:18:27.762087 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rhkzb"] Nov 25 15:18:27 crc kubenswrapper[4806]: I1125 15:18:27.811274 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ac4e7e-05a4-4a7d-add3-4ef6314354f3-utilities\") pod \"redhat-marketplace-rhkzb\" (UID: \"67ac4e7e-05a4-4a7d-add3-4ef6314354f3\") " pod="openshift-marketplace/redhat-marketplace-rhkzb" Nov 25 15:18:27 crc kubenswrapper[4806]: I1125 15:18:27.811580 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ac4e7e-05a4-4a7d-add3-4ef6314354f3-catalog-content\") pod \"redhat-marketplace-rhkzb\" (UID: \"67ac4e7e-05a4-4a7d-add3-4ef6314354f3\") " pod="openshift-marketplace/redhat-marketplace-rhkzb" Nov 25 15:18:27 crc kubenswrapper[4806]: I1125 15:18:27.811616 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br4ts\" (UniqueName: \"kubernetes.io/projected/67ac4e7e-05a4-4a7d-add3-4ef6314354f3-kube-api-access-br4ts\") pod \"redhat-marketplace-rhkzb\" (UID: \"67ac4e7e-05a4-4a7d-add3-4ef6314354f3\") " pod="openshift-marketplace/redhat-marketplace-rhkzb" Nov 25 15:18:27 crc kubenswrapper[4806]: I1125 15:18:27.913874 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ac4e7e-05a4-4a7d-add3-4ef6314354f3-catalog-content\") pod \"redhat-marketplace-rhkzb\" (UID: \"67ac4e7e-05a4-4a7d-add3-4ef6314354f3\") " pod="openshift-marketplace/redhat-marketplace-rhkzb" Nov 25 15:18:27 crc kubenswrapper[4806]: I1125 15:18:27.914458 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-br4ts\" (UniqueName: \"kubernetes.io/projected/67ac4e7e-05a4-4a7d-add3-4ef6314354f3-kube-api-access-br4ts\") pod \"redhat-marketplace-rhkzb\" (UID: \"67ac4e7e-05a4-4a7d-add3-4ef6314354f3\") " pod="openshift-marketplace/redhat-marketplace-rhkzb" Nov 25 15:18:27 crc kubenswrapper[4806]: I1125 15:18:27.914509 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ac4e7e-05a4-4a7d-add3-4ef6314354f3-catalog-content\") pod \"redhat-marketplace-rhkzb\" (UID: \"67ac4e7e-05a4-4a7d-add3-4ef6314354f3\") " pod="openshift-marketplace/redhat-marketplace-rhkzb" Nov 25 15:18:27 crc kubenswrapper[4806]: I1125 15:18:27.914666 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ac4e7e-05a4-4a7d-add3-4ef6314354f3-utilities\") pod \"redhat-marketplace-rhkzb\" (UID: \"67ac4e7e-05a4-4a7d-add3-4ef6314354f3\") " pod="openshift-marketplace/redhat-marketplace-rhkzb" Nov 25 15:18:27 crc kubenswrapper[4806]: I1125 15:18:27.915151 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ac4e7e-05a4-4a7d-add3-4ef6314354f3-utilities\") pod \"redhat-marketplace-rhkzb\" (UID: \"67ac4e7e-05a4-4a7d-add3-4ef6314354f3\") " pod="openshift-marketplace/redhat-marketplace-rhkzb" Nov 25 15:18:27 crc kubenswrapper[4806]: I1125 15:18:27.937256 4806 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-br4ts\" (UniqueName: \"kubernetes.io/projected/67ac4e7e-05a4-4a7d-add3-4ef6314354f3-kube-api-access-br4ts\") pod \"redhat-marketplace-rhkzb\" (UID: \"67ac4e7e-05a4-4a7d-add3-4ef6314354f3\") " pod="openshift-marketplace/redhat-marketplace-rhkzb" Nov 25 15:18:28 crc kubenswrapper[4806]: I1125 15:18:28.073427 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rhkzb" Nov 25 15:18:29 crc kubenswrapper[4806]: I1125 15:18:29.649860 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rhkzb"] Nov 25 15:18:29 crc kubenswrapper[4806]: I1125 15:18:29.727583 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rhkzb" event={"ID":"67ac4e7e-05a4-4a7d-add3-4ef6314354f3","Type":"ContainerStarted","Data":"f24b2667f4cad159e6873e5805c292cb8b9616b03c05ef8cd790e336ef16b5a8"} Nov 25 15:18:29 crc kubenswrapper[4806]: I1125 15:18:29.729690 4806 generic.go:334] "Generic (PLEG): container finished" podID="fa1fb8ba-bc56-42e7-8efa-3caf37784c8f" containerID="e6406ff971d1adca3fd15dec5d6a15c57838e96fca8cd1db81f956eadce857ce" exitCode=0 Nov 25 15:18:29 crc kubenswrapper[4806]: I1125 15:18:29.729726 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fa1fb8ba-bc56-42e7-8efa-3caf37784c8f","Type":"ContainerDied","Data":"e6406ff971d1adca3fd15dec5d6a15c57838e96fca8cd1db81f956eadce857ce"} Nov 25 15:18:29 crc kubenswrapper[4806]: I1125 15:18:29.729747 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fa1fb8ba-bc56-42e7-8efa-3caf37784c8f","Type":"ContainerDied","Data":"6cc7ce53474c28eb3e28a8391735d09661723e48bb3fec6dae24364c9d85ddae"} Nov 25 15:18:29 crc kubenswrapper[4806]: I1125 15:18:29.729759 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cc7ce53474c28eb3e28a8391735d09661723e48bb3fec6dae24364c9d85ddae" Nov 25 15:18:29 crc kubenswrapper[4806]: I1125 15:18:29.905675 4806 util.go:48] "No ready sandbox for pod can be found. 
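
All four ceilometer-0 containers were sent a kill at 15:18:23 with gracePeriod=30; proxy-httpd, sg-core, and the central agent are reported finished within the same second (exit codes 0, 2, and 0), while ceilometer-notification-agent takes about 6.7s (killed 15:18:23.035766, finished 15:18:29.729690), still well inside the grace period. A sketch that measures kill-to-exit latency by joining the two record types on container ID (illustrative names, same stdin assumption, and same-day records assumed since only time of day is parsed):

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

var (
	killRe = regexp.MustCompile(`I\d{4} (\d{2}:\d{2}:\d{2}\.\d{6}) \d+ kuberuntime_container\.go:\d+\] "Killing container with a grace period" pod="[^"]+" podUID="[^"]+" containerName="([^"]+)" containerID="cri-o://([0-9a-f]+)" gracePeriod=(\d+)`)
	dieRe  = regexp.MustCompile(`I\d{4} (\d{2}:\d{2}:\d{2}\.\d{6}) \d+ generic\.go:\d+\] "Generic \(PLEG\): container finished" podID="[^"]+" containerID="([0-9a-f]+)"`)
)

const layout = "15:04:05.000000" // klog time-of-day header

func main() {
	type kill struct {
		name string
		at   time.Time
	}
	kills := map[string]kill{} // container ID -> kill record
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		line := sc.Text()
		for _, m := range killRe.FindAllStringSubmatch(line, -1) {
			t, _ := time.Parse(layout, m[1])
			kills[m[3]] = kill{name: m[2], at: t}
		}
		for _, m := range dieRe.FindAllStringSubmatch(line, -1) {
			k, ok := kills[m[2]]
			if !ok {
				continue // container was not killed by the kubelet in this window
			}
			t, _ := time.Parse(layout, m[1])
			fmt.Printf("%s exited %.3fs after kill\n", k.name, t.Sub(k.at).Seconds())
			delete(kills, m[2])
		}
	}
}
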
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:18:29 crc kubenswrapper[4806]: I1125 15:18:29.987327 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa1fb8ba-bc56-42e7-8efa-3caf37784c8f-run-httpd\") pod \"fa1fb8ba-bc56-42e7-8efa-3caf37784c8f\" (UID: \"fa1fb8ba-bc56-42e7-8efa-3caf37784c8f\") " Nov 25 15:18:29 crc kubenswrapper[4806]: I1125 15:18:29.987424 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fa1fb8ba-bc56-42e7-8efa-3caf37784c8f-sg-core-conf-yaml\") pod \"fa1fb8ba-bc56-42e7-8efa-3caf37784c8f\" (UID: \"fa1fb8ba-bc56-42e7-8efa-3caf37784c8f\") " Nov 25 15:18:29 crc kubenswrapper[4806]: I1125 15:18:29.987472 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa1fb8ba-bc56-42e7-8efa-3caf37784c8f-scripts\") pod \"fa1fb8ba-bc56-42e7-8efa-3caf37784c8f\" (UID: \"fa1fb8ba-bc56-42e7-8efa-3caf37784c8f\") " Nov 25 15:18:29 crc kubenswrapper[4806]: I1125 15:18:29.987510 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa1fb8ba-bc56-42e7-8efa-3caf37784c8f-log-httpd\") pod \"fa1fb8ba-bc56-42e7-8efa-3caf37784c8f\" (UID: \"fa1fb8ba-bc56-42e7-8efa-3caf37784c8f\") " Nov 25 15:18:29 crc kubenswrapper[4806]: I1125 15:18:29.987711 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa1fb8ba-bc56-42e7-8efa-3caf37784c8f-combined-ca-bundle\") pod \"fa1fb8ba-bc56-42e7-8efa-3caf37784c8f\" (UID: \"fa1fb8ba-bc56-42e7-8efa-3caf37784c8f\") " Nov 25 15:18:29 crc kubenswrapper[4806]: I1125 15:18:29.987782 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpfsd\" (UniqueName: \"kubernetes.io/projected/fa1fb8ba-bc56-42e7-8efa-3caf37784c8f-kube-api-access-zpfsd\") pod \"fa1fb8ba-bc56-42e7-8efa-3caf37784c8f\" (UID: \"fa1fb8ba-bc56-42e7-8efa-3caf37784c8f\") " Nov 25 15:18:29 crc kubenswrapper[4806]: I1125 15:18:29.987901 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa1fb8ba-bc56-42e7-8efa-3caf37784c8f-config-data\") pod \"fa1fb8ba-bc56-42e7-8efa-3caf37784c8f\" (UID: \"fa1fb8ba-bc56-42e7-8efa-3caf37784c8f\") " Nov 25 15:18:29 crc kubenswrapper[4806]: I1125 15:18:29.990713 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa1fb8ba-bc56-42e7-8efa-3caf37784c8f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fa1fb8ba-bc56-42e7-8efa-3caf37784c8f" (UID: "fa1fb8ba-bc56-42e7-8efa-3caf37784c8f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:18:29 crc kubenswrapper[4806]: I1125 15:18:29.993993 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa1fb8ba-bc56-42e7-8efa-3caf37784c8f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fa1fb8ba-bc56-42e7-8efa-3caf37784c8f" (UID: "fa1fb8ba-bc56-42e7-8efa-3caf37784c8f"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:18:29 crc kubenswrapper[4806]: I1125 15:18:29.999551 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa1fb8ba-bc56-42e7-8efa-3caf37784c8f-kube-api-access-zpfsd" (OuterVolumeSpecName: "kube-api-access-zpfsd") pod "fa1fb8ba-bc56-42e7-8efa-3caf37784c8f" (UID: "fa1fb8ba-bc56-42e7-8efa-3caf37784c8f"). InnerVolumeSpecName "kube-api-access-zpfsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:18:29 crc kubenswrapper[4806]: I1125 15:18:29.999805 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa1fb8ba-bc56-42e7-8efa-3caf37784c8f-scripts" (OuterVolumeSpecName: "scripts") pod "fa1fb8ba-bc56-42e7-8efa-3caf37784c8f" (UID: "fa1fb8ba-bc56-42e7-8efa-3caf37784c8f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:18:30 crc kubenswrapper[4806]: I1125 15:18:30.034763 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 25 15:18:30 crc kubenswrapper[4806]: I1125 15:18:30.046427 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa1fb8ba-bc56-42e7-8efa-3caf37784c8f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fa1fb8ba-bc56-42e7-8efa-3caf37784c8f" (UID: "fa1fb8ba-bc56-42e7-8efa-3caf37784c8f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:18:30 crc kubenswrapper[4806]: I1125 15:18:30.083825 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 25 15:18:30 crc kubenswrapper[4806]: I1125 15:18:30.094213 4806 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa1fb8ba-bc56-42e7-8efa-3caf37784c8f-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:30 crc kubenswrapper[4806]: I1125 15:18:30.094249 4806 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fa1fb8ba-bc56-42e7-8efa-3caf37784c8f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:30 crc kubenswrapper[4806]: I1125 15:18:30.094260 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa1fb8ba-bc56-42e7-8efa-3caf37784c8f-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:30 crc kubenswrapper[4806]: I1125 15:18:30.094267 4806 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa1fb8ba-bc56-42e7-8efa-3caf37784c8f-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:30 crc kubenswrapper[4806]: I1125 15:18:30.094276 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpfsd\" (UniqueName: \"kubernetes.io/projected/fa1fb8ba-bc56-42e7-8efa-3caf37784c8f-kube-api-access-zpfsd\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:30 crc kubenswrapper[4806]: I1125 15:18:30.119409 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa1fb8ba-bc56-42e7-8efa-3caf37784c8f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa1fb8ba-bc56-42e7-8efa-3caf37784c8f" (UID: "fa1fb8ba-bc56-42e7-8efa-3caf37784c8f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:18:30 crc kubenswrapper[4806]: I1125 15:18:30.127240 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa1fb8ba-bc56-42e7-8efa-3caf37784c8f-config-data" (OuterVolumeSpecName: "config-data") pod "fa1fb8ba-bc56-42e7-8efa-3caf37784c8f" (UID: "fa1fb8ba-bc56-42e7-8efa-3caf37784c8f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:18:30 crc kubenswrapper[4806]: I1125 15:18:30.196746 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa1fb8ba-bc56-42e7-8efa-3caf37784c8f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:30 crc kubenswrapper[4806]: I1125 15:18:30.196791 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa1fb8ba-bc56-42e7-8efa-3caf37784c8f-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:30 crc kubenswrapper[4806]: I1125 15:18:30.274130 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 15:18:30 crc kubenswrapper[4806]: I1125 15:18:30.274206 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 15:18:30 crc kubenswrapper[4806]: I1125 15:18:30.742911 4806 generic.go:334] "Generic (PLEG): container finished" podID="67ac4e7e-05a4-4a7d-add3-4ef6314354f3" containerID="96ea2f647d9838dca015dc4b099034e55397d347b90f2676b052eea6df169f49" exitCode=0 Nov 25 15:18:30 crc kubenswrapper[4806]: I1125 15:18:30.744517 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rhkzb" event={"ID":"67ac4e7e-05a4-4a7d-add3-4ef6314354f3","Type":"ContainerDied","Data":"96ea2f647d9838dca015dc4b099034e55397d347b90f2676b052eea6df169f49"} Nov 25 15:18:30 crc kubenswrapper[4806]: I1125 15:18:30.744598 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:18:30 crc kubenswrapper[4806]: I1125 15:18:30.799267 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 25 15:18:30 crc kubenswrapper[4806]: I1125 15:18:30.805491 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:18:30 crc kubenswrapper[4806]: I1125 15:18:30.821579 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:18:30 crc kubenswrapper[4806]: I1125 15:18:30.835322 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:18:30 crc kubenswrapper[4806]: E1125 15:18:30.835916 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa1fb8ba-bc56-42e7-8efa-3caf37784c8f" containerName="ceilometer-notification-agent" Nov 25 15:18:30 crc kubenswrapper[4806]: I1125 15:18:30.835942 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa1fb8ba-bc56-42e7-8efa-3caf37784c8f" containerName="ceilometer-notification-agent" Nov 25 15:18:30 crc kubenswrapper[4806]: E1125 15:18:30.835969 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa1fb8ba-bc56-42e7-8efa-3caf37784c8f" containerName="ceilometer-central-agent" Nov 25 15:18:30 crc kubenswrapper[4806]: I1125 15:18:30.835977 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa1fb8ba-bc56-42e7-8efa-3caf37784c8f" containerName="ceilometer-central-agent" Nov 25 15:18:30 crc kubenswrapper[4806]: E1125 15:18:30.836013 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa1fb8ba-bc56-42e7-8efa-3caf37784c8f" containerName="sg-core" Nov 25 15:18:30 crc kubenswrapper[4806]: I1125 15:18:30.836021 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa1fb8ba-bc56-42e7-8efa-3caf37784c8f" containerName="sg-core" Nov 25 15:18:30 crc kubenswrapper[4806]: E1125 15:18:30.836041 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa1fb8ba-bc56-42e7-8efa-3caf37784c8f" containerName="proxy-httpd" Nov 25 15:18:30 crc kubenswrapper[4806]: I1125 15:18:30.836048 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa1fb8ba-bc56-42e7-8efa-3caf37784c8f" containerName="proxy-httpd" Nov 25 15:18:30 crc kubenswrapper[4806]: I1125 15:18:30.836226 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa1fb8ba-bc56-42e7-8efa-3caf37784c8f" containerName="proxy-httpd" Nov 25 15:18:30 crc kubenswrapper[4806]: I1125 15:18:30.836259 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa1fb8ba-bc56-42e7-8efa-3caf37784c8f" containerName="ceilometer-central-agent" Nov 25 15:18:30 crc kubenswrapper[4806]: I1125 15:18:30.836272 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa1fb8ba-bc56-42e7-8efa-3caf37784c8f" containerName="sg-core" Nov 25 15:18:30 crc kubenswrapper[4806]: I1125 15:18:30.836280 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa1fb8ba-bc56-42e7-8efa-3caf37784c8f" containerName="ceilometer-notification-agent" Nov 25 15:18:30 crc kubenswrapper[4806]: I1125 15:18:30.838360 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:18:30 crc kubenswrapper[4806]: I1125 15:18:30.841454 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 25 15:18:30 crc kubenswrapper[4806]: I1125 15:18:30.841792 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 15:18:30 crc kubenswrapper[4806]: I1125 15:18:30.842005 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 15:18:30 crc kubenswrapper[4806]: I1125 15:18:30.889236 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:18:30 crc kubenswrapper[4806]: I1125 15:18:30.910531 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/63a51daa-d61f-4f42-8b31-ff644dfae8c8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"63a51daa-d61f-4f42-8b31-ff644dfae8c8\") " pod="openstack/ceilometer-0" Nov 25 15:18:30 crc kubenswrapper[4806]: I1125 15:18:30.910577 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z7b7\" (UniqueName: \"kubernetes.io/projected/63a51daa-d61f-4f42-8b31-ff644dfae8c8-kube-api-access-6z7b7\") pod \"ceilometer-0\" (UID: \"63a51daa-d61f-4f42-8b31-ff644dfae8c8\") " pod="openstack/ceilometer-0" Nov 25 15:18:30 crc kubenswrapper[4806]: I1125 15:18:30.910618 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63a51daa-d61f-4f42-8b31-ff644dfae8c8-scripts\") pod \"ceilometer-0\" (UID: \"63a51daa-d61f-4f42-8b31-ff644dfae8c8\") " pod="openstack/ceilometer-0" Nov 25 15:18:30 crc kubenswrapper[4806]: I1125 15:18:30.910998 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63a51daa-d61f-4f42-8b31-ff644dfae8c8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"63a51daa-d61f-4f42-8b31-ff644dfae8c8\") " pod="openstack/ceilometer-0" Nov 25 15:18:30 crc kubenswrapper[4806]: I1125 15:18:30.911128 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63a51daa-d61f-4f42-8b31-ff644dfae8c8-run-httpd\") pod \"ceilometer-0\" (UID: \"63a51daa-d61f-4f42-8b31-ff644dfae8c8\") " pod="openstack/ceilometer-0" Nov 25 15:18:30 crc kubenswrapper[4806]: I1125 15:18:30.911194 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63a51daa-d61f-4f42-8b31-ff644dfae8c8-config-data\") pod \"ceilometer-0\" (UID: \"63a51daa-d61f-4f42-8b31-ff644dfae8c8\") " pod="openstack/ceilometer-0" Nov 25 15:18:30 crc kubenswrapper[4806]: I1125 15:18:30.911298 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63a51daa-d61f-4f42-8b31-ff644dfae8c8-log-httpd\") pod \"ceilometer-0\" (UID: \"63a51daa-d61f-4f42-8b31-ff644dfae8c8\") " pod="openstack/ceilometer-0" Nov 25 15:18:30 crc kubenswrapper[4806]: I1125 15:18:30.911651 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/63a51daa-d61f-4f42-8b31-ff644dfae8c8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"63a51daa-d61f-4f42-8b31-ff644dfae8c8\") " pod="openstack/ceilometer-0" Nov 25 15:18:31 crc kubenswrapper[4806]: I1125 15:18:31.013702 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63a51daa-d61f-4f42-8b31-ff644dfae8c8-run-httpd\") pod \"ceilometer-0\" (UID: \"63a51daa-d61f-4f42-8b31-ff644dfae8c8\") " pod="openstack/ceilometer-0" Nov 25 15:18:31 crc kubenswrapper[4806]: I1125 15:18:31.013770 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63a51daa-d61f-4f42-8b31-ff644dfae8c8-config-data\") pod \"ceilometer-0\" (UID: \"63a51daa-d61f-4f42-8b31-ff644dfae8c8\") " pod="openstack/ceilometer-0" Nov 25 15:18:31 crc kubenswrapper[4806]: I1125 15:18:31.013823 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63a51daa-d61f-4f42-8b31-ff644dfae8c8-log-httpd\") pod \"ceilometer-0\" (UID: \"63a51daa-d61f-4f42-8b31-ff644dfae8c8\") " pod="openstack/ceilometer-0" Nov 25 15:18:31 crc kubenswrapper[4806]: I1125 15:18:31.013861 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a51daa-d61f-4f42-8b31-ff644dfae8c8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"63a51daa-d61f-4f42-8b31-ff644dfae8c8\") " pod="openstack/ceilometer-0" Nov 25 15:18:31 crc kubenswrapper[4806]: I1125 15:18:31.013922 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/63a51daa-d61f-4f42-8b31-ff644dfae8c8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"63a51daa-d61f-4f42-8b31-ff644dfae8c8\") " pod="openstack/ceilometer-0" Nov 25 15:18:31 crc kubenswrapper[4806]: I1125 15:18:31.013951 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6z7b7\" (UniqueName: \"kubernetes.io/projected/63a51daa-d61f-4f42-8b31-ff644dfae8c8-kube-api-access-6z7b7\") pod \"ceilometer-0\" (UID: \"63a51daa-d61f-4f42-8b31-ff644dfae8c8\") " pod="openstack/ceilometer-0" Nov 25 15:18:31 crc kubenswrapper[4806]: I1125 15:18:31.013982 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63a51daa-d61f-4f42-8b31-ff644dfae8c8-scripts\") pod \"ceilometer-0\" (UID: \"63a51daa-d61f-4f42-8b31-ff644dfae8c8\") " pod="openstack/ceilometer-0" Nov 25 15:18:31 crc kubenswrapper[4806]: I1125 15:18:31.014081 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63a51daa-d61f-4f42-8b31-ff644dfae8c8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"63a51daa-d61f-4f42-8b31-ff644dfae8c8\") " pod="openstack/ceilometer-0" Nov 25 15:18:31 crc kubenswrapper[4806]: I1125 15:18:31.014309 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63a51daa-d61f-4f42-8b31-ff644dfae8c8-run-httpd\") pod \"ceilometer-0\" (UID: \"63a51daa-d61f-4f42-8b31-ff644dfae8c8\") " pod="openstack/ceilometer-0" Nov 25 15:18:31 crc kubenswrapper[4806]: I1125 15:18:31.014650 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/63a51daa-d61f-4f42-8b31-ff644dfae8c8-log-httpd\") pod \"ceilometer-0\" (UID: \"63a51daa-d61f-4f42-8b31-ff644dfae8c8\") " pod="openstack/ceilometer-0" Nov 25 15:18:31 crc kubenswrapper[4806]: I1125 15:18:31.018974 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63a51daa-d61f-4f42-8b31-ff644dfae8c8-config-data\") pod \"ceilometer-0\" (UID: \"63a51daa-d61f-4f42-8b31-ff644dfae8c8\") " pod="openstack/ceilometer-0" Nov 25 15:18:31 crc kubenswrapper[4806]: I1125 15:18:31.019264 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63a51daa-d61f-4f42-8b31-ff644dfae8c8-scripts\") pod \"ceilometer-0\" (UID: \"63a51daa-d61f-4f42-8b31-ff644dfae8c8\") " pod="openstack/ceilometer-0" Nov 25 15:18:31 crc kubenswrapper[4806]: I1125 15:18:31.019919 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a51daa-d61f-4f42-8b31-ff644dfae8c8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"63a51daa-d61f-4f42-8b31-ff644dfae8c8\") " pod="openstack/ceilometer-0" Nov 25 15:18:31 crc kubenswrapper[4806]: I1125 15:18:31.020571 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/63a51daa-d61f-4f42-8b31-ff644dfae8c8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"63a51daa-d61f-4f42-8b31-ff644dfae8c8\") " pod="openstack/ceilometer-0" Nov 25 15:18:31 crc kubenswrapper[4806]: I1125 15:18:31.022103 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63a51daa-d61f-4f42-8b31-ff644dfae8c8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"63a51daa-d61f-4f42-8b31-ff644dfae8c8\") " pod="openstack/ceilometer-0" Nov 25 15:18:31 crc kubenswrapper[4806]: I1125 15:18:31.035826 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z7b7\" (UniqueName: \"kubernetes.io/projected/63a51daa-d61f-4f42-8b31-ff644dfae8c8-kube-api-access-6z7b7\") pod \"ceilometer-0\" (UID: \"63a51daa-d61f-4f42-8b31-ff644dfae8c8\") " pod="openstack/ceilometer-0" Nov 25 15:18:31 crc kubenswrapper[4806]: I1125 15:18:31.161405 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:18:31 crc kubenswrapper[4806]: I1125 15:18:31.181594 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="569f4221-7042-41a7-a783-a975cc7a02b4" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.216:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 15:18:31 crc kubenswrapper[4806]: I1125 15:18:31.181633 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="569f4221-7042-41a7-a783-a975cc7a02b4" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.216:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 15:18:31 crc kubenswrapper[4806]: I1125 15:18:31.665343 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:18:31 crc kubenswrapper[4806]: I1125 15:18:31.755359 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63a51daa-d61f-4f42-8b31-ff644dfae8c8","Type":"ContainerStarted","Data":"9725acaa119a12279429c49b1126fc3807e58cfe7fadce25ebfd9fb615f32fe7"} Nov 25 15:18:32 crc kubenswrapper[4806]: I1125 15:18:32.103361 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa1fb8ba-bc56-42e7-8efa-3caf37784c8f" path="/var/lib/kubelet/pods/fa1fb8ba-bc56-42e7-8efa-3caf37784c8f/volumes" Nov 25 15:18:32 crc kubenswrapper[4806]: I1125 15:18:32.356834 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 25 15:18:32 crc kubenswrapper[4806]: I1125 15:18:32.768278 4806 generic.go:334] "Generic (PLEG): container finished" podID="67ac4e7e-05a4-4a7d-add3-4ef6314354f3" containerID="9674fe6e6673831ac81acd91d51e3d71291f24d18c0d116fe52c93724936859a" exitCode=0 Nov 25 15:18:32 crc kubenswrapper[4806]: I1125 15:18:32.768414 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rhkzb" event={"ID":"67ac4e7e-05a4-4a7d-add3-4ef6314354f3","Type":"ContainerDied","Data":"9674fe6e6673831ac81acd91d51e3d71291f24d18c0d116fe52c93724936859a"} Nov 25 15:18:33 crc kubenswrapper[4806]: I1125 15:18:33.789990 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63a51daa-d61f-4f42-8b31-ff644dfae8c8","Type":"ContainerStarted","Data":"d87fe72d04ac12eb528c95e9f55da7a3940b2fb0c86aa7e2187ddc2641a30c3e"} Nov 25 15:18:33 crc kubenswrapper[4806]: I1125 15:18:33.794469 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rhkzb" event={"ID":"67ac4e7e-05a4-4a7d-add3-4ef6314354f3","Type":"ContainerStarted","Data":"3ff8fb5961eae5ffe01d518bff28b6243994fa578ea10ad87bf01c1ee0b0ed5f"} Nov 25 15:18:33 crc kubenswrapper[4806]: I1125 15:18:33.810745 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rhkzb" podStartSLOduration=4.239731438 podStartE2EDuration="6.810726429s" podCreationTimestamp="2025-11-25 15:18:27 +0000 UTC" firstStartedPulling="2025-11-25 15:18:30.746360615 +0000 UTC m=+1543.398503026" lastFinishedPulling="2025-11-25 15:18:33.317355606 +0000 UTC m=+1545.969498017" observedRunningTime="2025-11-25 15:18:33.808605578 +0000 UTC m=+1546.460747999" watchObservedRunningTime="2025-11-25 15:18:33.810726429 +0000 UTC m=+1546.462868840" Nov 25 15:18:34 crc kubenswrapper[4806]: I1125 15:18:34.810558 4806 
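The two prober.go:107 entries above record the kubelet's HTTP startup probe against nova-api timing out: no response headers arrived from http://10.217.0.216:8774/ before the probe deadline. The suffix "(Client.Timeout exceeded while awaiting headers)" is the wording Go's net/http client appends when its Timeout elapses while waiting for headers, which is what the prober surfaces here. A minimal Go sketch that reproduces the same error string, assuming a 1-second timeout (the probe's configured timeoutSeconds is not recorded in this log):

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// URL taken from the probe output above; the 1s timeout is an assumption.
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get("http://10.217.0.216:8774/")
	if err != nil {
		// On timeout, err ends with:
		// "context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe ok:", resp.Status)
}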
Nov 25 15:18:35 crc kubenswrapper[4806]: I1125 15:18:35.822185 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63a51daa-d61f-4f42-8b31-ff644dfae8c8","Type":"ContainerStarted","Data":"a0effdc66f7443fce3e418e6182d0591f4106a576fae59a3790dc2ba73473157"}
Nov 25 15:18:36 crc kubenswrapper[4806]: I1125 15:18:36.837584 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63a51daa-d61f-4f42-8b31-ff644dfae8c8","Type":"ContainerStarted","Data":"896bfb4ee151cf83987529d7ad7e9283ab12a05e36276a11206bc8f42e274dc2"}
Nov 25 15:18:36 crc kubenswrapper[4806]: I1125 15:18:36.838938 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Nov 25 15:18:36 crc kubenswrapper[4806]: I1125 15:18:36.875522 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.260286844 podStartE2EDuration="6.875494964s" podCreationTimestamp="2025-11-25 15:18:30 +0000 UTC" firstStartedPulling="2025-11-25 15:18:31.669942959 +0000 UTC m=+1544.322085370" lastFinishedPulling="2025-11-25 15:18:36.285151079 +0000 UTC m=+1548.937293490" observedRunningTime="2025-11-25 15:18:36.866783644 +0000 UTC m=+1549.518926075" watchObservedRunningTime="2025-11-25 15:18:36.875494964 +0000 UTC m=+1549.527637375"
Nov 25 15:18:37 crc kubenswrapper[4806]: I1125 15:18:37.853731 4806 generic.go:334] "Generic (PLEG): container finished" podID="36d70a3c-4782-4b4a-a8da-89cfff59cf41" containerID="f0432f1aad9274a36760c8e88ade17e9aa79449723fb51c4959722204db12ad4" exitCode=137
Nov 25 15:18:37 crc kubenswrapper[4806]: I1125 15:18:37.853999 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"36d70a3c-4782-4b4a-a8da-89cfff59cf41","Type":"ContainerDied","Data":"f0432f1aad9274a36760c8e88ade17e9aa79449723fb51c4959722204db12ad4"}
Nov 25 15:18:37 crc kubenswrapper[4806]: I1125 15:18:37.855976 4806 generic.go:334] "Generic (PLEG): container finished" podID="827f0f62-0f25-4c2c-9b0b-b0233cecc48e" containerID="b947ddc8cce612c9e97dd1a056538aa65cc81fbdcf53f5d04d73fecc46802437" exitCode=137
Nov 25 15:18:37 crc kubenswrapper[4806]: I1125 15:18:37.856009 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"827f0f62-0f25-4c2c-9b0b-b0233cecc48e","Type":"ContainerDied","Data":"b947ddc8cce612c9e97dd1a056538aa65cc81fbdcf53f5d04d73fecc46802437"}
Nov 25 15:18:38 crc kubenswrapper[4806]: I1125 15:18:38.024309 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Nov 25 15:18:38 crc kubenswrapper[4806]: I1125 15:18:38.073904 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rhkzb"
Nov 25 15:18:38 crc kubenswrapper[4806]: I1125 15:18:38.073982 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rhkzb"
Nov 25 15:18:38 crc kubenswrapper[4806]: I1125 15:18:38.140622 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rhkzb"
Nov 25 15:18:38 crc kubenswrapper[4806]: I1125 15:18:38.217114 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/827f0f62-0f25-4c2c-9b0b-b0233cecc48e-combined-ca-bundle\") pod \"827f0f62-0f25-4c2c-9b0b-b0233cecc48e\" (UID: \"827f0f62-0f25-4c2c-9b0b-b0233cecc48e\") "
Nov 25 15:18:38 crc kubenswrapper[4806]: I1125 15:18:38.218033 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76ccs\" (UniqueName: \"kubernetes.io/projected/827f0f62-0f25-4c2c-9b0b-b0233cecc48e-kube-api-access-76ccs\") pod \"827f0f62-0f25-4c2c-9b0b-b0233cecc48e\" (UID: \"827f0f62-0f25-4c2c-9b0b-b0233cecc48e\") "
Nov 25 15:18:38 crc kubenswrapper[4806]: I1125 15:18:38.218142 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/827f0f62-0f25-4c2c-9b0b-b0233cecc48e-config-data\") pod \"827f0f62-0f25-4c2c-9b0b-b0233cecc48e\" (UID: \"827f0f62-0f25-4c2c-9b0b-b0233cecc48e\") "
Nov 25 15:18:38 crc kubenswrapper[4806]: I1125 15:18:38.225120 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/827f0f62-0f25-4c2c-9b0b-b0233cecc48e-kube-api-access-76ccs" (OuterVolumeSpecName: "kube-api-access-76ccs") pod "827f0f62-0f25-4c2c-9b0b-b0233cecc48e" (UID: "827f0f62-0f25-4c2c-9b0b-b0233cecc48e"). InnerVolumeSpecName "kube-api-access-76ccs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:18:38 crc kubenswrapper[4806]: I1125 15:18:38.258152 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/827f0f62-0f25-4c2c-9b0b-b0233cecc48e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "827f0f62-0f25-4c2c-9b0b-b0233cecc48e" (UID: "827f0f62-0f25-4c2c-9b0b-b0233cecc48e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:18:38 crc kubenswrapper[4806]: I1125 15:18:38.270230 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/827f0f62-0f25-4c2c-9b0b-b0233cecc48e-config-data" (OuterVolumeSpecName: "config-data") pod "827f0f62-0f25-4c2c-9b0b-b0233cecc48e" (UID: "827f0f62-0f25-4c2c-9b0b-b0233cecc48e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:18:38 crc kubenswrapper[4806]: I1125 15:18:38.295858 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 25 15:18:38 crc kubenswrapper[4806]: I1125 15:18:38.322425 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76ccs\" (UniqueName: \"kubernetes.io/projected/827f0f62-0f25-4c2c-9b0b-b0233cecc48e-kube-api-access-76ccs\") on node \"crc\" DevicePath \"\""
Nov 25 15:18:38 crc kubenswrapper[4806]: I1125 15:18:38.322472 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/827f0f62-0f25-4c2c-9b0b-b0233cecc48e-config-data\") on node \"crc\" DevicePath \"\""
Nov 25 15:18:38 crc kubenswrapper[4806]: I1125 15:18:38.322489 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/827f0f62-0f25-4c2c-9b0b-b0233cecc48e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 15:18:38 crc kubenswrapper[4806]: I1125 15:18:38.423867 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36d70a3c-4782-4b4a-a8da-89cfff59cf41-combined-ca-bundle\") pod \"36d70a3c-4782-4b4a-a8da-89cfff59cf41\" (UID: \"36d70a3c-4782-4b4a-a8da-89cfff59cf41\") "
Nov 25 15:18:38 crc kubenswrapper[4806]: I1125 15:18:38.423976 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36d70a3c-4782-4b4a-a8da-89cfff59cf41-logs\") pod \"36d70a3c-4782-4b4a-a8da-89cfff59cf41\" (UID: \"36d70a3c-4782-4b4a-a8da-89cfff59cf41\") "
Nov 25 15:18:38 crc kubenswrapper[4806]: I1125 15:18:38.424012 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fnrs\" (UniqueName: \"kubernetes.io/projected/36d70a3c-4782-4b4a-a8da-89cfff59cf41-kube-api-access-4fnrs\") pod \"36d70a3c-4782-4b4a-a8da-89cfff59cf41\" (UID: \"36d70a3c-4782-4b4a-a8da-89cfff59cf41\") "
Nov 25 15:18:38 crc kubenswrapper[4806]: I1125 15:18:38.424430 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36d70a3c-4782-4b4a-a8da-89cfff59cf41-config-data\") pod \"36d70a3c-4782-4b4a-a8da-89cfff59cf41\" (UID: \"36d70a3c-4782-4b4a-a8da-89cfff59cf41\") "
Nov 25 15:18:38 crc kubenswrapper[4806]: I1125 15:18:38.426709 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36d70a3c-4782-4b4a-a8da-89cfff59cf41-logs" (OuterVolumeSpecName: "logs") pod "36d70a3c-4782-4b4a-a8da-89cfff59cf41" (UID: "36d70a3c-4782-4b4a-a8da-89cfff59cf41"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 15:18:38 crc kubenswrapper[4806]: I1125 15:18:38.432168 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36d70a3c-4782-4b4a-a8da-89cfff59cf41-kube-api-access-4fnrs" (OuterVolumeSpecName: "kube-api-access-4fnrs") pod "36d70a3c-4782-4b4a-a8da-89cfff59cf41" (UID: "36d70a3c-4782-4b4a-a8da-89cfff59cf41"). InnerVolumeSpecName "kube-api-access-4fnrs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:18:38 crc kubenswrapper[4806]: I1125 15:18:38.456839 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36d70a3c-4782-4b4a-a8da-89cfff59cf41-config-data" (OuterVolumeSpecName: "config-data") pod "36d70a3c-4782-4b4a-a8da-89cfff59cf41" (UID: "36d70a3c-4782-4b4a-a8da-89cfff59cf41"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:18:38 crc kubenswrapper[4806]: I1125 15:18:38.461005 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36d70a3c-4782-4b4a-a8da-89cfff59cf41-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "36d70a3c-4782-4b4a-a8da-89cfff59cf41" (UID: "36d70a3c-4782-4b4a-a8da-89cfff59cf41"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:18:38 crc kubenswrapper[4806]: I1125 15:18:38.526622 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36d70a3c-4782-4b4a-a8da-89cfff59cf41-config-data\") on node \"crc\" DevicePath \"\""
Nov 25 15:18:38 crc kubenswrapper[4806]: I1125 15:18:38.526664 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36d70a3c-4782-4b4a-a8da-89cfff59cf41-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 15:18:38 crc kubenswrapper[4806]: I1125 15:18:38.526678 4806 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36d70a3c-4782-4b4a-a8da-89cfff59cf41-logs\") on node \"crc\" DevicePath \"\""
Nov 25 15:18:38 crc kubenswrapper[4806]: I1125 15:18:38.526690 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fnrs\" (UniqueName: \"kubernetes.io/projected/36d70a3c-4782-4b4a-a8da-89cfff59cf41-kube-api-access-4fnrs\") on node \"crc\" DevicePath \"\""
Nov 25 15:18:38 crc kubenswrapper[4806]: I1125 15:18:38.867790 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 25 15:18:38 crc kubenswrapper[4806]: I1125 15:18:38.868758 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"36d70a3c-4782-4b4a-a8da-89cfff59cf41","Type":"ContainerDied","Data":"20728b6653f7ff7ac15f5dd72d35890d361cf0c506529564b9c1fdc977d5ffe8"}
Nov 25 15:18:38 crc kubenswrapper[4806]: I1125 15:18:38.868879 4806 scope.go:117] "RemoveContainer" containerID="f0432f1aad9274a36760c8e88ade17e9aa79449723fb51c4959722204db12ad4"
Nov 25 15:18:38 crc kubenswrapper[4806]: I1125 15:18:38.873050 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"827f0f62-0f25-4c2c-9b0b-b0233cecc48e","Type":"ContainerDied","Data":"f22b8f11f6bafd7b837620a76ff76da752e4bd8b694af58b96b2b79e9c94b929"}
Nov 25 15:18:38 crc kubenswrapper[4806]: I1125 15:18:38.873155 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Nov 25 15:18:38 crc kubenswrapper[4806]: I1125 15:18:38.904687 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Nov 25 15:18:38 crc kubenswrapper[4806]: I1125 15:18:38.920755 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Nov 25 15:18:38 crc kubenswrapper[4806]: I1125 15:18:38.923722 4806 scope.go:117] "RemoveContainer" containerID="2679f0098c99135e253faefe284b114a25ed628af8971f7f93b3f803f4c2fcc1"
Nov 25 15:18:38 crc kubenswrapper[4806]: I1125 15:18:38.932637 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Nov 25 15:18:38 crc kubenswrapper[4806]: I1125 15:18:38.946155 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Nov 25 15:18:38 crc kubenswrapper[4806]: I1125 15:18:38.979733 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Nov 25 15:18:38 crc kubenswrapper[4806]: E1125 15:18:38.980555 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36d70a3c-4782-4b4a-a8da-89cfff59cf41" containerName="nova-metadata-log"
Nov 25 15:18:38 crc kubenswrapper[4806]: I1125 15:18:38.980574 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="36d70a3c-4782-4b4a-a8da-89cfff59cf41" containerName="nova-metadata-log"
Nov 25 15:18:38 crc kubenswrapper[4806]: E1125 15:18:38.980613 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="827f0f62-0f25-4c2c-9b0b-b0233cecc48e" containerName="nova-cell1-novncproxy-novncproxy"
Nov 25 15:18:38 crc kubenswrapper[4806]: I1125 15:18:38.980621 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="827f0f62-0f25-4c2c-9b0b-b0233cecc48e" containerName="nova-cell1-novncproxy-novncproxy"
Nov 25 15:18:38 crc kubenswrapper[4806]: E1125 15:18:38.980644 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36d70a3c-4782-4b4a-a8da-89cfff59cf41" containerName="nova-metadata-metadata"
Nov 25 15:18:38 crc kubenswrapper[4806]: I1125 15:18:38.980650 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="36d70a3c-4782-4b4a-a8da-89cfff59cf41" containerName="nova-metadata-metadata"
Nov 25 15:18:38 crc kubenswrapper[4806]: I1125 15:18:38.980887 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="36d70a3c-4782-4b4a-a8da-89cfff59cf41" containerName="nova-metadata-metadata"
Nov 25 15:18:38 crc kubenswrapper[4806]: I1125 15:18:38.980925 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="36d70a3c-4782-4b4a-a8da-89cfff59cf41" containerName="nova-metadata-log"
Nov 25 15:18:38 crc kubenswrapper[4806]: I1125 15:18:38.980937 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="827f0f62-0f25-4c2c-9b0b-b0233cecc48e" containerName="nova-cell1-novncproxy-novncproxy"
Nov 25 15:18:38 crc kubenswrapper[4806]: I1125 15:18:38.982210 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 25 15:18:38 crc kubenswrapper[4806]: I1125 15:18:38.986742 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Nov 25 15:18:38 crc kubenswrapper[4806]: I1125 15:18:38.990993 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Nov 25 15:18:39 crc kubenswrapper[4806]: I1125 15:18:39.006873 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rhkzb"
Nov 25 15:18:39 crc kubenswrapper[4806]: I1125 15:18:39.009411 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 25 15:18:39 crc kubenswrapper[4806]: I1125 15:18:39.024135 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Nov 25 15:18:39 crc kubenswrapper[4806]: I1125 15:18:39.026158 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Nov 25 15:18:39 crc kubenswrapper[4806]: I1125 15:18:39.028945 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Nov 25 15:18:39 crc kubenswrapper[4806]: I1125 15:18:39.029179 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Nov 25 15:18:39 crc kubenswrapper[4806]: I1125 15:18:39.029396 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Nov 25 15:18:39 crc kubenswrapper[4806]: I1125 15:18:39.060108 4806 scope.go:117] "RemoveContainer" containerID="b947ddc8cce612c9e97dd1a056538aa65cc81fbdcf53f5d04d73fecc46802437"
Nov 25 15:18:39 crc kubenswrapper[4806]: I1125 15:18:39.066960 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Nov 25 15:18:39 crc kubenswrapper[4806]: I1125 15:18:39.149852 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rhkzb"]
Nov 25 15:18:39 crc kubenswrapper[4806]: I1125 15:18:39.165875 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/440a9ff6-14b2-4205-bdd4-4e4861d236a9-logs\") pod \"nova-metadata-0\" (UID: \"440a9ff6-14b2-4205-bdd4-4e4861d236a9\") " pod="openstack/nova-metadata-0"
Nov 25 15:18:39 crc kubenswrapper[4806]: I1125 15:18:39.165958 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5zgm\" (UniqueName: \"kubernetes.io/projected/f96a2277-fc94-465c-beae-9461e69ef4e3-kube-api-access-q5zgm\") pod \"nova-cell1-novncproxy-0\" (UID: \"f96a2277-fc94-465c-beae-9461e69ef4e3\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 25 15:18:39 crc kubenswrapper[4806]: I1125 15:18:39.165988 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f96a2277-fc94-465c-beae-9461e69ef4e3-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f96a2277-fc94-465c-beae-9461e69ef4e3\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 25 15:18:39 crc kubenswrapper[4806]: I1125 15:18:39.166010 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f96a2277-fc94-465c-beae-9461e69ef4e3-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f96a2277-fc94-465c-beae-9461e69ef4e3\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 25 15:18:39 crc kubenswrapper[4806]: I1125 15:18:39.166105 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz8ns\" (UniqueName: \"kubernetes.io/projected/440a9ff6-14b2-4205-bdd4-4e4861d236a9-kube-api-access-kz8ns\") pod \"nova-metadata-0\" (UID: \"440a9ff6-14b2-4205-bdd4-4e4861d236a9\") " pod="openstack/nova-metadata-0"
Nov 25 15:18:39 crc kubenswrapper[4806]: I1125 15:18:39.166136 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f96a2277-fc94-465c-beae-9461e69ef4e3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f96a2277-fc94-465c-beae-9461e69ef4e3\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 25 15:18:39 crc kubenswrapper[4806]: I1125 15:18:39.166198 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/440a9ff6-14b2-4205-bdd4-4e4861d236a9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"440a9ff6-14b2-4205-bdd4-4e4861d236a9\") " pod="openstack/nova-metadata-0"
Nov 25 15:18:39 crc kubenswrapper[4806]: I1125 15:18:39.166269 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f96a2277-fc94-465c-beae-9461e69ef4e3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f96a2277-fc94-465c-beae-9461e69ef4e3\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 25 15:18:39 crc kubenswrapper[4806]: I1125 15:18:39.166374 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/440a9ff6-14b2-4205-bdd4-4e4861d236a9-config-data\") pod \"nova-metadata-0\" (UID: \"440a9ff6-14b2-4205-bdd4-4e4861d236a9\") " pod="openstack/nova-metadata-0"
Nov 25 15:18:39 crc kubenswrapper[4806]: I1125 15:18:39.166624 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/440a9ff6-14b2-4205-bdd4-4e4861d236a9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"440a9ff6-14b2-4205-bdd4-4e4861d236a9\") " pod="openstack/nova-metadata-0"
Nov 25 15:18:39 crc kubenswrapper[4806]: I1125 15:18:39.269029 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f96a2277-fc94-465c-beae-9461e69ef4e3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f96a2277-fc94-465c-beae-9461e69ef4e3\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 25 15:18:39 crc kubenswrapper[4806]: I1125 15:18:39.269344 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/440a9ff6-14b2-4205-bdd4-4e4861d236a9-config-data\") pod \"nova-metadata-0\" (UID: \"440a9ff6-14b2-4205-bdd4-4e4861d236a9\") " pod="openstack/nova-metadata-0"
Nov 25 15:18:39 crc kubenswrapper[4806]: I1125 15:18:39.270146 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/440a9ff6-14b2-4205-bdd4-4e4861d236a9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"440a9ff6-14b2-4205-bdd4-4e4861d236a9\") " pod="openstack/nova-metadata-0"
Nov 25 15:18:39 crc kubenswrapper[4806]: I1125 15:18:39.270303 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/440a9ff6-14b2-4205-bdd4-4e4861d236a9-logs\") pod \"nova-metadata-0\" (UID: \"440a9ff6-14b2-4205-bdd4-4e4861d236a9\") " pod="openstack/nova-metadata-0"
Nov 25 15:18:39 crc kubenswrapper[4806]: I1125 15:18:39.270432 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f96a2277-fc94-465c-beae-9461e69ef4e3-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f96a2277-fc94-465c-beae-9461e69ef4e3\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 25 15:18:39 crc kubenswrapper[4806]: I1125 15:18:39.270534 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f96a2277-fc94-465c-beae-9461e69ef4e3-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f96a2277-fc94-465c-beae-9461e69ef4e3\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 25 15:18:39 crc kubenswrapper[4806]: I1125 15:18:39.270596 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5zgm\" (UniqueName: \"kubernetes.io/projected/f96a2277-fc94-465c-beae-9461e69ef4e3-kube-api-access-q5zgm\") pod \"nova-cell1-novncproxy-0\" (UID: \"f96a2277-fc94-465c-beae-9461e69ef4e3\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 25 15:18:39 crc kubenswrapper[4806]: I1125 15:18:39.270625 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/440a9ff6-14b2-4205-bdd4-4e4861d236a9-logs\") pod \"nova-metadata-0\" (UID: \"440a9ff6-14b2-4205-bdd4-4e4861d236a9\") " pod="openstack/nova-metadata-0"
Nov 25 15:18:39 crc kubenswrapper[4806]: I1125 15:18:39.271669 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz8ns\" (UniqueName: \"kubernetes.io/projected/440a9ff6-14b2-4205-bdd4-4e4861d236a9-kube-api-access-kz8ns\") pod \"nova-metadata-0\" (UID: \"440a9ff6-14b2-4205-bdd4-4e4861d236a9\") " pod="openstack/nova-metadata-0"
Nov 25 15:18:39 crc kubenswrapper[4806]: I1125 15:18:39.271775 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f96a2277-fc94-465c-beae-9461e69ef4e3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f96a2277-fc94-465c-beae-9461e69ef4e3\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 25 15:18:39 crc kubenswrapper[4806]: I1125 15:18:39.271925 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/440a9ff6-14b2-4205-bdd4-4e4861d236a9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"440a9ff6-14b2-4205-bdd4-4e4861d236a9\") " pod="openstack/nova-metadata-0"
Nov 25 15:18:39 crc kubenswrapper[4806]: I1125 15:18:39.276417 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/440a9ff6-14b2-4205-bdd4-4e4861d236a9-config-data\") pod \"nova-metadata-0\" (UID: \"440a9ff6-14b2-4205-bdd4-4e4861d236a9\") " pod="openstack/nova-metadata-0"
Nov 25 15:18:39 crc kubenswrapper[4806]: I1125 15:18:39.277152 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/440a9ff6-14b2-4205-bdd4-4e4861d236a9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"440a9ff6-14b2-4205-bdd4-4e4861d236a9\") " pod="openstack/nova-metadata-0"
Nov 25 15:18:39 crc kubenswrapper[4806]: I1125 15:18:39.277161 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/440a9ff6-14b2-4205-bdd4-4e4861d236a9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"440a9ff6-14b2-4205-bdd4-4e4861d236a9\") " pod="openstack/nova-metadata-0"
Nov 25 15:18:39 crc kubenswrapper[4806]: I1125 15:18:39.278758 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f96a2277-fc94-465c-beae-9461e69ef4e3-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f96a2277-fc94-465c-beae-9461e69ef4e3\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 25 15:18:39 crc kubenswrapper[4806]: I1125 15:18:39.278951 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f96a2277-fc94-465c-beae-9461e69ef4e3-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f96a2277-fc94-465c-beae-9461e69ef4e3\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 25 15:18:39 crc kubenswrapper[4806]: I1125 15:18:39.286198 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f96a2277-fc94-465c-beae-9461e69ef4e3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f96a2277-fc94-465c-beae-9461e69ef4e3\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 25 15:18:39 crc kubenswrapper[4806]: I1125 15:18:39.293308 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f96a2277-fc94-465c-beae-9461e69ef4e3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f96a2277-fc94-465c-beae-9461e69ef4e3\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 25 15:18:39 crc kubenswrapper[4806]: I1125 15:18:39.297722 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5zgm\" (UniqueName: \"kubernetes.io/projected/f96a2277-fc94-465c-beae-9461e69ef4e3-kube-api-access-q5zgm\") pod \"nova-cell1-novncproxy-0\" (UID: \"f96a2277-fc94-465c-beae-9461e69ef4e3\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 25 15:18:39 crc kubenswrapper[4806]: I1125 15:18:39.303003 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz8ns\" (UniqueName: \"kubernetes.io/projected/440a9ff6-14b2-4205-bdd4-4e4861d236a9-kube-api-access-kz8ns\") pod \"nova-metadata-0\" (UID: \"440a9ff6-14b2-4205-bdd4-4e4861d236a9\") " pod="openstack/nova-metadata-0"
Nov 25 15:18:39 crc kubenswrapper[4806]: I1125 15:18:39.320806 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 25 15:18:39 crc kubenswrapper[4806]: I1125 15:18:39.386824 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Nov 25 15:18:39 crc kubenswrapper[4806]: I1125 15:18:39.878912 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 25 15:18:39 crc kubenswrapper[4806]: I1125 15:18:39.981033 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Nov 25 15:18:39 crc kubenswrapper[4806]: W1125 15:18:39.987246 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf96a2277_fc94_465c_beae_9461e69ef4e3.slice/crio-838e4e17b0f3ebfef7b00479242f600600e5a65457b980b3e6b89c35f26d8716 WatchSource:0}: Error finding container 838e4e17b0f3ebfef7b00479242f600600e5a65457b980b3e6b89c35f26d8716: Status 404 returned error can't find the container with id 838e4e17b0f3ebfef7b00479242f600600e5a65457b980b3e6b89c35f26d8716
Nov 25 15:18:40 crc kubenswrapper[4806]: I1125 15:18:40.100966 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36d70a3c-4782-4b4a-a8da-89cfff59cf41" path="/var/lib/kubelet/pods/36d70a3c-4782-4b4a-a8da-89cfff59cf41/volumes"
Nov 25 15:18:40 crc kubenswrapper[4806]: I1125 15:18:40.101870 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="827f0f62-0f25-4c2c-9b0b-b0233cecc48e" path="/var/lib/kubelet/pods/827f0f62-0f25-4c2c-9b0b-b0233cecc48e/volumes"
Nov 25 15:18:40 crc kubenswrapper[4806]: I1125 15:18:40.104556 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Nov 25 15:18:40 crc kubenswrapper[4806]: I1125 15:18:40.105195 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Nov 25 15:18:40 crc kubenswrapper[4806]: I1125 15:18:40.106132 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Nov 25 15:18:40 crc kubenswrapper[4806]: I1125 15:18:40.109219 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Nov 25 15:18:40 crc kubenswrapper[4806]: I1125 15:18:40.914299 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f96a2277-fc94-465c-beae-9461e69ef4e3","Type":"ContainerStarted","Data":"6c68d97243fbed7f9de6e620a8c2be8b64427ab0bc946ec86e90245a868504b9"}
Nov 25 15:18:40 crc kubenswrapper[4806]: I1125 15:18:40.914747 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f96a2277-fc94-465c-beae-9461e69ef4e3","Type":"ContainerStarted","Data":"838e4e17b0f3ebfef7b00479242f600600e5a65457b980b3e6b89c35f26d8716"}
Nov 25 15:18:40 crc kubenswrapper[4806]: I1125 15:18:40.922699 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"440a9ff6-14b2-4205-bdd4-4e4861d236a9","Type":"ContainerStarted","Data":"7d244ac6bacfe54898c1a0aede11a32ab58c14a64144e61a89d7600ed3f6fc35"}
Nov 25 15:18:40 crc kubenswrapper[4806]: I1125 15:18:40.922740 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"440a9ff6-14b2-4205-bdd4-4e4861d236a9","Type":"ContainerStarted","Data":"8b9f233170f15daa19ac1f91e6ecefc9af17b1f6935b0f6fb3cdfce85f2c829a"}
Nov 25 15:18:40 crc kubenswrapper[4806]: I1125 15:18:40.922753 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"440a9ff6-14b2-4205-bdd4-4e4861d236a9","Type":"ContainerStarted","Data":"fffa68be3649e2f080a2101b7c29cdee6c0d5a23825d1ff4cce277aa6e6c1cc8"}
Nov 25 15:18:40 crc kubenswrapper[4806]: I1125 15:18:40.922765 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Nov 25 15:18:40 crc kubenswrapper[4806]: I1125 15:18:40.922900 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rhkzb" podUID="67ac4e7e-05a4-4a7d-add3-4ef6314354f3" containerName="registry-server" containerID="cri-o://3ff8fb5961eae5ffe01d518bff28b6243994fa578ea10ad87bf01c1ee0b0ed5f" gracePeriod=2
Nov 25 15:18:40 crc kubenswrapper[4806]: I1125 15:18:40.926863 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Nov 25 15:18:40 crc kubenswrapper[4806]: I1125 15:18:40.959465 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.959443525 podStartE2EDuration="2.959443525s" podCreationTimestamp="2025-11-25 15:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:18:40.937484135 +0000 UTC m=+1553.589626556" watchObservedRunningTime="2025-11-25 15:18:40.959443525 +0000 UTC m=+1553.611585936"
Nov 25 15:18:40 crc kubenswrapper[4806]: I1125 15:18:40.986304 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.986283675 podStartE2EDuration="2.986283675s" podCreationTimestamp="2025-11-25 15:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:18:40.980156899 +0000 UTC m=+1553.632299310" watchObservedRunningTime="2025-11-25 15:18:40.986283675 +0000 UTC m=+1553.638426086"
Nov 25 15:18:41 crc kubenswrapper[4806]: I1125 15:18:41.145655 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-h9svs"]
Nov 25 15:18:41 crc kubenswrapper[4806]: I1125 15:18:41.147526 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fd9b586ff-h9svs"
Nov 25 15:18:41 crc kubenswrapper[4806]: I1125 15:18:41.181774 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-h9svs"]
Nov 25 15:18:41 crc kubenswrapper[4806]: I1125 15:18:41.241405 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ded52426-67c6-4765-93c7-c193a74862ec-ovsdbserver-nb\") pod \"dnsmasq-dns-5fd9b586ff-h9svs\" (UID: \"ded52426-67c6-4765-93c7-c193a74862ec\") " pod="openstack/dnsmasq-dns-5fd9b586ff-h9svs"
Nov 25 15:18:41 crc kubenswrapper[4806]: I1125 15:18:41.241539 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whmp2\" (UniqueName: \"kubernetes.io/projected/ded52426-67c6-4765-93c7-c193a74862ec-kube-api-access-whmp2\") pod \"dnsmasq-dns-5fd9b586ff-h9svs\" (UID: \"ded52426-67c6-4765-93c7-c193a74862ec\") " pod="openstack/dnsmasq-dns-5fd9b586ff-h9svs"
Nov 25 15:18:41 crc kubenswrapper[4806]: I1125 15:18:41.241618 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ded52426-67c6-4765-93c7-c193a74862ec-config\") pod \"dnsmasq-dns-5fd9b586ff-h9svs\" (UID: \"ded52426-67c6-4765-93c7-c193a74862ec\") " pod="openstack/dnsmasq-dns-5fd9b586ff-h9svs"
Nov 25 15:18:41 crc kubenswrapper[4806]: I1125 15:18:41.241642 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ded52426-67c6-4765-93c7-c193a74862ec-dns-swift-storage-0\") pod \"dnsmasq-dns-5fd9b586ff-h9svs\" (UID: \"ded52426-67c6-4765-93c7-c193a74862ec\") " pod="openstack/dnsmasq-dns-5fd9b586ff-h9svs"
Nov 25 15:18:41 crc kubenswrapper[4806]: I1125 15:18:41.241684 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ded52426-67c6-4765-93c7-c193a74862ec-ovsdbserver-sb\") pod \"dnsmasq-dns-5fd9b586ff-h9svs\" (UID: \"ded52426-67c6-4765-93c7-c193a74862ec\") " pod="openstack/dnsmasq-dns-5fd9b586ff-h9svs"
Nov 25 15:18:41 crc kubenswrapper[4806]: I1125 15:18:41.241737 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ded52426-67c6-4765-93c7-c193a74862ec-dns-svc\") pod \"dnsmasq-dns-5fd9b586ff-h9svs\" (UID: \"ded52426-67c6-4765-93c7-c193a74862ec\") " pod="openstack/dnsmasq-dns-5fd9b586ff-h9svs"
Nov 25 15:18:41 crc kubenswrapper[4806]: I1125 15:18:41.354252 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ded52426-67c6-4765-93c7-c193a74862ec-ovsdbserver-nb\") pod \"dnsmasq-dns-5fd9b586ff-h9svs\" (UID: \"ded52426-67c6-4765-93c7-c193a74862ec\") " pod="openstack/dnsmasq-dns-5fd9b586ff-h9svs"
Nov 25 15:18:41 crc kubenswrapper[4806]: I1125 15:18:41.354334 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whmp2\" (UniqueName: \"kubernetes.io/projected/ded52426-67c6-4765-93c7-c193a74862ec-kube-api-access-whmp2\") pod \"dnsmasq-dns-5fd9b586ff-h9svs\" (UID: \"ded52426-67c6-4765-93c7-c193a74862ec\") " pod="openstack/dnsmasq-dns-5fd9b586ff-h9svs"
Nov 25 15:18:41 crc kubenswrapper[4806]: I1125 15:18:41.354388 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ded52426-67c6-4765-93c7-c193a74862ec-config\") pod \"dnsmasq-dns-5fd9b586ff-h9svs\" (UID: \"ded52426-67c6-4765-93c7-c193a74862ec\") " pod="openstack/dnsmasq-dns-5fd9b586ff-h9svs"
Nov 25 15:18:41 crc kubenswrapper[4806]: I1125 15:18:41.354408 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ded52426-67c6-4765-93c7-c193a74862ec-dns-swift-storage-0\") pod \"dnsmasq-dns-5fd9b586ff-h9svs\" (UID: \"ded52426-67c6-4765-93c7-c193a74862ec\") " pod="openstack/dnsmasq-dns-5fd9b586ff-h9svs"
Nov 25 15:18:41 crc kubenswrapper[4806]: I1125 15:18:41.354438 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ded52426-67c6-4765-93c7-c193a74862ec-ovsdbserver-sb\") pod \"dnsmasq-dns-5fd9b586ff-h9svs\" (UID: \"ded52426-67c6-4765-93c7-c193a74862ec\") " pod="openstack/dnsmasq-dns-5fd9b586ff-h9svs"
Nov 25 15:18:41 crc kubenswrapper[4806]: I1125 15:18:41.354474 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ded52426-67c6-4765-93c7-c193a74862ec-dns-svc\") pod \"dnsmasq-dns-5fd9b586ff-h9svs\" (UID: \"ded52426-67c6-4765-93c7-c193a74862ec\") " pod="openstack/dnsmasq-dns-5fd9b586ff-h9svs"
Nov 25 15:18:41 crc kubenswrapper[4806]: I1125 15:18:41.355180 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ded52426-67c6-4765-93c7-c193a74862ec-ovsdbserver-nb\") pod \"dnsmasq-dns-5fd9b586ff-h9svs\" (UID: \"ded52426-67c6-4765-93c7-c193a74862ec\") " pod="openstack/dnsmasq-dns-5fd9b586ff-h9svs"
Nov 25 15:18:41 crc kubenswrapper[4806]: I1125 15:18:41.355223 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ded52426-67c6-4765-93c7-c193a74862ec-dns-svc\") pod \"dnsmasq-dns-5fd9b586ff-h9svs\" (UID: \"ded52426-67c6-4765-93c7-c193a74862ec\") " pod="openstack/dnsmasq-dns-5fd9b586ff-h9svs"
Nov 25 15:18:41 crc kubenswrapper[4806]: I1125 15:18:41.355749 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ded52426-67c6-4765-93c7-c193a74862ec-config\") pod \"dnsmasq-dns-5fd9b586ff-h9svs\" (UID: \"ded52426-67c6-4765-93c7-c193a74862ec\") " pod="openstack/dnsmasq-dns-5fd9b586ff-h9svs"
Nov 25 15:18:41 crc kubenswrapper[4806]: I1125 15:18:41.358039 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ded52426-67c6-4765-93c7-c193a74862ec-ovsdbserver-sb\") pod \"dnsmasq-dns-5fd9b586ff-h9svs\" (UID: \"ded52426-67c6-4765-93c7-c193a74862ec\") " pod="openstack/dnsmasq-dns-5fd9b586ff-h9svs"
Nov 25 15:18:41 crc kubenswrapper[4806]: I1125 15:18:41.358806 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ded52426-67c6-4765-93c7-c193a74862ec-dns-swift-storage-0\") pod \"dnsmasq-dns-5fd9b586ff-h9svs\" (UID: \"ded52426-67c6-4765-93c7-c193a74862ec\") " pod="openstack/dnsmasq-dns-5fd9b586ff-h9svs"
Nov 25 15:18:41 crc kubenswrapper[4806]: I1125 15:18:41.382777 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whmp2\" (UniqueName: \"kubernetes.io/projected/ded52426-67c6-4765-93c7-c193a74862ec-kube-api-access-whmp2\") pod \"dnsmasq-dns-5fd9b586ff-h9svs\" (UID: \"ded52426-67c6-4765-93c7-c193a74862ec\") " pod="openstack/dnsmasq-dns-5fd9b586ff-h9svs"
Nov 25 15:18:41 crc kubenswrapper[4806]: I1125 15:18:41.524715 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fd9b586ff-h9svs"
Nov 25 15:18:41 crc kubenswrapper[4806]: I1125 15:18:41.958068 4806 generic.go:334] "Generic (PLEG): container finished" podID="67ac4e7e-05a4-4a7d-add3-4ef6314354f3" containerID="3ff8fb5961eae5ffe01d518bff28b6243994fa578ea10ad87bf01c1ee0b0ed5f" exitCode=0
Nov 25 15:18:41 crc kubenswrapper[4806]: I1125 15:18:41.958426 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rhkzb" event={"ID":"67ac4e7e-05a4-4a7d-add3-4ef6314354f3","Type":"ContainerDied","Data":"3ff8fb5961eae5ffe01d518bff28b6243994fa578ea10ad87bf01c1ee0b0ed5f"}
Nov 25 15:18:42 crc kubenswrapper[4806]: I1125 15:18:42.115514 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rhkzb"
Nov 25 15:18:42 crc kubenswrapper[4806]: I1125 15:18:42.181095 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ac4e7e-05a4-4a7d-add3-4ef6314354f3-utilities\") pod \"67ac4e7e-05a4-4a7d-add3-4ef6314354f3\" (UID: \"67ac4e7e-05a4-4a7d-add3-4ef6314354f3\") "
Nov 25 15:18:42 crc kubenswrapper[4806]: I1125 15:18:42.181702 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-br4ts\" (UniqueName: \"kubernetes.io/projected/67ac4e7e-05a4-4a7d-add3-4ef6314354f3-kube-api-access-br4ts\") pod \"67ac4e7e-05a4-4a7d-add3-4ef6314354f3\" (UID: \"67ac4e7e-05a4-4a7d-add3-4ef6314354f3\") "
Nov 25 15:18:42 crc kubenswrapper[4806]: I1125 15:18:42.181883 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67ac4e7e-05a4-4a7d-add3-4ef6314354f3-utilities" (OuterVolumeSpecName: "utilities") pod "67ac4e7e-05a4-4a7d-add3-4ef6314354f3" (UID: "67ac4e7e-05a4-4a7d-add3-4ef6314354f3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 15:18:42 crc kubenswrapper[4806]: I1125 15:18:42.182155 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ac4e7e-05a4-4a7d-add3-4ef6314354f3-catalog-content\") pod \"67ac4e7e-05a4-4a7d-add3-4ef6314354f3\" (UID: \"67ac4e7e-05a4-4a7d-add3-4ef6314354f3\") "
Nov 25 15:18:42 crc kubenswrapper[4806]: I1125 15:18:42.183026 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67ac4e7e-05a4-4a7d-add3-4ef6314354f3-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 15:18:42 crc kubenswrapper[4806]: I1125 15:18:42.205190 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67ac4e7e-05a4-4a7d-add3-4ef6314354f3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67ac4e7e-05a4-4a7d-add3-4ef6314354f3" (UID: "67ac4e7e-05a4-4a7d-add3-4ef6314354f3"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:18:42 crc kubenswrapper[4806]: I1125 15:18:42.206562 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67ac4e7e-05a4-4a7d-add3-4ef6314354f3-kube-api-access-br4ts" (OuterVolumeSpecName: "kube-api-access-br4ts") pod "67ac4e7e-05a4-4a7d-add3-4ef6314354f3" (UID: "67ac4e7e-05a4-4a7d-add3-4ef6314354f3"). InnerVolumeSpecName "kube-api-access-br4ts". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:18:42 crc kubenswrapper[4806]: I1125 15:18:42.262212 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-h9svs"] Nov 25 15:18:42 crc kubenswrapper[4806]: I1125 15:18:42.285156 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67ac4e7e-05a4-4a7d-add3-4ef6314354f3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:42 crc kubenswrapper[4806]: I1125 15:18:42.285187 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-br4ts\" (UniqueName: \"kubernetes.io/projected/67ac4e7e-05a4-4a7d-add3-4ef6314354f3-kube-api-access-br4ts\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:42 crc kubenswrapper[4806]: I1125 15:18:42.971095 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rhkzb" event={"ID":"67ac4e7e-05a4-4a7d-add3-4ef6314354f3","Type":"ContainerDied","Data":"f24b2667f4cad159e6873e5805c292cb8b9616b03c05ef8cd790e336ef16b5a8"} Nov 25 15:18:42 crc kubenswrapper[4806]: I1125 15:18:42.971450 4806 scope.go:117] "RemoveContainer" containerID="3ff8fb5961eae5ffe01d518bff28b6243994fa578ea10ad87bf01c1ee0b0ed5f" Nov 25 15:18:42 crc kubenswrapper[4806]: I1125 15:18:42.971668 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rhkzb" Nov 25 15:18:42 crc kubenswrapper[4806]: I1125 15:18:42.976997 4806 generic.go:334] "Generic (PLEG): container finished" podID="ded52426-67c6-4765-93c7-c193a74862ec" containerID="16879783e7bd6ada607271c4d7827261f99811ee8b1ef9287ac60480176f870e" exitCode=0 Nov 25 15:18:42 crc kubenswrapper[4806]: I1125 15:18:42.977138 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd9b586ff-h9svs" event={"ID":"ded52426-67c6-4765-93c7-c193a74862ec","Type":"ContainerDied","Data":"16879783e7bd6ada607271c4d7827261f99811ee8b1ef9287ac60480176f870e"} Nov 25 15:18:42 crc kubenswrapper[4806]: I1125 15:18:42.977179 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd9b586ff-h9svs" event={"ID":"ded52426-67c6-4765-93c7-c193a74862ec","Type":"ContainerStarted","Data":"3689838166d34c10a26164df99c902672cfdf93560c5457d53a7639ab0dc54d2"} Nov 25 15:18:43 crc kubenswrapper[4806]: I1125 15:18:43.004244 4806 scope.go:117] "RemoveContainer" containerID="9674fe6e6673831ac81acd91d51e3d71291f24d18c0d116fe52c93724936859a" Nov 25 15:18:43 crc kubenswrapper[4806]: I1125 15:18:43.145619 4806 scope.go:117] "RemoveContainer" containerID="96ea2f647d9838dca015dc4b099034e55397d347b90f2676b052eea6df169f49" Nov 25 15:18:43 crc kubenswrapper[4806]: I1125 15:18:43.260179 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rhkzb"] Nov 25 15:18:43 crc kubenswrapper[4806]: I1125 15:18:43.270611 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rhkzb"] Nov 25 15:18:43 crc kubenswrapper[4806]: I1125 15:18:43.786251 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 15:18:44 crc kubenswrapper[4806]: I1125 15:18:44.026169 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd9b586ff-h9svs" event={"ID":"ded52426-67c6-4765-93c7-c193a74862ec","Type":"ContainerStarted","Data":"07d2059aa35663669eea78948442e10ca03fa26719b80bd703de3fabdabed1d6"} Nov 25 15:18:44 crc kubenswrapper[4806]: I1125 15:18:44.026529 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5fd9b586ff-h9svs" Nov 25 15:18:44 crc kubenswrapper[4806]: I1125 15:18:44.037693 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="569f4221-7042-41a7-a783-a975cc7a02b4" containerName="nova-api-log" containerID="cri-o://93f33b3c2563cd455fd2c1dd33d6f04af425be2cf9ad96027c69e55c5b0ae43a" gracePeriod=30 Nov 25 15:18:44 crc kubenswrapper[4806]: I1125 15:18:44.037825 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="569f4221-7042-41a7-a783-a975cc7a02b4" containerName="nova-api-api" containerID="cri-o://7debb5c6cee01da4b76ce376d55b4ebf95eae91063b809de37ca9f19b0c8ee5e" gracePeriod=30 Nov 25 15:18:44 crc kubenswrapper[4806]: I1125 15:18:44.067994 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5fd9b586ff-h9svs" podStartSLOduration=3.067974905 podStartE2EDuration="3.067974905s" podCreationTimestamp="2025-11-25 15:18:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:18:44.065564566 +0000 UTC m=+1556.717706987" watchObservedRunningTime="2025-11-25 15:18:44.067974905 +0000 UTC m=+1556.720117316" 
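
The run above is a complete pod turn-up for dnsmasq-dns-5fd9b586ff-h9svs: SyncLoop ADD from the API, VerifyControllerAttachedVolume and MountVolume.SetUp for each ConfigMap and projected volume, sandbox creation, a short-lived first container that exits 0, the server container start, and finally the startup-latency record (podStartSLOduration=3.067974905). To pull one pod's thread out of a stream like this, a minimal sketch in Python, assuming the journal has been saved to a plain-text file; the filename kubelet.log and the example pod name are placeholders taken from this excerpt:

#!/usr/bin/env python3
# Follow one pod through a kubelet journal stream like the one above.
# A minimal sketch, not a fixed tool: it matches only the two record
# shapes visible in this log (PLEG events and startup-SLO records).
import re
import sys

# Matches e.g.: I1125 15:18:42.977138 4806 kubelet.go:2453] "SyncLoop (PLEG):
#   event for pod" pod="ns/name" event={"ID":"<uid>","Type":"...","Data":"<id>"}
PLEG_RE = re.compile(
    r'I(\d{4} \d{2}:\d{2}:\d{2}\.\d+) +\d+ kubelet\.go:\d+\] '
    r'"SyncLoop \(PLEG\): event for pod" pod="(?P<pod>[^"]+)" '
    r'event={"ID":"(?P<uid>[^"]+)","Type":"(?P<type>[^"]+)","Data":"(?P<data>[^"]+)"}'
)
# Matches e.g.: "Observed pod startup duration" pod="ns/name" podStartSLOduration=3.06...
SLO_RE = re.compile(
    r'"Observed pod startup duration" pod="(?P<pod>[^"]+)" '
    r'podStartSLOduration=(?P<slo>[0-9.]+)'
)

def trace(path: str, pod: str) -> None:
    # Collapse all whitespace first, so entries that were hard-wrapped
    # across physical lines still match.
    text = re.sub(r"\s+", " ", open(path, encoding="utf-8").read())
    for m in PLEG_RE.finditer(text):
        if m.group("pod") == pod:
            print(f'{m.group(1)}  {m.group("type"):<17} {m.group("data")[:13]}')
    for m in SLO_RE.finditer(text):
        if m.group("pod") == pod:
            print(f'startup SLO: {m.group("slo")}s')

if __name__ == "__main__":
    # e.g.: python3 trace_pod.py kubelet.log openstack/dnsmasq-dns-5fd9b586ff-h9svs
    trace(sys.argv[1], sys.argv[2])

Run over this excerpt, it would print the dnsmasq pod's ContainerDied (exit 0), the two ContainerStarted events, and the 3.067974905s startup record.
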
Nov 25 15:18:44 crc kubenswrapper[4806]: I1125 15:18:44.119988 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67ac4e7e-05a4-4a7d-add3-4ef6314354f3" path="/var/lib/kubelet/pods/67ac4e7e-05a4-4a7d-add3-4ef6314354f3/volumes" Nov 25 15:18:44 crc kubenswrapper[4806]: I1125 15:18:44.321491 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 15:18:44 crc kubenswrapper[4806]: I1125 15:18:44.321572 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 15:18:44 crc kubenswrapper[4806]: I1125 15:18:44.387858 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:18:44 crc kubenswrapper[4806]: I1125 15:18:44.556522 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:18:44 crc kubenswrapper[4806]: I1125 15:18:44.556800 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="63a51daa-d61f-4f42-8b31-ff644dfae8c8" containerName="ceilometer-central-agent" containerID="cri-o://d87fe72d04ac12eb528c95e9f55da7a3940b2fb0c86aa7e2187ddc2641a30c3e" gracePeriod=30 Nov 25 15:18:44 crc kubenswrapper[4806]: I1125 15:18:44.557112 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="63a51daa-d61f-4f42-8b31-ff644dfae8c8" containerName="sg-core" containerID="cri-o://a0effdc66f7443fce3e418e6182d0591f4106a576fae59a3790dc2ba73473157" gracePeriod=30 Nov 25 15:18:44 crc kubenswrapper[4806]: I1125 15:18:44.557114 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="63a51daa-d61f-4f42-8b31-ff644dfae8c8" containerName="ceilometer-notification-agent" containerID="cri-o://5f7d63468f6d2598c510f817dcfc899920f65fd6e1cc37483f2ecd55aea0b875" gracePeriod=30 Nov 25 15:18:44 crc kubenswrapper[4806]: I1125 15:18:44.557114 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="63a51daa-d61f-4f42-8b31-ff644dfae8c8" containerName="proxy-httpd" containerID="cri-o://896bfb4ee151cf83987529d7ad7e9283ab12a05e36276a11206bc8f42e274dc2" gracePeriod=30 Nov 25 15:18:45 crc kubenswrapper[4806]: I1125 15:18:45.053498 4806 generic.go:334] "Generic (PLEG): container finished" podID="63a51daa-d61f-4f42-8b31-ff644dfae8c8" containerID="896bfb4ee151cf83987529d7ad7e9283ab12a05e36276a11206bc8f42e274dc2" exitCode=0 Nov 25 15:18:45 crc kubenswrapper[4806]: I1125 15:18:45.053535 4806 generic.go:334] "Generic (PLEG): container finished" podID="63a51daa-d61f-4f42-8b31-ff644dfae8c8" containerID="a0effdc66f7443fce3e418e6182d0591f4106a576fae59a3790dc2ba73473157" exitCode=2 Nov 25 15:18:45 crc kubenswrapper[4806]: I1125 15:18:45.053544 4806 generic.go:334] "Generic (PLEG): container finished" podID="63a51daa-d61f-4f42-8b31-ff644dfae8c8" containerID="d87fe72d04ac12eb528c95e9f55da7a3940b2fb0c86aa7e2187ddc2641a30c3e" exitCode=0 Nov 25 15:18:45 crc kubenswrapper[4806]: I1125 15:18:45.053596 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63a51daa-d61f-4f42-8b31-ff644dfae8c8","Type":"ContainerDied","Data":"896bfb4ee151cf83987529d7ad7e9283ab12a05e36276a11206bc8f42e274dc2"} Nov 25 15:18:45 crc kubenswrapper[4806]: I1125 15:18:45.053626 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"63a51daa-d61f-4f42-8b31-ff644dfae8c8","Type":"ContainerDied","Data":"a0effdc66f7443fce3e418e6182d0591f4106a576fae59a3790dc2ba73473157"} Nov 25 15:18:45 crc kubenswrapper[4806]: I1125 15:18:45.053639 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63a51daa-d61f-4f42-8b31-ff644dfae8c8","Type":"ContainerDied","Data":"d87fe72d04ac12eb528c95e9f55da7a3940b2fb0c86aa7e2187ddc2641a30c3e"} Nov 25 15:18:45 crc kubenswrapper[4806]: I1125 15:18:45.055555 4806 generic.go:334] "Generic (PLEG): container finished" podID="569f4221-7042-41a7-a783-a975cc7a02b4" containerID="93f33b3c2563cd455fd2c1dd33d6f04af425be2cf9ad96027c69e55c5b0ae43a" exitCode=143 Nov 25 15:18:45 crc kubenswrapper[4806]: I1125 15:18:45.056736 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"569f4221-7042-41a7-a783-a975cc7a02b4","Type":"ContainerDied","Data":"93f33b3c2563cd455fd2c1dd33d6f04af425be2cf9ad96027c69e55c5b0ae43a"} Nov 25 15:18:45 crc kubenswrapper[4806]: I1125 15:18:45.555374 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:18:45 crc kubenswrapper[4806]: I1125 15:18:45.669685 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63a51daa-d61f-4f42-8b31-ff644dfae8c8-scripts\") pod \"63a51daa-d61f-4f42-8b31-ff644dfae8c8\" (UID: \"63a51daa-d61f-4f42-8b31-ff644dfae8c8\") " Nov 25 15:18:45 crc kubenswrapper[4806]: I1125 15:18:45.670109 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63a51daa-d61f-4f42-8b31-ff644dfae8c8-config-data\") pod \"63a51daa-d61f-4f42-8b31-ff644dfae8c8\" (UID: \"63a51daa-d61f-4f42-8b31-ff644dfae8c8\") " Nov 25 15:18:45 crc kubenswrapper[4806]: I1125 15:18:45.670278 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/63a51daa-d61f-4f42-8b31-ff644dfae8c8-ceilometer-tls-certs\") pod \"63a51daa-d61f-4f42-8b31-ff644dfae8c8\" (UID: \"63a51daa-d61f-4f42-8b31-ff644dfae8c8\") " Nov 25 15:18:45 crc kubenswrapper[4806]: I1125 15:18:45.670367 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63a51daa-d61f-4f42-8b31-ff644dfae8c8-run-httpd\") pod \"63a51daa-d61f-4f42-8b31-ff644dfae8c8\" (UID: \"63a51daa-d61f-4f42-8b31-ff644dfae8c8\") " Nov 25 15:18:45 crc kubenswrapper[4806]: I1125 15:18:45.670543 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6z7b7\" (UniqueName: \"kubernetes.io/projected/63a51daa-d61f-4f42-8b31-ff644dfae8c8-kube-api-access-6z7b7\") pod \"63a51daa-d61f-4f42-8b31-ff644dfae8c8\" (UID: \"63a51daa-d61f-4f42-8b31-ff644dfae8c8\") " Nov 25 15:18:45 crc kubenswrapper[4806]: I1125 15:18:45.670632 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63a51daa-d61f-4f42-8b31-ff644dfae8c8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "63a51daa-d61f-4f42-8b31-ff644dfae8c8" (UID: "63a51daa-d61f-4f42-8b31-ff644dfae8c8"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:18:45 crc kubenswrapper[4806]: I1125 15:18:45.670797 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63a51daa-d61f-4f42-8b31-ff644dfae8c8-sg-core-conf-yaml\") pod \"63a51daa-d61f-4f42-8b31-ff644dfae8c8\" (UID: \"63a51daa-d61f-4f42-8b31-ff644dfae8c8\") " Nov 25 15:18:45 crc kubenswrapper[4806]: I1125 15:18:45.671238 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63a51daa-d61f-4f42-8b31-ff644dfae8c8-log-httpd\") pod \"63a51daa-d61f-4f42-8b31-ff644dfae8c8\" (UID: \"63a51daa-d61f-4f42-8b31-ff644dfae8c8\") " Nov 25 15:18:45 crc kubenswrapper[4806]: I1125 15:18:45.671350 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a51daa-d61f-4f42-8b31-ff644dfae8c8-combined-ca-bundle\") pod \"63a51daa-d61f-4f42-8b31-ff644dfae8c8\" (UID: \"63a51daa-d61f-4f42-8b31-ff644dfae8c8\") " Nov 25 15:18:45 crc kubenswrapper[4806]: I1125 15:18:45.671536 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63a51daa-d61f-4f42-8b31-ff644dfae8c8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "63a51daa-d61f-4f42-8b31-ff644dfae8c8" (UID: "63a51daa-d61f-4f42-8b31-ff644dfae8c8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:18:45 crc kubenswrapper[4806]: I1125 15:18:45.672103 4806 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63a51daa-d61f-4f42-8b31-ff644dfae8c8-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:45 crc kubenswrapper[4806]: I1125 15:18:45.672199 4806 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/63a51daa-d61f-4f42-8b31-ff644dfae8c8-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:45 crc kubenswrapper[4806]: I1125 15:18:45.675892 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63a51daa-d61f-4f42-8b31-ff644dfae8c8-scripts" (OuterVolumeSpecName: "scripts") pod "63a51daa-d61f-4f42-8b31-ff644dfae8c8" (UID: "63a51daa-d61f-4f42-8b31-ff644dfae8c8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:18:45 crc kubenswrapper[4806]: I1125 15:18:45.675947 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63a51daa-d61f-4f42-8b31-ff644dfae8c8-kube-api-access-6z7b7" (OuterVolumeSpecName: "kube-api-access-6z7b7") pod "63a51daa-d61f-4f42-8b31-ff644dfae8c8" (UID: "63a51daa-d61f-4f42-8b31-ff644dfae8c8"). InnerVolumeSpecName "kube-api-access-6z7b7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:18:45 crc kubenswrapper[4806]: I1125 15:18:45.722665 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63a51daa-d61f-4f42-8b31-ff644dfae8c8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "63a51daa-d61f-4f42-8b31-ff644dfae8c8" (UID: "63a51daa-d61f-4f42-8b31-ff644dfae8c8"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:18:45 crc kubenswrapper[4806]: I1125 15:18:45.743214 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63a51daa-d61f-4f42-8b31-ff644dfae8c8-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "63a51daa-d61f-4f42-8b31-ff644dfae8c8" (UID: "63a51daa-d61f-4f42-8b31-ff644dfae8c8"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:18:45 crc kubenswrapper[4806]: I1125 15:18:45.775486 4806 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/63a51daa-d61f-4f42-8b31-ff644dfae8c8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:45 crc kubenswrapper[4806]: I1125 15:18:45.775738 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63a51daa-d61f-4f42-8b31-ff644dfae8c8-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:45 crc kubenswrapper[4806]: I1125 15:18:45.775747 4806 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/63a51daa-d61f-4f42-8b31-ff644dfae8c8-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:45 crc kubenswrapper[4806]: I1125 15:18:45.775756 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6z7b7\" (UniqueName: \"kubernetes.io/projected/63a51daa-d61f-4f42-8b31-ff644dfae8c8-kube-api-access-6z7b7\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:45 crc kubenswrapper[4806]: I1125 15:18:45.809080 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63a51daa-d61f-4f42-8b31-ff644dfae8c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63a51daa-d61f-4f42-8b31-ff644dfae8c8" (UID: "63a51daa-d61f-4f42-8b31-ff644dfae8c8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:18:45 crc kubenswrapper[4806]: I1125 15:18:45.841564 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63a51daa-d61f-4f42-8b31-ff644dfae8c8-config-data" (OuterVolumeSpecName: "config-data") pod "63a51daa-d61f-4f42-8b31-ff644dfae8c8" (UID: "63a51daa-d61f-4f42-8b31-ff644dfae8c8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:18:45 crc kubenswrapper[4806]: I1125 15:18:45.878407 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a51daa-d61f-4f42-8b31-ff644dfae8c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:45 crc kubenswrapper[4806]: I1125 15:18:45.878460 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63a51daa-d61f-4f42-8b31-ff644dfae8c8-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.067913 4806 generic.go:334] "Generic (PLEG): container finished" podID="63a51daa-d61f-4f42-8b31-ff644dfae8c8" containerID="5f7d63468f6d2598c510f817dcfc899920f65fd6e1cc37483f2ecd55aea0b875" exitCode=0 Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.067956 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63a51daa-d61f-4f42-8b31-ff644dfae8c8","Type":"ContainerDied","Data":"5f7d63468f6d2598c510f817dcfc899920f65fd6e1cc37483f2ecd55aea0b875"} Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.068003 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.068022 4806 scope.go:117] "RemoveContainer" containerID="896bfb4ee151cf83987529d7ad7e9283ab12a05e36276a11206bc8f42e274dc2" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.068008 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"63a51daa-d61f-4f42-8b31-ff644dfae8c8","Type":"ContainerDied","Data":"9725acaa119a12279429c49b1126fc3807e58cfe7fadce25ebfd9fb615f32fe7"} Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.088550 4806 scope.go:117] "RemoveContainer" containerID="a0effdc66f7443fce3e418e6182d0591f4106a576fae59a3790dc2ba73473157" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.110256 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.124541 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.135505 4806 scope.go:117] "RemoveContainer" containerID="5f7d63468f6d2598c510f817dcfc899920f65fd6e1cc37483f2ecd55aea0b875" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.148529 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:18:46 crc kubenswrapper[4806]: E1125 15:18:46.149120 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67ac4e7e-05a4-4a7d-add3-4ef6314354f3" containerName="registry-server" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.149143 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="67ac4e7e-05a4-4a7d-add3-4ef6314354f3" containerName="registry-server" Nov 25 15:18:46 crc kubenswrapper[4806]: E1125 15:18:46.149165 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63a51daa-d61f-4f42-8b31-ff644dfae8c8" containerName="ceilometer-central-agent" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.149173 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="63a51daa-d61f-4f42-8b31-ff644dfae8c8" containerName="ceilometer-central-agent" Nov 25 15:18:46 crc kubenswrapper[4806]: E1125 15:18:46.149190 4806 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="63a51daa-d61f-4f42-8b31-ff644dfae8c8" containerName="sg-core" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.149197 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="63a51daa-d61f-4f42-8b31-ff644dfae8c8" containerName="sg-core" Nov 25 15:18:46 crc kubenswrapper[4806]: E1125 15:18:46.149214 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63a51daa-d61f-4f42-8b31-ff644dfae8c8" containerName="proxy-httpd" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.149221 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="63a51daa-d61f-4f42-8b31-ff644dfae8c8" containerName="proxy-httpd" Nov 25 15:18:46 crc kubenswrapper[4806]: E1125 15:18:46.149233 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67ac4e7e-05a4-4a7d-add3-4ef6314354f3" containerName="extract-content" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.149241 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="67ac4e7e-05a4-4a7d-add3-4ef6314354f3" containerName="extract-content" Nov 25 15:18:46 crc kubenswrapper[4806]: E1125 15:18:46.149265 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63a51daa-d61f-4f42-8b31-ff644dfae8c8" containerName="ceilometer-notification-agent" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.149272 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="63a51daa-d61f-4f42-8b31-ff644dfae8c8" containerName="ceilometer-notification-agent" Nov 25 15:18:46 crc kubenswrapper[4806]: E1125 15:18:46.149290 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67ac4e7e-05a4-4a7d-add3-4ef6314354f3" containerName="extract-utilities" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.149297 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="67ac4e7e-05a4-4a7d-add3-4ef6314354f3" containerName="extract-utilities" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.149587 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="63a51daa-d61f-4f42-8b31-ff644dfae8c8" containerName="ceilometer-notification-agent" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.149611 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="63a51daa-d61f-4f42-8b31-ff644dfae8c8" containerName="sg-core" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.149627 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="63a51daa-d61f-4f42-8b31-ff644dfae8c8" containerName="proxy-httpd" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.149639 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="63a51daa-d61f-4f42-8b31-ff644dfae8c8" containerName="ceilometer-central-agent" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.149655 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="67ac4e7e-05a4-4a7d-add3-4ef6314354f3" containerName="registry-server" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.157235 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.161212 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.161458 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.166423 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.172876 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.175497 4806 scope.go:117] "RemoveContainer" containerID="d87fe72d04ac12eb528c95e9f55da7a3940b2fb0c86aa7e2187ddc2641a30c3e" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.185987 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2131abe6-d84b-4035-b318-f0e7046941fa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2131abe6-d84b-4035-b318-f0e7046941fa\") " pod="openstack/ceilometer-0" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.186175 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2131abe6-d84b-4035-b318-f0e7046941fa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2131abe6-d84b-4035-b318-f0e7046941fa\") " pod="openstack/ceilometer-0" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.186204 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2131abe6-d84b-4035-b318-f0e7046941fa-config-data\") pod \"ceilometer-0\" (UID: \"2131abe6-d84b-4035-b318-f0e7046941fa\") " pod="openstack/ceilometer-0" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.186222 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2131abe6-d84b-4035-b318-f0e7046941fa-log-httpd\") pod \"ceilometer-0\" (UID: \"2131abe6-d84b-4035-b318-f0e7046941fa\") " pod="openstack/ceilometer-0" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.186238 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2131abe6-d84b-4035-b318-f0e7046941fa-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2131abe6-d84b-4035-b318-f0e7046941fa\") " pod="openstack/ceilometer-0" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.186258 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhmj5\" (UniqueName: \"kubernetes.io/projected/2131abe6-d84b-4035-b318-f0e7046941fa-kube-api-access-lhmj5\") pod \"ceilometer-0\" (UID: \"2131abe6-d84b-4035-b318-f0e7046941fa\") " pod="openstack/ceilometer-0" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.186308 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2131abe6-d84b-4035-b318-f0e7046941fa-run-httpd\") pod \"ceilometer-0\" (UID: \"2131abe6-d84b-4035-b318-f0e7046941fa\") " pod="openstack/ceilometer-0" Nov 25 15:18:46 crc 
kubenswrapper[4806]: I1125 15:18:46.186369 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2131abe6-d84b-4035-b318-f0e7046941fa-scripts\") pod \"ceilometer-0\" (UID: \"2131abe6-d84b-4035-b318-f0e7046941fa\") " pod="openstack/ceilometer-0" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.207772 4806 scope.go:117] "RemoveContainer" containerID="896bfb4ee151cf83987529d7ad7e9283ab12a05e36276a11206bc8f42e274dc2" Nov 25 15:18:46 crc kubenswrapper[4806]: E1125 15:18:46.208248 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"896bfb4ee151cf83987529d7ad7e9283ab12a05e36276a11206bc8f42e274dc2\": container with ID starting with 896bfb4ee151cf83987529d7ad7e9283ab12a05e36276a11206bc8f42e274dc2 not found: ID does not exist" containerID="896bfb4ee151cf83987529d7ad7e9283ab12a05e36276a11206bc8f42e274dc2" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.208352 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"896bfb4ee151cf83987529d7ad7e9283ab12a05e36276a11206bc8f42e274dc2"} err="failed to get container status \"896bfb4ee151cf83987529d7ad7e9283ab12a05e36276a11206bc8f42e274dc2\": rpc error: code = NotFound desc = could not find container \"896bfb4ee151cf83987529d7ad7e9283ab12a05e36276a11206bc8f42e274dc2\": container with ID starting with 896bfb4ee151cf83987529d7ad7e9283ab12a05e36276a11206bc8f42e274dc2 not found: ID does not exist" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.208386 4806 scope.go:117] "RemoveContainer" containerID="a0effdc66f7443fce3e418e6182d0591f4106a576fae59a3790dc2ba73473157" Nov 25 15:18:46 crc kubenswrapper[4806]: E1125 15:18:46.208608 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0effdc66f7443fce3e418e6182d0591f4106a576fae59a3790dc2ba73473157\": container with ID starting with a0effdc66f7443fce3e418e6182d0591f4106a576fae59a3790dc2ba73473157 not found: ID does not exist" containerID="a0effdc66f7443fce3e418e6182d0591f4106a576fae59a3790dc2ba73473157" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.208630 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0effdc66f7443fce3e418e6182d0591f4106a576fae59a3790dc2ba73473157"} err="failed to get container status \"a0effdc66f7443fce3e418e6182d0591f4106a576fae59a3790dc2ba73473157\": rpc error: code = NotFound desc = could not find container \"a0effdc66f7443fce3e418e6182d0591f4106a576fae59a3790dc2ba73473157\": container with ID starting with a0effdc66f7443fce3e418e6182d0591f4106a576fae59a3790dc2ba73473157 not found: ID does not exist" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.208642 4806 scope.go:117] "RemoveContainer" containerID="5f7d63468f6d2598c510f817dcfc899920f65fd6e1cc37483f2ecd55aea0b875" Nov 25 15:18:46 crc kubenswrapper[4806]: E1125 15:18:46.209111 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f7d63468f6d2598c510f817dcfc899920f65fd6e1cc37483f2ecd55aea0b875\": container with ID starting with 5f7d63468f6d2598c510f817dcfc899920f65fd6e1cc37483f2ecd55aea0b875 not found: ID does not exist" containerID="5f7d63468f6d2598c510f817dcfc899920f65fd6e1cc37483f2ecd55aea0b875" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.209131 4806 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f7d63468f6d2598c510f817dcfc899920f65fd6e1cc37483f2ecd55aea0b875"} err="failed to get container status \"5f7d63468f6d2598c510f817dcfc899920f65fd6e1cc37483f2ecd55aea0b875\": rpc error: code = NotFound desc = could not find container \"5f7d63468f6d2598c510f817dcfc899920f65fd6e1cc37483f2ecd55aea0b875\": container with ID starting with 5f7d63468f6d2598c510f817dcfc899920f65fd6e1cc37483f2ecd55aea0b875 not found: ID does not exist" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.209143 4806 scope.go:117] "RemoveContainer" containerID="d87fe72d04ac12eb528c95e9f55da7a3940b2fb0c86aa7e2187ddc2641a30c3e" Nov 25 15:18:46 crc kubenswrapper[4806]: E1125 15:18:46.209497 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d87fe72d04ac12eb528c95e9f55da7a3940b2fb0c86aa7e2187ddc2641a30c3e\": container with ID starting with d87fe72d04ac12eb528c95e9f55da7a3940b2fb0c86aa7e2187ddc2641a30c3e not found: ID does not exist" containerID="d87fe72d04ac12eb528c95e9f55da7a3940b2fb0c86aa7e2187ddc2641a30c3e" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.209540 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d87fe72d04ac12eb528c95e9f55da7a3940b2fb0c86aa7e2187ddc2641a30c3e"} err="failed to get container status \"d87fe72d04ac12eb528c95e9f55da7a3940b2fb0c86aa7e2187ddc2641a30c3e\": rpc error: code = NotFound desc = could not find container \"d87fe72d04ac12eb528c95e9f55da7a3940b2fb0c86aa7e2187ddc2641a30c3e\": container with ID starting with d87fe72d04ac12eb528c95e9f55da7a3940b2fb0c86aa7e2187ddc2641a30c3e not found: ID does not exist" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.288396 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2131abe6-d84b-4035-b318-f0e7046941fa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2131abe6-d84b-4035-b318-f0e7046941fa\") " pod="openstack/ceilometer-0" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.288569 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2131abe6-d84b-4035-b318-f0e7046941fa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2131abe6-d84b-4035-b318-f0e7046941fa\") " pod="openstack/ceilometer-0" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.288597 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2131abe6-d84b-4035-b318-f0e7046941fa-config-data\") pod \"ceilometer-0\" (UID: \"2131abe6-d84b-4035-b318-f0e7046941fa\") " pod="openstack/ceilometer-0" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.288624 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2131abe6-d84b-4035-b318-f0e7046941fa-log-httpd\") pod \"ceilometer-0\" (UID: \"2131abe6-d84b-4035-b318-f0e7046941fa\") " pod="openstack/ceilometer-0" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.288646 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2131abe6-d84b-4035-b318-f0e7046941fa-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2131abe6-d84b-4035-b318-f0e7046941fa\") " pod="openstack/ceilometer-0" Nov 25 15:18:46 crc 
kubenswrapper[4806]: I1125 15:18:46.288674 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhmj5\" (UniqueName: \"kubernetes.io/projected/2131abe6-d84b-4035-b318-f0e7046941fa-kube-api-access-lhmj5\") pod \"ceilometer-0\" (UID: \"2131abe6-d84b-4035-b318-f0e7046941fa\") " pod="openstack/ceilometer-0" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.289181 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2131abe6-d84b-4035-b318-f0e7046941fa-log-httpd\") pod \"ceilometer-0\" (UID: \"2131abe6-d84b-4035-b318-f0e7046941fa\") " pod="openstack/ceilometer-0" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.289489 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2131abe6-d84b-4035-b318-f0e7046941fa-run-httpd\") pod \"ceilometer-0\" (UID: \"2131abe6-d84b-4035-b318-f0e7046941fa\") " pod="openstack/ceilometer-0" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.289634 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2131abe6-d84b-4035-b318-f0e7046941fa-scripts\") pod \"ceilometer-0\" (UID: \"2131abe6-d84b-4035-b318-f0e7046941fa\") " pod="openstack/ceilometer-0" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.291876 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2131abe6-d84b-4035-b318-f0e7046941fa-run-httpd\") pod \"ceilometer-0\" (UID: \"2131abe6-d84b-4035-b318-f0e7046941fa\") " pod="openstack/ceilometer-0" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.293359 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2131abe6-d84b-4035-b318-f0e7046941fa-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2131abe6-d84b-4035-b318-f0e7046941fa\") " pod="openstack/ceilometer-0" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.293485 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2131abe6-d84b-4035-b318-f0e7046941fa-config-data\") pod \"ceilometer-0\" (UID: \"2131abe6-d84b-4035-b318-f0e7046941fa\") " pod="openstack/ceilometer-0" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.293636 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2131abe6-d84b-4035-b318-f0e7046941fa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2131abe6-d84b-4035-b318-f0e7046941fa\") " pod="openstack/ceilometer-0" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.295367 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2131abe6-d84b-4035-b318-f0e7046941fa-scripts\") pod \"ceilometer-0\" (UID: \"2131abe6-d84b-4035-b318-f0e7046941fa\") " pod="openstack/ceilometer-0" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.308219 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2131abe6-d84b-4035-b318-f0e7046941fa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2131abe6-d84b-4035-b318-f0e7046941fa\") " pod="openstack/ceilometer-0" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.308819 4806 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lhmj5\" (UniqueName: \"kubernetes.io/projected/2131abe6-d84b-4035-b318-f0e7046941fa-kube-api-access-lhmj5\") pod \"ceilometer-0\" (UID: \"2131abe6-d84b-4035-b318-f0e7046941fa\") " pod="openstack/ceilometer-0" Nov 25 15:18:46 crc kubenswrapper[4806]: I1125 15:18:46.482338 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:18:47 crc kubenswrapper[4806]: I1125 15:18:47.034926 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:18:47 crc kubenswrapper[4806]: I1125 15:18:47.148367 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:47.999129 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.034542 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/569f4221-7042-41a7-a783-a975cc7a02b4-logs\") pod \"569f4221-7042-41a7-a783-a975cc7a02b4\" (UID: \"569f4221-7042-41a7-a783-a975cc7a02b4\") " Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.035162 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/569f4221-7042-41a7-a783-a975cc7a02b4-logs" (OuterVolumeSpecName: "logs") pod "569f4221-7042-41a7-a783-a975cc7a02b4" (UID: "569f4221-7042-41a7-a783-a975cc7a02b4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.035572 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/569f4221-7042-41a7-a783-a975cc7a02b4-combined-ca-bundle\") pod \"569f4221-7042-41a7-a783-a975cc7a02b4\" (UID: \"569f4221-7042-41a7-a783-a975cc7a02b4\") " Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.038430 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/569f4221-7042-41a7-a783-a975cc7a02b4-config-data\") pod \"569f4221-7042-41a7-a783-a975cc7a02b4\" (UID: \"569f4221-7042-41a7-a783-a975cc7a02b4\") " Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.038714 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6kbh\" (UniqueName: \"kubernetes.io/projected/569f4221-7042-41a7-a783-a975cc7a02b4-kube-api-access-j6kbh\") pod \"569f4221-7042-41a7-a783-a975cc7a02b4\" (UID: \"569f4221-7042-41a7-a783-a975cc7a02b4\") " Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.039816 4806 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/569f4221-7042-41a7-a783-a975cc7a02b4-logs\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.045736 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/569f4221-7042-41a7-a783-a975cc7a02b4-kube-api-access-j6kbh" (OuterVolumeSpecName: "kube-api-access-j6kbh") pod "569f4221-7042-41a7-a783-a975cc7a02b4" (UID: "569f4221-7042-41a7-a783-a975cc7a02b4"). InnerVolumeSpecName "kube-api-access-j6kbh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.072501 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/569f4221-7042-41a7-a783-a975cc7a02b4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "569f4221-7042-41a7-a783-a975cc7a02b4" (UID: "569f4221-7042-41a7-a783-a975cc7a02b4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.086169 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/569f4221-7042-41a7-a783-a975cc7a02b4-config-data" (OuterVolumeSpecName: "config-data") pod "569f4221-7042-41a7-a783-a975cc7a02b4" (UID: "569f4221-7042-41a7-a783-a975cc7a02b4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.112254 4806 generic.go:334] "Generic (PLEG): container finished" podID="569f4221-7042-41a7-a783-a975cc7a02b4" containerID="7debb5c6cee01da4b76ce376d55b4ebf95eae91063b809de37ca9f19b0c8ee5e" exitCode=0 Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.112392 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.114740 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63a51daa-d61f-4f42-8b31-ff644dfae8c8" path="/var/lib/kubelet/pods/63a51daa-d61f-4f42-8b31-ff644dfae8c8/volumes" Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.116112 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2131abe6-d84b-4035-b318-f0e7046941fa","Type":"ContainerStarted","Data":"087c6c36e4645e05188b6bbbaea269fed43d5049c0ccdf0946c143d04c9bdf57"} Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.129515 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2131abe6-d84b-4035-b318-f0e7046941fa","Type":"ContainerStarted","Data":"51a6512fc770460fd942e80f523ec5269a121670bcca6cc44aebee3a149903c2"} Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.129624 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"569f4221-7042-41a7-a783-a975cc7a02b4","Type":"ContainerDied","Data":"7debb5c6cee01da4b76ce376d55b4ebf95eae91063b809de37ca9f19b0c8ee5e"} Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.129649 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"569f4221-7042-41a7-a783-a975cc7a02b4","Type":"ContainerDied","Data":"af2964fbeaf946799f9491e52fc473e208823abd5aff3d4b18a2d403a4d8bb59"} Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.129668 4806 scope.go:117] "RemoveContainer" containerID="7debb5c6cee01da4b76ce376d55b4ebf95eae91063b809de37ca9f19b0c8ee5e" Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.142689 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/569f4221-7042-41a7-a783-a975cc7a02b4-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.142716 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6kbh\" (UniqueName: \"kubernetes.io/projected/569f4221-7042-41a7-a783-a975cc7a02b4-kube-api-access-j6kbh\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:48 crc 
kubenswrapper[4806]: I1125 15:18:48.142727 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/569f4221-7042-41a7-a783-a975cc7a02b4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.169492 4806 scope.go:117] "RemoveContainer" containerID="93f33b3c2563cd455fd2c1dd33d6f04af425be2cf9ad96027c69e55c5b0ae43a" Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.191394 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.214173 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.215523 4806 scope.go:117] "RemoveContainer" containerID="7debb5c6cee01da4b76ce376d55b4ebf95eae91063b809de37ca9f19b0c8ee5e" Nov 25 15:18:48 crc kubenswrapper[4806]: E1125 15:18:48.219513 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7debb5c6cee01da4b76ce376d55b4ebf95eae91063b809de37ca9f19b0c8ee5e\": container with ID starting with 7debb5c6cee01da4b76ce376d55b4ebf95eae91063b809de37ca9f19b0c8ee5e not found: ID does not exist" containerID="7debb5c6cee01da4b76ce376d55b4ebf95eae91063b809de37ca9f19b0c8ee5e" Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.219569 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7debb5c6cee01da4b76ce376d55b4ebf95eae91063b809de37ca9f19b0c8ee5e"} err="failed to get container status \"7debb5c6cee01da4b76ce376d55b4ebf95eae91063b809de37ca9f19b0c8ee5e\": rpc error: code = NotFound desc = could not find container \"7debb5c6cee01da4b76ce376d55b4ebf95eae91063b809de37ca9f19b0c8ee5e\": container with ID starting with 7debb5c6cee01da4b76ce376d55b4ebf95eae91063b809de37ca9f19b0c8ee5e not found: ID does not exist" Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.219601 4806 scope.go:117] "RemoveContainer" containerID="93f33b3c2563cd455fd2c1dd33d6f04af425be2cf9ad96027c69e55c5b0ae43a" Nov 25 15:18:48 crc kubenswrapper[4806]: E1125 15:18:48.220068 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93f33b3c2563cd455fd2c1dd33d6f04af425be2cf9ad96027c69e55c5b0ae43a\": container with ID starting with 93f33b3c2563cd455fd2c1dd33d6f04af425be2cf9ad96027c69e55c5b0ae43a not found: ID does not exist" containerID="93f33b3c2563cd455fd2c1dd33d6f04af425be2cf9ad96027c69e55c5b0ae43a" Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.220098 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93f33b3c2563cd455fd2c1dd33d6f04af425be2cf9ad96027c69e55c5b0ae43a"} err="failed to get container status \"93f33b3c2563cd455fd2c1dd33d6f04af425be2cf9ad96027c69e55c5b0ae43a\": rpc error: code = NotFound desc = could not find container \"93f33b3c2563cd455fd2c1dd33d6f04af425be2cf9ad96027c69e55c5b0ae43a\": container with ID starting with 93f33b3c2563cd455fd2c1dd33d6f04af425be2cf9ad96027c69e55c5b0ae43a not found: ID does not exist" Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.229490 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 25 15:18:48 crc kubenswrapper[4806]: E1125 15:18:48.229970 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="569f4221-7042-41a7-a783-a975cc7a02b4" containerName="nova-api-log"
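
The paired "RemoveContainer" / "ContainerStatus from runtime service failed" (NotFound) entries just above are the usual benign race on teardown: the container had already been pruned by the runtime, so the follow-up status lookup has nothing left to find. The exit codes recorded in the PLEG events above follow the standard 128+signal convention: ceilometer's sg-core exited 2 (application error), nova-api-api exited 0 (clean shutdown), and nova-api-log exited 143, i.e. 128+SIGTERM(15). A small decoder in the same sketch style (stdlib only; the helper name is illustrative):

import signal

def describe_exit_code(code: int) -> str:
    # Decode container exit codes as they appear in the PLEG events above.
    if code == 0:
        return "clean exit"
    if code > 128:  # 128 + N means the process was terminated by signal N
        return f"killed by {signal.Signals(code - 128).name}"
    return f"application error (exit status {code})"

# nova-api-log ended with exitCode=143 = 128 + 15: it honored the kubelet's
# grace-period SIGTERM (gracePeriod=30), so no SIGKILL (137) was needed.
assert describe_exit_code(143) == "killed by SIGTERM"
assert describe_exit_code(137) == "killed by SIGKILL"
assert describe_exit_code(2) == "application error (exit status 2)"
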
Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.229988 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="569f4221-7042-41a7-a783-a975cc7a02b4" containerName="nova-api-log" Nov 25 15:18:48 crc kubenswrapper[4806]: E1125 15:18:48.230013 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="569f4221-7042-41a7-a783-a975cc7a02b4" containerName="nova-api-api" Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.230020 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="569f4221-7042-41a7-a783-a975cc7a02b4" containerName="nova-api-api" Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.230227 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="569f4221-7042-41a7-a783-a975cc7a02b4" containerName="nova-api-log" Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.230242 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="569f4221-7042-41a7-a783-a975cc7a02b4" containerName="nova-api-api" Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.231558 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.236606 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.237003 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.237113 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.240939 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.244535 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c130d7c1-8c6c-4b0d-b172-64872647a752-public-tls-certs\") pod \"nova-api-0\" (UID: \"c130d7c1-8c6c-4b0d-b172-64872647a752\") " pod="openstack/nova-api-0" Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.244586 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c130d7c1-8c6c-4b0d-b172-64872647a752-config-data\") pod \"nova-api-0\" (UID: \"c130d7c1-8c6c-4b0d-b172-64872647a752\") " pod="openstack/nova-api-0" Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.244647 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c130d7c1-8c6c-4b0d-b172-64872647a752-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c130d7c1-8c6c-4b0d-b172-64872647a752\") " pod="openstack/nova-api-0" Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.244713 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c130d7c1-8c6c-4b0d-b172-64872647a752-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c130d7c1-8c6c-4b0d-b172-64872647a752\") " pod="openstack/nova-api-0" Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.244762 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98j75\" (UniqueName:
\"kubernetes.io/projected/c130d7c1-8c6c-4b0d-b172-64872647a752-kube-api-access-98j75\") pod \"nova-api-0\" (UID: \"c130d7c1-8c6c-4b0d-b172-64872647a752\") " pod="openstack/nova-api-0" Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.244875 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c130d7c1-8c6c-4b0d-b172-64872647a752-logs\") pod \"nova-api-0\" (UID: \"c130d7c1-8c6c-4b0d-b172-64872647a752\") " pod="openstack/nova-api-0" Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.346727 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c130d7c1-8c6c-4b0d-b172-64872647a752-public-tls-certs\") pod \"nova-api-0\" (UID: \"c130d7c1-8c6c-4b0d-b172-64872647a752\") " pod="openstack/nova-api-0" Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.346775 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c130d7c1-8c6c-4b0d-b172-64872647a752-config-data\") pod \"nova-api-0\" (UID: \"c130d7c1-8c6c-4b0d-b172-64872647a752\") " pod="openstack/nova-api-0" Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.346815 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c130d7c1-8c6c-4b0d-b172-64872647a752-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c130d7c1-8c6c-4b0d-b172-64872647a752\") " pod="openstack/nova-api-0" Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.346865 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c130d7c1-8c6c-4b0d-b172-64872647a752-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c130d7c1-8c6c-4b0d-b172-64872647a752\") " pod="openstack/nova-api-0" Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.346896 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98j75\" (UniqueName: \"kubernetes.io/projected/c130d7c1-8c6c-4b0d-b172-64872647a752-kube-api-access-98j75\") pod \"nova-api-0\" (UID: \"c130d7c1-8c6c-4b0d-b172-64872647a752\") " pod="openstack/nova-api-0" Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.346967 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c130d7c1-8c6c-4b0d-b172-64872647a752-logs\") pod \"nova-api-0\" (UID: \"c130d7c1-8c6c-4b0d-b172-64872647a752\") " pod="openstack/nova-api-0" Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.348072 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c130d7c1-8c6c-4b0d-b172-64872647a752-logs\") pod \"nova-api-0\" (UID: \"c130d7c1-8c6c-4b0d-b172-64872647a752\") " pod="openstack/nova-api-0" Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.351697 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c130d7c1-8c6c-4b0d-b172-64872647a752-public-tls-certs\") pod \"nova-api-0\" (UID: \"c130d7c1-8c6c-4b0d-b172-64872647a752\") " pod="openstack/nova-api-0" Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.352175 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c130d7c1-8c6c-4b0d-b172-64872647a752-config-data\") pod \"nova-api-0\" (UID: \"c130d7c1-8c6c-4b0d-b172-64872647a752\") " pod="openstack/nova-api-0" Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.352809 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c130d7c1-8c6c-4b0d-b172-64872647a752-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c130d7c1-8c6c-4b0d-b172-64872647a752\") " pod="openstack/nova-api-0" Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.357024 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c130d7c1-8c6c-4b0d-b172-64872647a752-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c130d7c1-8c6c-4b0d-b172-64872647a752\") " pod="openstack/nova-api-0" Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.365275 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98j75\" (UniqueName: \"kubernetes.io/projected/c130d7c1-8c6c-4b0d-b172-64872647a752-kube-api-access-98j75\") pod \"nova-api-0\" (UID: \"c130d7c1-8c6c-4b0d-b172-64872647a752\") " pod="openstack/nova-api-0" Nov 25 15:18:48 crc kubenswrapper[4806]: I1125 15:18:48.559485 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 15:18:49 crc kubenswrapper[4806]: I1125 15:18:49.102376 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 15:18:49 crc kubenswrapper[4806]: I1125 15:18:49.321852 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 25 15:18:49 crc kubenswrapper[4806]: I1125 15:18:49.322682 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 25 15:18:49 crc kubenswrapper[4806]: I1125 15:18:49.388502 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:18:49 crc kubenswrapper[4806]: I1125 15:18:49.638224 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:18:50 crc kubenswrapper[4806]: I1125 15:18:50.103934 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="569f4221-7042-41a7-a783-a975cc7a02b4" path="/var/lib/kubelet/pods/569f4221-7042-41a7-a783-a975cc7a02b4/volumes" Nov 25 15:18:50 crc kubenswrapper[4806]: I1125 15:18:50.152601 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2131abe6-d84b-4035-b318-f0e7046941fa","Type":"ContainerStarted","Data":"db9123270daeb5ea714a4b4f73cd88b9ac115eac8409701ae7916133002af08f"} Nov 25 15:18:50 crc kubenswrapper[4806]: I1125 15:18:50.157288 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c130d7c1-8c6c-4b0d-b172-64872647a752","Type":"ContainerStarted","Data":"45a2a8e32b919179e883c72953745cf7f7bcefe782197cd627a8ee03c4adcd0a"} Nov 25 15:18:50 crc kubenswrapper[4806]: I1125 15:18:50.157355 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c130d7c1-8c6c-4b0d-b172-64872647a752","Type":"ContainerStarted","Data":"9787488cf5c018f56497f859a403895b4215b36c55c9c8719741ecef975df8ca"} Nov 25 15:18:50 crc kubenswrapper[4806]: I1125 15:18:50.157373 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"c130d7c1-8c6c-4b0d-b172-64872647a752","Type":"ContainerStarted","Data":"bb1c94497463bb7ca0d4b0564a3137fbb8010f4e9035b192a44f52718ac5f321"} Nov 25 15:18:50 crc kubenswrapper[4806]: I1125 15:18:50.172963 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:18:50 crc kubenswrapper[4806]: I1125 15:18:50.183628 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.183609905 podStartE2EDuration="2.183609905s" podCreationTimestamp="2025-11-25 15:18:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:18:50.173973159 +0000 UTC m=+1562.826115580" watchObservedRunningTime="2025-11-25 15:18:50.183609905 +0000 UTC m=+1562.835752326" Nov 25 15:18:50 crc kubenswrapper[4806]: I1125 15:18:50.343747 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="440a9ff6-14b2-4205-bdd4-4e4861d236a9" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.220:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 15:18:50 crc kubenswrapper[4806]: I1125 15:18:50.344167 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="440a9ff6-14b2-4205-bdd4-4e4861d236a9" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.220:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 15:18:50 crc kubenswrapper[4806]: I1125 15:18:50.347673 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-z7hfp"] Nov 25 15:18:50 crc kubenswrapper[4806]: I1125 15:18:50.349598 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-z7hfp" Nov 25 15:18:50 crc kubenswrapper[4806]: I1125 15:18:50.352823 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 25 15:18:50 crc kubenswrapper[4806]: I1125 15:18:50.353091 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 25 15:18:50 crc kubenswrapper[4806]: I1125 15:18:50.357293 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-z7hfp"] Nov 25 15:18:50 crc kubenswrapper[4806]: I1125 15:18:50.537459 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8c067c8-89e6-4c27-b894-09ea261d2033-config-data\") pod \"nova-cell1-cell-mapping-z7hfp\" (UID: \"b8c067c8-89e6-4c27-b894-09ea261d2033\") " pod="openstack/nova-cell1-cell-mapping-z7hfp" Nov 25 15:18:50 crc kubenswrapper[4806]: I1125 15:18:50.537503 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8c067c8-89e6-4c27-b894-09ea261d2033-scripts\") pod \"nova-cell1-cell-mapping-z7hfp\" (UID: \"b8c067c8-89e6-4c27-b894-09ea261d2033\") " pod="openstack/nova-cell1-cell-mapping-z7hfp" Nov 25 15:18:50 crc kubenswrapper[4806]: I1125 15:18:50.537594 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g99fn\" (UniqueName: \"kubernetes.io/projected/b8c067c8-89e6-4c27-b894-09ea261d2033-kube-api-access-g99fn\") pod \"nova-cell1-cell-mapping-z7hfp\" (UID: \"b8c067c8-89e6-4c27-b894-09ea261d2033\") " pod="openstack/nova-cell1-cell-mapping-z7hfp" Nov 25 15:18:50 crc kubenswrapper[4806]: I1125 15:18:50.537623 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8c067c8-89e6-4c27-b894-09ea261d2033-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-z7hfp\" (UID: \"b8c067c8-89e6-4c27-b894-09ea261d2033\") " pod="openstack/nova-cell1-cell-mapping-z7hfp" Nov 25 15:18:50 crc kubenswrapper[4806]: I1125 15:18:50.639822 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g99fn\" (UniqueName: \"kubernetes.io/projected/b8c067c8-89e6-4c27-b894-09ea261d2033-kube-api-access-g99fn\") pod \"nova-cell1-cell-mapping-z7hfp\" (UID: \"b8c067c8-89e6-4c27-b894-09ea261d2033\") " pod="openstack/nova-cell1-cell-mapping-z7hfp" Nov 25 15:18:50 crc kubenswrapper[4806]: I1125 15:18:50.639894 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8c067c8-89e6-4c27-b894-09ea261d2033-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-z7hfp\" (UID: \"b8c067c8-89e6-4c27-b894-09ea261d2033\") " pod="openstack/nova-cell1-cell-mapping-z7hfp" Nov 25 15:18:50 crc kubenswrapper[4806]: I1125 15:18:50.639992 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8c067c8-89e6-4c27-b894-09ea261d2033-config-data\") pod \"nova-cell1-cell-mapping-z7hfp\" (UID: \"b8c067c8-89e6-4c27-b894-09ea261d2033\") " pod="openstack/nova-cell1-cell-mapping-z7hfp" Nov 25 15:18:50 crc kubenswrapper[4806]: I1125 15:18:50.640012 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/b8c067c8-89e6-4c27-b894-09ea261d2033-scripts\") pod \"nova-cell1-cell-mapping-z7hfp\" (UID: \"b8c067c8-89e6-4c27-b894-09ea261d2033\") " pod="openstack/nova-cell1-cell-mapping-z7hfp" Nov 25 15:18:50 crc kubenswrapper[4806]: I1125 15:18:50.645794 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8c067c8-89e6-4c27-b894-09ea261d2033-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-z7hfp\" (UID: \"b8c067c8-89e6-4c27-b894-09ea261d2033\") " pod="openstack/nova-cell1-cell-mapping-z7hfp" Nov 25 15:18:50 crc kubenswrapper[4806]: I1125 15:18:50.646812 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8c067c8-89e6-4c27-b894-09ea261d2033-scripts\") pod \"nova-cell1-cell-mapping-z7hfp\" (UID: \"b8c067c8-89e6-4c27-b894-09ea261d2033\") " pod="openstack/nova-cell1-cell-mapping-z7hfp" Nov 25 15:18:50 crc kubenswrapper[4806]: I1125 15:18:50.657255 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8c067c8-89e6-4c27-b894-09ea261d2033-config-data\") pod \"nova-cell1-cell-mapping-z7hfp\" (UID: \"b8c067c8-89e6-4c27-b894-09ea261d2033\") " pod="openstack/nova-cell1-cell-mapping-z7hfp" Nov 25 15:18:50 crc kubenswrapper[4806]: I1125 15:18:50.658849 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g99fn\" (UniqueName: \"kubernetes.io/projected/b8c067c8-89e6-4c27-b894-09ea261d2033-kube-api-access-g99fn\") pod \"nova-cell1-cell-mapping-z7hfp\" (UID: \"b8c067c8-89e6-4c27-b894-09ea261d2033\") " pod="openstack/nova-cell1-cell-mapping-z7hfp" Nov 25 15:18:50 crc kubenswrapper[4806]: I1125 15:18:50.675407 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-z7hfp" Nov 25 15:18:51 crc kubenswrapper[4806]: I1125 15:18:51.184453 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2131abe6-d84b-4035-b318-f0e7046941fa","Type":"ContainerStarted","Data":"876b9550ea9d3257d9b550f5a57bfc0ef051a46b3f017e57db32782414cd0313"} Nov 25 15:18:51 crc kubenswrapper[4806]: I1125 15:18:51.311291 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-z7hfp"] Nov 25 15:18:51 crc kubenswrapper[4806]: I1125 15:18:51.526329 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5fd9b586ff-h9svs" Nov 25 15:18:51 crc kubenswrapper[4806]: I1125 15:18:51.614726 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-hcqg2"] Nov 25 15:18:51 crc kubenswrapper[4806]: I1125 15:18:51.615164 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-78cd565959-hcqg2" podUID="e5ee1a03-d818-4e64-84d4-a742cbb51c50" containerName="dnsmasq-dns" containerID="cri-o://7c185c509fb62faef23709ffdf315342020b04abe733d2d1b719d898488b3973" gracePeriod=10 Nov 25 15:18:51 crc kubenswrapper[4806]: I1125 15:18:51.812278 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-78cd565959-hcqg2" podUID="e5ee1a03-d818-4e64-84d4-a742cbb51c50" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.212:5353: connect: connection refused" Nov 25 15:18:52 crc kubenswrapper[4806]: I1125 15:18:52.195682 4806 generic.go:334] "Generic (PLEG): container finished" podID="e5ee1a03-d818-4e64-84d4-a742cbb51c50" containerID="7c185c509fb62faef23709ffdf315342020b04abe733d2d1b719d898488b3973" exitCode=0 Nov 25 15:18:52 crc kubenswrapper[4806]: I1125 15:18:52.195963 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-hcqg2" event={"ID":"e5ee1a03-d818-4e64-84d4-a742cbb51c50","Type":"ContainerDied","Data":"7c185c509fb62faef23709ffdf315342020b04abe733d2d1b719d898488b3973"} Nov 25 15:18:52 crc kubenswrapper[4806]: I1125 15:18:52.197664 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-z7hfp" event={"ID":"b8c067c8-89e6-4c27-b894-09ea261d2033","Type":"ContainerStarted","Data":"08f6ce0a3f57746056978a8137cdc1c12db6ae61996dc18e050cfd898ca45d62"} Nov 25 15:18:52 crc kubenswrapper[4806]: I1125 15:18:52.197695 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-z7hfp" event={"ID":"b8c067c8-89e6-4c27-b894-09ea261d2033","Type":"ContainerStarted","Data":"bf6e21a0aa4a1e89ff6a2d846d74ac1b44d7286a80c0cc97a7ec478e53e45bb2"} Nov 25 15:18:52 crc kubenswrapper[4806]: I1125 15:18:52.218584 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-z7hfp" podStartSLOduration=2.218563809 podStartE2EDuration="2.218563809s" podCreationTimestamp="2025-11-25 15:18:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:18:52.217257731 +0000 UTC m=+1564.869400142" watchObservedRunningTime="2025-11-25 15:18:52.218563809 +0000 UTC m=+1564.870706220" Nov 25 15:18:52 crc kubenswrapper[4806]: I1125 15:18:52.391069 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78cd565959-hcqg2" Nov 25 15:18:52 crc kubenswrapper[4806]: I1125 15:18:52.589959 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5ee1a03-d818-4e64-84d4-a742cbb51c50-dns-svc\") pod \"e5ee1a03-d818-4e64-84d4-a742cbb51c50\" (UID: \"e5ee1a03-d818-4e64-84d4-a742cbb51c50\") " Nov 25 15:18:52 crc kubenswrapper[4806]: I1125 15:18:52.590112 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5ee1a03-d818-4e64-84d4-a742cbb51c50-ovsdbserver-sb\") pod \"e5ee1a03-d818-4e64-84d4-a742cbb51c50\" (UID: \"e5ee1a03-d818-4e64-84d4-a742cbb51c50\") " Nov 25 15:18:52 crc kubenswrapper[4806]: I1125 15:18:52.590200 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e5ee1a03-d818-4e64-84d4-a742cbb51c50-ovsdbserver-nb\") pod \"e5ee1a03-d818-4e64-84d4-a742cbb51c50\" (UID: \"e5ee1a03-d818-4e64-84d4-a742cbb51c50\") " Nov 25 15:18:52 crc kubenswrapper[4806]: I1125 15:18:52.590426 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5ee1a03-d818-4e64-84d4-a742cbb51c50-config\") pod \"e5ee1a03-d818-4e64-84d4-a742cbb51c50\" (UID: \"e5ee1a03-d818-4e64-84d4-a742cbb51c50\") " Nov 25 15:18:52 crc kubenswrapper[4806]: I1125 15:18:52.590456 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rx2jj\" (UniqueName: \"kubernetes.io/projected/e5ee1a03-d818-4e64-84d4-a742cbb51c50-kube-api-access-rx2jj\") pod \"e5ee1a03-d818-4e64-84d4-a742cbb51c50\" (UID: \"e5ee1a03-d818-4e64-84d4-a742cbb51c50\") " Nov 25 15:18:52 crc kubenswrapper[4806]: I1125 15:18:52.590502 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e5ee1a03-d818-4e64-84d4-a742cbb51c50-dns-swift-storage-0\") pod \"e5ee1a03-d818-4e64-84d4-a742cbb51c50\" (UID: \"e5ee1a03-d818-4e64-84d4-a742cbb51c50\") " Nov 25 15:18:52 crc kubenswrapper[4806]: I1125 15:18:52.605254 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5ee1a03-d818-4e64-84d4-a742cbb51c50-kube-api-access-rx2jj" (OuterVolumeSpecName: "kube-api-access-rx2jj") pod "e5ee1a03-d818-4e64-84d4-a742cbb51c50" (UID: "e5ee1a03-d818-4e64-84d4-a742cbb51c50"). InnerVolumeSpecName "kube-api-access-rx2jj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:18:52 crc kubenswrapper[4806]: I1125 15:18:52.659560 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5ee1a03-d818-4e64-84d4-a742cbb51c50-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e5ee1a03-d818-4e64-84d4-a742cbb51c50" (UID: "e5ee1a03-d818-4e64-84d4-a742cbb51c50"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:18:52 crc kubenswrapper[4806]: I1125 15:18:52.663143 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5ee1a03-d818-4e64-84d4-a742cbb51c50-config" (OuterVolumeSpecName: "config") pod "e5ee1a03-d818-4e64-84d4-a742cbb51c50" (UID: "e5ee1a03-d818-4e64-84d4-a742cbb51c50"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:18:52 crc kubenswrapper[4806]: I1125 15:18:52.694606 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5ee1a03-d818-4e64-84d4-a742cbb51c50-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:52 crc kubenswrapper[4806]: I1125 15:18:52.694635 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rx2jj\" (UniqueName: \"kubernetes.io/projected/e5ee1a03-d818-4e64-84d4-a742cbb51c50-kube-api-access-rx2jj\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:52 crc kubenswrapper[4806]: I1125 15:18:52.694647 4806 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e5ee1a03-d818-4e64-84d4-a742cbb51c50-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:52 crc kubenswrapper[4806]: I1125 15:18:52.696613 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5ee1a03-d818-4e64-84d4-a742cbb51c50-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e5ee1a03-d818-4e64-84d4-a742cbb51c50" (UID: "e5ee1a03-d818-4e64-84d4-a742cbb51c50"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:18:52 crc kubenswrapper[4806]: I1125 15:18:52.702752 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5ee1a03-d818-4e64-84d4-a742cbb51c50-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e5ee1a03-d818-4e64-84d4-a742cbb51c50" (UID: "e5ee1a03-d818-4e64-84d4-a742cbb51c50"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:18:52 crc kubenswrapper[4806]: I1125 15:18:52.741015 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5ee1a03-d818-4e64-84d4-a742cbb51c50-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e5ee1a03-d818-4e64-84d4-a742cbb51c50" (UID: "e5ee1a03-d818-4e64-84d4-a742cbb51c50"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:18:52 crc kubenswrapper[4806]: I1125 15:18:52.796345 4806 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e5ee1a03-d818-4e64-84d4-a742cbb51c50-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:52 crc kubenswrapper[4806]: I1125 15:18:52.796390 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e5ee1a03-d818-4e64-84d4-a742cbb51c50-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:52 crc kubenswrapper[4806]: I1125 15:18:52.796400 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e5ee1a03-d818-4e64-84d4-a742cbb51c50-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:53 crc kubenswrapper[4806]: I1125 15:18:53.208928 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-hcqg2" event={"ID":"e5ee1a03-d818-4e64-84d4-a742cbb51c50","Type":"ContainerDied","Data":"eada171e1e48479ef9ff931798b75a10fab9d79680d414d894fe025687b542ad"} Nov 25 15:18:53 crc kubenswrapper[4806]: I1125 15:18:53.208964 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78cd565959-hcqg2" Nov 25 15:18:53 crc kubenswrapper[4806]: I1125 15:18:53.209006 4806 scope.go:117] "RemoveContainer" containerID="7c185c509fb62faef23709ffdf315342020b04abe733d2d1b719d898488b3973" Nov 25 15:18:53 crc kubenswrapper[4806]: I1125 15:18:53.232103 4806 scope.go:117] "RemoveContainer" containerID="682701d0db2b15f949f19751787840443b9e053f3f775f2ee00da94f8bb493f2" Nov 25 15:18:53 crc kubenswrapper[4806]: I1125 15:18:53.252328 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-hcqg2"] Nov 25 15:18:53 crc kubenswrapper[4806]: I1125 15:18:53.264653 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-hcqg2"] Nov 25 15:18:54 crc kubenswrapper[4806]: I1125 15:18:54.103780 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5ee1a03-d818-4e64-84d4-a742cbb51c50" path="/var/lib/kubelet/pods/e5ee1a03-d818-4e64-84d4-a742cbb51c50/volumes" Nov 25 15:18:54 crc kubenswrapper[4806]: I1125 15:18:54.223030 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2131abe6-d84b-4035-b318-f0e7046941fa","Type":"ContainerStarted","Data":"f783371719803fd439cccf23b096897c774b777d778590583888a0cbf1c5a2d6"} Nov 25 15:18:54 crc kubenswrapper[4806]: I1125 15:18:54.223206 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2131abe6-d84b-4035-b318-f0e7046941fa" containerName="ceilometer-central-agent" containerID="cri-o://087c6c36e4645e05188b6bbbaea269fed43d5049c0ccdf0946c143d04c9bdf57" gracePeriod=30 Nov 25 15:18:54 crc kubenswrapper[4806]: I1125 15:18:54.223234 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2131abe6-d84b-4035-b318-f0e7046941fa" containerName="proxy-httpd" containerID="cri-o://f783371719803fd439cccf23b096897c774b777d778590583888a0cbf1c5a2d6" gracePeriod=30 Nov 25 15:18:54 crc kubenswrapper[4806]: I1125 15:18:54.223403 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2131abe6-d84b-4035-b318-f0e7046941fa" containerName="ceilometer-notification-agent" containerID="cri-o://db9123270daeb5ea714a4b4f73cd88b9ac115eac8409701ae7916133002af08f" gracePeriod=30 Nov 25 15:18:54 crc kubenswrapper[4806]: I1125 15:18:54.223464 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2131abe6-d84b-4035-b318-f0e7046941fa" containerName="sg-core" containerID="cri-o://876b9550ea9d3257d9b550f5a57bfc0ef051a46b3f017e57db32782414cd0313" gracePeriod=30 Nov 25 15:18:54 crc kubenswrapper[4806]: I1125 15:18:54.223546 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 15:18:54 crc kubenswrapper[4806]: I1125 15:18:54.253069 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.950340822 podStartE2EDuration="8.25304948s" podCreationTimestamp="2025-11-25 15:18:46 +0000 UTC" firstStartedPulling="2025-11-25 15:18:47.158196629 +0000 UTC m=+1559.810339040" lastFinishedPulling="2025-11-25 15:18:53.460905257 +0000 UTC m=+1566.113047698" observedRunningTime="2025-11-25 15:18:54.248809488 +0000 UTC m=+1566.900951919" watchObservedRunningTime="2025-11-25 15:18:54.25304948 +0000 UTC m=+1566.905191891" Nov 25 15:18:55 crc kubenswrapper[4806]: I1125 15:18:55.235189 4806 
generic.go:334] "Generic (PLEG): container finished" podID="2131abe6-d84b-4035-b318-f0e7046941fa" containerID="f783371719803fd439cccf23b096897c774b777d778590583888a0cbf1c5a2d6" exitCode=0 Nov 25 15:18:55 crc kubenswrapper[4806]: I1125 15:18:55.235524 4806 generic.go:334] "Generic (PLEG): container finished" podID="2131abe6-d84b-4035-b318-f0e7046941fa" containerID="876b9550ea9d3257d9b550f5a57bfc0ef051a46b3f017e57db32782414cd0313" exitCode=2 Nov 25 15:18:55 crc kubenswrapper[4806]: I1125 15:18:55.235536 4806 generic.go:334] "Generic (PLEG): container finished" podID="2131abe6-d84b-4035-b318-f0e7046941fa" containerID="db9123270daeb5ea714a4b4f73cd88b9ac115eac8409701ae7916133002af08f" exitCode=0 Nov 25 15:18:55 crc kubenswrapper[4806]: I1125 15:18:55.235261 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2131abe6-d84b-4035-b318-f0e7046941fa","Type":"ContainerDied","Data":"f783371719803fd439cccf23b096897c774b777d778590583888a0cbf1c5a2d6"} Nov 25 15:18:55 crc kubenswrapper[4806]: I1125 15:18:55.235584 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2131abe6-d84b-4035-b318-f0e7046941fa","Type":"ContainerDied","Data":"876b9550ea9d3257d9b550f5a57bfc0ef051a46b3f017e57db32782414cd0313"} Nov 25 15:18:55 crc kubenswrapper[4806]: I1125 15:18:55.235604 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2131abe6-d84b-4035-b318-f0e7046941fa","Type":"ContainerDied","Data":"db9123270daeb5ea714a4b4f73cd88b9ac115eac8409701ae7916133002af08f"} Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.103898 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.206675 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2131abe6-d84b-4035-b318-f0e7046941fa-combined-ca-bundle\") pod \"2131abe6-d84b-4035-b318-f0e7046941fa\" (UID: \"2131abe6-d84b-4035-b318-f0e7046941fa\") " Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.206734 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2131abe6-d84b-4035-b318-f0e7046941fa-ceilometer-tls-certs\") pod \"2131abe6-d84b-4035-b318-f0e7046941fa\" (UID: \"2131abe6-d84b-4035-b318-f0e7046941fa\") " Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.206832 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2131abe6-d84b-4035-b318-f0e7046941fa-log-httpd\") pod \"2131abe6-d84b-4035-b318-f0e7046941fa\" (UID: \"2131abe6-d84b-4035-b318-f0e7046941fa\") " Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.206894 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhmj5\" (UniqueName: \"kubernetes.io/projected/2131abe6-d84b-4035-b318-f0e7046941fa-kube-api-access-lhmj5\") pod \"2131abe6-d84b-4035-b318-f0e7046941fa\" (UID: \"2131abe6-d84b-4035-b318-f0e7046941fa\") " Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.206986 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2131abe6-d84b-4035-b318-f0e7046941fa-scripts\") pod \"2131abe6-d84b-4035-b318-f0e7046941fa\" (UID: \"2131abe6-d84b-4035-b318-f0e7046941fa\") " Nov 25 15:18:56 crc 
Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.207095 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2131abe6-d84b-4035-b318-f0e7046941fa-config-data\") pod \"2131abe6-d84b-4035-b318-f0e7046941fa\" (UID: \"2131abe6-d84b-4035-b318-f0e7046941fa\") "
Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.207200 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2131abe6-d84b-4035-b318-f0e7046941fa-sg-core-conf-yaml\") pod \"2131abe6-d84b-4035-b318-f0e7046941fa\" (UID: \"2131abe6-d84b-4035-b318-f0e7046941fa\") "
Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.209175 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2131abe6-d84b-4035-b318-f0e7046941fa-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2131abe6-d84b-4035-b318-f0e7046941fa" (UID: "2131abe6-d84b-4035-b318-f0e7046941fa"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.209296 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2131abe6-d84b-4035-b318-f0e7046941fa-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2131abe6-d84b-4035-b318-f0e7046941fa" (UID: "2131abe6-d84b-4035-b318-f0e7046941fa"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.217615 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2131abe6-d84b-4035-b318-f0e7046941fa-kube-api-access-lhmj5" (OuterVolumeSpecName: "kube-api-access-lhmj5") pod "2131abe6-d84b-4035-b318-f0e7046941fa" (UID: "2131abe6-d84b-4035-b318-f0e7046941fa"). InnerVolumeSpecName "kube-api-access-lhmj5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.220632 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2131abe6-d84b-4035-b318-f0e7046941fa-scripts" (OuterVolumeSpecName: "scripts") pod "2131abe6-d84b-4035-b318-f0e7046941fa" (UID: "2131abe6-d84b-4035-b318-f0e7046941fa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.252028 4806 generic.go:334] "Generic (PLEG): container finished" podID="2131abe6-d84b-4035-b318-f0e7046941fa" containerID="087c6c36e4645e05188b6bbbaea269fed43d5049c0ccdf0946c143d04c9bdf57" exitCode=0
Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.252080 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2131abe6-d84b-4035-b318-f0e7046941fa","Type":"ContainerDied","Data":"087c6c36e4645e05188b6bbbaea269fed43d5049c0ccdf0946c143d04c9bdf57"}
Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.252115 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2131abe6-d84b-4035-b318-f0e7046941fa","Type":"ContainerDied","Data":"51a6512fc770460fd942e80f523ec5269a121670bcca6cc44aebee3a149903c2"}
Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.252138 4806 scope.go:117] "RemoveContainer" containerID="f783371719803fd439cccf23b096897c774b777d778590583888a0cbf1c5a2d6"
Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.252289 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.260173 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2131abe6-d84b-4035-b318-f0e7046941fa-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2131abe6-d84b-4035-b318-f0e7046941fa" (UID: "2131abe6-d84b-4035-b318-f0e7046941fa"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.281693 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2131abe6-d84b-4035-b318-f0e7046941fa-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "2131abe6-d84b-4035-b318-f0e7046941fa" (UID: "2131abe6-d84b-4035-b318-f0e7046941fa"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.300537 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2131abe6-d84b-4035-b318-f0e7046941fa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2131abe6-d84b-4035-b318-f0e7046941fa" (UID: "2131abe6-d84b-4035-b318-f0e7046941fa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.310591 4806 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2131abe6-d84b-4035-b318-f0e7046941fa-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.310631 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhmj5\" (UniqueName: \"kubernetes.io/projected/2131abe6-d84b-4035-b318-f0e7046941fa-kube-api-access-lhmj5\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.310648 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2131abe6-d84b-4035-b318-f0e7046941fa-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.310660 4806 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2131abe6-d84b-4035-b318-f0e7046941fa-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.310673 4806 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2131abe6-d84b-4035-b318-f0e7046941fa-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.310688 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2131abe6-d84b-4035-b318-f0e7046941fa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.310701 4806 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2131abe6-d84b-4035-b318-f0e7046941fa-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.328649 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2131abe6-d84b-4035-b318-f0e7046941fa-config-data" (OuterVolumeSpecName: "config-data") pod "2131abe6-d84b-4035-b318-f0e7046941fa" (UID: "2131abe6-d84b-4035-b318-f0e7046941fa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.352194 4806 scope.go:117] "RemoveContainer" containerID="876b9550ea9d3257d9b550f5a57bfc0ef051a46b3f017e57db32782414cd0313" Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.372855 4806 scope.go:117] "RemoveContainer" containerID="db9123270daeb5ea714a4b4f73cd88b9ac115eac8409701ae7916133002af08f" Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.392527 4806 scope.go:117] "RemoveContainer" containerID="087c6c36e4645e05188b6bbbaea269fed43d5049c0ccdf0946c143d04c9bdf57" Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.412409 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2131abe6-d84b-4035-b318-f0e7046941fa-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.413686 4806 scope.go:117] "RemoveContainer" containerID="f783371719803fd439cccf23b096897c774b777d778590583888a0cbf1c5a2d6" Nov 25 15:18:56 crc kubenswrapper[4806]: E1125 15:18:56.414143 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f783371719803fd439cccf23b096897c774b777d778590583888a0cbf1c5a2d6\": container with ID starting with f783371719803fd439cccf23b096897c774b777d778590583888a0cbf1c5a2d6 not found: ID does not exist" containerID="f783371719803fd439cccf23b096897c774b777d778590583888a0cbf1c5a2d6" Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.414173 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f783371719803fd439cccf23b096897c774b777d778590583888a0cbf1c5a2d6"} err="failed to get container status \"f783371719803fd439cccf23b096897c774b777d778590583888a0cbf1c5a2d6\": rpc error: code = NotFound desc = could not find container \"f783371719803fd439cccf23b096897c774b777d778590583888a0cbf1c5a2d6\": container with ID starting with f783371719803fd439cccf23b096897c774b777d778590583888a0cbf1c5a2d6 not found: ID does not exist" Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.414194 4806 scope.go:117] "RemoveContainer" containerID="876b9550ea9d3257d9b550f5a57bfc0ef051a46b3f017e57db32782414cd0313" Nov 25 15:18:56 crc kubenswrapper[4806]: E1125 15:18:56.414541 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"876b9550ea9d3257d9b550f5a57bfc0ef051a46b3f017e57db32782414cd0313\": container with ID starting with 876b9550ea9d3257d9b550f5a57bfc0ef051a46b3f017e57db32782414cd0313 not found: ID does not exist" containerID="876b9550ea9d3257d9b550f5a57bfc0ef051a46b3f017e57db32782414cd0313" Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.414565 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"876b9550ea9d3257d9b550f5a57bfc0ef051a46b3f017e57db32782414cd0313"} err="failed to get container status \"876b9550ea9d3257d9b550f5a57bfc0ef051a46b3f017e57db32782414cd0313\": rpc error: code = NotFound desc = could not find container \"876b9550ea9d3257d9b550f5a57bfc0ef051a46b3f017e57db32782414cd0313\": container with ID starting with 876b9550ea9d3257d9b550f5a57bfc0ef051a46b3f017e57db32782414cd0313 not found: ID does not exist" Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.414577 4806 scope.go:117] "RemoveContainer" containerID="db9123270daeb5ea714a4b4f73cd88b9ac115eac8409701ae7916133002af08f" Nov 25 15:18:56 crc kubenswrapper[4806]: E1125 
Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.414889 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db9123270daeb5ea714a4b4f73cd88b9ac115eac8409701ae7916133002af08f"} err="failed to get container status \"db9123270daeb5ea714a4b4f73cd88b9ac115eac8409701ae7916133002af08f\": rpc error: code = NotFound desc = could not find container \"db9123270daeb5ea714a4b4f73cd88b9ac115eac8409701ae7916133002af08f\": container with ID starting with db9123270daeb5ea714a4b4f73cd88b9ac115eac8409701ae7916133002af08f not found: ID does not exist"
Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.414908 4806 scope.go:117] "RemoveContainer" containerID="087c6c36e4645e05188b6bbbaea269fed43d5049c0ccdf0946c143d04c9bdf57"
Nov 25 15:18:56 crc kubenswrapper[4806]: E1125 15:18:56.415147 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"087c6c36e4645e05188b6bbbaea269fed43d5049c0ccdf0946c143d04c9bdf57\": container with ID starting with 087c6c36e4645e05188b6bbbaea269fed43d5049c0ccdf0946c143d04c9bdf57 not found: ID does not exist" containerID="087c6c36e4645e05188b6bbbaea269fed43d5049c0ccdf0946c143d04c9bdf57"
Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.415167 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"087c6c36e4645e05188b6bbbaea269fed43d5049c0ccdf0946c143d04c9bdf57"} err="failed to get container status \"087c6c36e4645e05188b6bbbaea269fed43d5049c0ccdf0946c143d04c9bdf57\": rpc error: code = NotFound desc = could not find container \"087c6c36e4645e05188b6bbbaea269fed43d5049c0ccdf0946c143d04c9bdf57\": container with ID starting with 087c6c36e4645e05188b6bbbaea269fed43d5049c0ccdf0946c143d04c9bdf57 not found: ID does not exist"
Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.603004 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.636792 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.657798 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Nov 25 15:18:56 crc kubenswrapper[4806]: E1125 15:18:56.659016 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2131abe6-d84b-4035-b318-f0e7046941fa" containerName="sg-core"
Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.659055 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="2131abe6-d84b-4035-b318-f0e7046941fa" containerName="sg-core"
Nov 25 15:18:56 crc kubenswrapper[4806]: E1125 15:18:56.659075 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2131abe6-d84b-4035-b318-f0e7046941fa" containerName="ceilometer-notification-agent"
Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.659082 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="2131abe6-d84b-4035-b318-f0e7046941fa" containerName="ceilometer-notification-agent"
Nov 25 15:18:56 crc kubenswrapper[4806]: E1125 15:18:56.659113 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5ee1a03-d818-4e64-84d4-a742cbb51c50" containerName="dnsmasq-dns"
Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.659120 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5ee1a03-d818-4e64-84d4-a742cbb51c50" containerName="dnsmasq-dns"
Nov 25 15:18:56 crc kubenswrapper[4806]: E1125 15:18:56.659168 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2131abe6-d84b-4035-b318-f0e7046941fa" containerName="ceilometer-central-agent"
Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.659174 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="2131abe6-d84b-4035-b318-f0e7046941fa" containerName="ceilometer-central-agent"
Nov 25 15:18:56 crc kubenswrapper[4806]: E1125 15:18:56.659187 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2131abe6-d84b-4035-b318-f0e7046941fa" containerName="proxy-httpd"
Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.659193 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="2131abe6-d84b-4035-b318-f0e7046941fa" containerName="proxy-httpd"
Nov 25 15:18:56 crc kubenswrapper[4806]: E1125 15:18:56.659203 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5ee1a03-d818-4e64-84d4-a742cbb51c50" containerName="init"
Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.659210 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5ee1a03-d818-4e64-84d4-a742cbb51c50" containerName="init"
Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.659592 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="2131abe6-d84b-4035-b318-f0e7046941fa" containerName="sg-core"
Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.659619 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="2131abe6-d84b-4035-b318-f0e7046941fa" containerName="ceilometer-central-agent"
Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.659631 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="2131abe6-d84b-4035-b318-f0e7046941fa" containerName="ceilometer-notification-agent"
Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.659639 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5ee1a03-d818-4e64-84d4-a742cbb51c50" containerName="dnsmasq-dns"
Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.659648 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="2131abe6-d84b-4035-b318-f0e7046941fa" containerName="proxy-httpd"
Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.661723 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.666167 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.666186 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.666338 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.679138 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.721255 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/770f3c08-052f-4538-a297-806acad940ef-run-httpd\") pod \"ceilometer-0\" (UID: \"770f3c08-052f-4538-a297-806acad940ef\") " pod="openstack/ceilometer-0" Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.721305 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/770f3c08-052f-4538-a297-806acad940ef-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"770f3c08-052f-4538-a297-806acad940ef\") " pod="openstack/ceilometer-0" Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.721346 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/770f3c08-052f-4538-a297-806acad940ef-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"770f3c08-052f-4538-a297-806acad940ef\") " pod="openstack/ceilometer-0" Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.721368 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/770f3c08-052f-4538-a297-806acad940ef-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"770f3c08-052f-4538-a297-806acad940ef\") " pod="openstack/ceilometer-0" Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.721388 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/770f3c08-052f-4538-a297-806acad940ef-log-httpd\") pod \"ceilometer-0\" (UID: \"770f3c08-052f-4538-a297-806acad940ef\") " pod="openstack/ceilometer-0" Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.721422 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fqjn\" (UniqueName: \"kubernetes.io/projected/770f3c08-052f-4538-a297-806acad940ef-kube-api-access-7fqjn\") pod \"ceilometer-0\" (UID: \"770f3c08-052f-4538-a297-806acad940ef\") " pod="openstack/ceilometer-0" Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.721467 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/770f3c08-052f-4538-a297-806acad940ef-config-data\") pod \"ceilometer-0\" (UID: \"770f3c08-052f-4538-a297-806acad940ef\") " pod="openstack/ceilometer-0" Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.721498 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/770f3c08-052f-4538-a297-806acad940ef-scripts\") pod \"ceilometer-0\" (UID: \"770f3c08-052f-4538-a297-806acad940ef\") " pod="openstack/ceilometer-0" Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.823185 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/770f3c08-052f-4538-a297-806acad940ef-run-httpd\") pod \"ceilometer-0\" (UID: \"770f3c08-052f-4538-a297-806acad940ef\") " pod="openstack/ceilometer-0" Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.823269 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/770f3c08-052f-4538-a297-806acad940ef-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"770f3c08-052f-4538-a297-806acad940ef\") " pod="openstack/ceilometer-0" Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.823307 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/770f3c08-052f-4538-a297-806acad940ef-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"770f3c08-052f-4538-a297-806acad940ef\") " pod="openstack/ceilometer-0" Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.823368 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/770f3c08-052f-4538-a297-806acad940ef-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"770f3c08-052f-4538-a297-806acad940ef\") " pod="openstack/ceilometer-0" Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.823404 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/770f3c08-052f-4538-a297-806acad940ef-log-httpd\") pod \"ceilometer-0\" (UID: \"770f3c08-052f-4538-a297-806acad940ef\") " pod="openstack/ceilometer-0" Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.823459 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fqjn\" (UniqueName: \"kubernetes.io/projected/770f3c08-052f-4538-a297-806acad940ef-kube-api-access-7fqjn\") pod \"ceilometer-0\" (UID: \"770f3c08-052f-4538-a297-806acad940ef\") " pod="openstack/ceilometer-0" Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.823535 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/770f3c08-052f-4538-a297-806acad940ef-config-data\") pod \"ceilometer-0\" (UID: \"770f3c08-052f-4538-a297-806acad940ef\") " pod="openstack/ceilometer-0" Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.823583 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/770f3c08-052f-4538-a297-806acad940ef-scripts\") pod \"ceilometer-0\" (UID: \"770f3c08-052f-4538-a297-806acad940ef\") " pod="openstack/ceilometer-0" Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.823851 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/770f3c08-052f-4538-a297-806acad940ef-run-httpd\") pod \"ceilometer-0\" (UID: \"770f3c08-052f-4538-a297-806acad940ef\") " pod="openstack/ceilometer-0" Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.823970 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/770f3c08-052f-4538-a297-806acad940ef-log-httpd\") pod \"ceilometer-0\" (UID: \"770f3c08-052f-4538-a297-806acad940ef\") " pod="openstack/ceilometer-0" Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.829157 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/770f3c08-052f-4538-a297-806acad940ef-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"770f3c08-052f-4538-a297-806acad940ef\") " pod="openstack/ceilometer-0" Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.829184 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/770f3c08-052f-4538-a297-806acad940ef-scripts\") pod \"ceilometer-0\" (UID: \"770f3c08-052f-4538-a297-806acad940ef\") " pod="openstack/ceilometer-0" Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.829596 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/770f3c08-052f-4538-a297-806acad940ef-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"770f3c08-052f-4538-a297-806acad940ef\") " pod="openstack/ceilometer-0" Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.844569 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/770f3c08-052f-4538-a297-806acad940ef-config-data\") pod \"ceilometer-0\" (UID: \"770f3c08-052f-4538-a297-806acad940ef\") " pod="openstack/ceilometer-0" Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.846010 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fqjn\" (UniqueName: \"kubernetes.io/projected/770f3c08-052f-4538-a297-806acad940ef-kube-api-access-7fqjn\") pod \"ceilometer-0\" (UID: \"770f3c08-052f-4538-a297-806acad940ef\") " pod="openstack/ceilometer-0" Nov 25 15:18:56 crc kubenswrapper[4806]: I1125 15:18:56.850145 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/770f3c08-052f-4538-a297-806acad940ef-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"770f3c08-052f-4538-a297-806acad940ef\") " pod="openstack/ceilometer-0" Nov 25 15:18:57 crc kubenswrapper[4806]: I1125 15:18:57.004239 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:18:57 crc kubenswrapper[4806]: E1125 15:18:57.364331 4806 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod569f4221_7042_41a7_a783_a975cc7a02b4.slice\": RecentStats: unable to find data in memory cache]" Nov 25 15:18:57 crc kubenswrapper[4806]: I1125 15:18:57.491930 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:18:58 crc kubenswrapper[4806]: I1125 15:18:58.101210 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2131abe6-d84b-4035-b318-f0e7046941fa" path="/var/lib/kubelet/pods/2131abe6-d84b-4035-b318-f0e7046941fa/volumes" Nov 25 15:18:58 crc kubenswrapper[4806]: I1125 15:18:58.287872 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"770f3c08-052f-4538-a297-806acad940ef","Type":"ContainerStarted","Data":"fca507c4bd8daa7a951c0b3911fcdb2732a2c26edd1c680c59d9c8dc494e458e"} Nov 25 15:18:58 crc kubenswrapper[4806]: I1125 15:18:58.560718 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 15:18:58 crc kubenswrapper[4806]: I1125 15:18:58.561525 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 15:18:59 crc kubenswrapper[4806]: I1125 15:18:59.300095 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"770f3c08-052f-4538-a297-806acad940ef","Type":"ContainerStarted","Data":"9ae869aefdb5ad687a31e56088a25b24056315132ad8a0122db7fd764db18842"} Nov 25 15:18:59 crc kubenswrapper[4806]: I1125 15:18:59.302604 4806 generic.go:334] "Generic (PLEG): container finished" podID="b8c067c8-89e6-4c27-b894-09ea261d2033" containerID="08f6ce0a3f57746056978a8137cdc1c12db6ae61996dc18e050cfd898ca45d62" exitCode=0 Nov 25 15:18:59 crc kubenswrapper[4806]: I1125 15:18:59.302677 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-z7hfp" event={"ID":"b8c067c8-89e6-4c27-b894-09ea261d2033","Type":"ContainerDied","Data":"08f6ce0a3f57746056978a8137cdc1c12db6ae61996dc18e050cfd898ca45d62"} Nov 25 15:18:59 crc kubenswrapper[4806]: I1125 15:18:59.329081 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 25 15:18:59 crc kubenswrapper[4806]: I1125 15:18:59.332055 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 25 15:18:59 crc kubenswrapper[4806]: I1125 15:18:59.334039 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 25 15:18:59 crc kubenswrapper[4806]: I1125 15:18:59.575621 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c130d7c1-8c6c-4b0d-b172-64872647a752" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.224:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 15:18:59 crc kubenswrapper[4806]: I1125 15:18:59.575684 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c130d7c1-8c6c-4b0d-b172-64872647a752" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.224:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 15:19:00 crc 
kubenswrapper[4806]: I1125 15:19:00.317227 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"770f3c08-052f-4538-a297-806acad940ef","Type":"ContainerStarted","Data":"30c0d6fcf97bc38ee39c890329d00d65e2051fa3740163231667201e8e7f4130"} Nov 25 15:19:00 crc kubenswrapper[4806]: I1125 15:19:00.325973 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 25 15:19:00 crc kubenswrapper[4806]: I1125 15:19:00.847588 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-z7hfp" Nov 25 15:19:00 crc kubenswrapper[4806]: I1125 15:19:00.933817 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g99fn\" (UniqueName: \"kubernetes.io/projected/b8c067c8-89e6-4c27-b894-09ea261d2033-kube-api-access-g99fn\") pod \"b8c067c8-89e6-4c27-b894-09ea261d2033\" (UID: \"b8c067c8-89e6-4c27-b894-09ea261d2033\") " Nov 25 15:19:00 crc kubenswrapper[4806]: I1125 15:19:00.934556 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8c067c8-89e6-4c27-b894-09ea261d2033-combined-ca-bundle\") pod \"b8c067c8-89e6-4c27-b894-09ea261d2033\" (UID: \"b8c067c8-89e6-4c27-b894-09ea261d2033\") " Nov 25 15:19:00 crc kubenswrapper[4806]: I1125 15:19:00.934751 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8c067c8-89e6-4c27-b894-09ea261d2033-scripts\") pod \"b8c067c8-89e6-4c27-b894-09ea261d2033\" (UID: \"b8c067c8-89e6-4c27-b894-09ea261d2033\") " Nov 25 15:19:00 crc kubenswrapper[4806]: I1125 15:19:00.934916 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8c067c8-89e6-4c27-b894-09ea261d2033-config-data\") pod \"b8c067c8-89e6-4c27-b894-09ea261d2033\" (UID: \"b8c067c8-89e6-4c27-b894-09ea261d2033\") " Nov 25 15:19:00 crc kubenswrapper[4806]: I1125 15:19:00.942532 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8c067c8-89e6-4c27-b894-09ea261d2033-kube-api-access-g99fn" (OuterVolumeSpecName: "kube-api-access-g99fn") pod "b8c067c8-89e6-4c27-b894-09ea261d2033" (UID: "b8c067c8-89e6-4c27-b894-09ea261d2033"). InnerVolumeSpecName "kube-api-access-g99fn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:19:00 crc kubenswrapper[4806]: I1125 15:19:00.944686 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8c067c8-89e6-4c27-b894-09ea261d2033-scripts" (OuterVolumeSpecName: "scripts") pod "b8c067c8-89e6-4c27-b894-09ea261d2033" (UID: "b8c067c8-89e6-4c27-b894-09ea261d2033"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:19:00 crc kubenswrapper[4806]: I1125 15:19:00.964091 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8c067c8-89e6-4c27-b894-09ea261d2033-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b8c067c8-89e6-4c27-b894-09ea261d2033" (UID: "b8c067c8-89e6-4c27-b894-09ea261d2033"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:19:00 crc kubenswrapper[4806]: I1125 15:19:00.973731 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8c067c8-89e6-4c27-b894-09ea261d2033-config-data" (OuterVolumeSpecName: "config-data") pod "b8c067c8-89e6-4c27-b894-09ea261d2033" (UID: "b8c067c8-89e6-4c27-b894-09ea261d2033"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:19:01 crc kubenswrapper[4806]: I1125 15:19:01.037375 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8c067c8-89e6-4c27-b894-09ea261d2033-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:19:01 crc kubenswrapper[4806]: I1125 15:19:01.037409 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g99fn\" (UniqueName: \"kubernetes.io/projected/b8c067c8-89e6-4c27-b894-09ea261d2033-kube-api-access-g99fn\") on node \"crc\" DevicePath \"\"" Nov 25 15:19:01 crc kubenswrapper[4806]: I1125 15:19:01.037419 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8c067c8-89e6-4c27-b894-09ea261d2033-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:19:01 crc kubenswrapper[4806]: I1125 15:19:01.037429 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8c067c8-89e6-4c27-b894-09ea261d2033-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:19:01 crc kubenswrapper[4806]: I1125 15:19:01.330439 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-z7hfp" event={"ID":"b8c067c8-89e6-4c27-b894-09ea261d2033","Type":"ContainerDied","Data":"bf6e21a0aa4a1e89ff6a2d846d74ac1b44d7286a80c0cc97a7ec478e53e45bb2"} Nov 25 15:19:01 crc kubenswrapper[4806]: I1125 15:19:01.330484 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf6e21a0aa4a1e89ff6a2d846d74ac1b44d7286a80c0cc97a7ec478e53e45bb2" Nov 25 15:19:01 crc kubenswrapper[4806]: I1125 15:19:01.330745 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-z7hfp" Nov 25 15:19:01 crc kubenswrapper[4806]: I1125 15:19:01.502928 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 15:19:01 crc kubenswrapper[4806]: I1125 15:19:01.503433 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c130d7c1-8c6c-4b0d-b172-64872647a752" containerName="nova-api-log" containerID="cri-o://9787488cf5c018f56497f859a403895b4215b36c55c9c8719741ecef975df8ca" gracePeriod=30 Nov 25 15:19:01 crc kubenswrapper[4806]: I1125 15:19:01.503505 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c130d7c1-8c6c-4b0d-b172-64872647a752" containerName="nova-api-api" containerID="cri-o://45a2a8e32b919179e883c72953745cf7f7bcefe782197cd627a8ee03c4adcd0a" gracePeriod=30 Nov 25 15:19:01 crc kubenswrapper[4806]: I1125 15:19:01.526884 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 15:19:01 crc kubenswrapper[4806]: I1125 15:19:01.527730 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="d8644516-0502-4c72-8daf-954231e7d856" containerName="nova-scheduler-scheduler" containerID="cri-o://a9f9911b880c0492199d055a3b2b4e1f1e6b9942f77aa00eabb077bfbcc9bfc7" gracePeriod=30 Nov 25 15:19:01 crc kubenswrapper[4806]: I1125 15:19:01.555265 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 15:19:02 crc kubenswrapper[4806]: I1125 15:19:02.348709 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"770f3c08-052f-4538-a297-806acad940ef","Type":"ContainerStarted","Data":"3f427556cf187414d4eda212da953532e585203d9e599e177f1e9d54eee99022"} Nov 25 15:19:02 crc kubenswrapper[4806]: I1125 15:19:02.350791 4806 generic.go:334] "Generic (PLEG): container finished" podID="c130d7c1-8c6c-4b0d-b172-64872647a752" containerID="9787488cf5c018f56497f859a403895b4215b36c55c9c8719741ecef975df8ca" exitCode=143 Nov 25 15:19:02 crc kubenswrapper[4806]: I1125 15:19:02.352178 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c130d7c1-8c6c-4b0d-b172-64872647a752","Type":"ContainerDied","Data":"9787488cf5c018f56497f859a403895b4215b36c55c9c8719741ecef975df8ca"} Nov 25 15:19:03 crc kubenswrapper[4806]: I1125 15:19:03.365294 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"770f3c08-052f-4538-a297-806acad940ef","Type":"ContainerStarted","Data":"87e86d907adffc3e9b7ad4fc41b0ea358b9ac1a0750161de51c6cac3a3793985"} Nov 25 15:19:03 crc kubenswrapper[4806]: I1125 15:19:03.365376 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="440a9ff6-14b2-4205-bdd4-4e4861d236a9" containerName="nova-metadata-log" containerID="cri-o://8b9f233170f15daa19ac1f91e6ecefc9af17b1f6935b0f6fb3cdfce85f2c829a" gracePeriod=30 Nov 25 15:19:03 crc kubenswrapper[4806]: I1125 15:19:03.365481 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="440a9ff6-14b2-4205-bdd4-4e4861d236a9" containerName="nova-metadata-metadata" containerID="cri-o://7d244ac6bacfe54898c1a0aede11a32ab58c14a64144e61a89d7600ed3f6fc35" gracePeriod=30 Nov 25 15:19:04 crc kubenswrapper[4806]: I1125 15:19:04.376969 4806 generic.go:334] "Generic (PLEG): container finished" 
podID="440a9ff6-14b2-4205-bdd4-4e4861d236a9" containerID="8b9f233170f15daa19ac1f91e6ecefc9af17b1f6935b0f6fb3cdfce85f2c829a" exitCode=143 Nov 25 15:19:04 crc kubenswrapper[4806]: I1125 15:19:04.377090 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"440a9ff6-14b2-4205-bdd4-4e4861d236a9","Type":"ContainerDied","Data":"8b9f233170f15daa19ac1f91e6ecefc9af17b1f6935b0f6fb3cdfce85f2c829a"} Nov 25 15:19:04 crc kubenswrapper[4806]: I1125 15:19:04.387616 4806 generic.go:334] "Generic (PLEG): container finished" podID="d8644516-0502-4c72-8daf-954231e7d856" containerID="a9f9911b880c0492199d055a3b2b4e1f1e6b9942f77aa00eabb077bfbcc9bfc7" exitCode=0 Nov 25 15:19:04 crc kubenswrapper[4806]: I1125 15:19:04.388823 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d8644516-0502-4c72-8daf-954231e7d856","Type":"ContainerDied","Data":"a9f9911b880c0492199d055a3b2b4e1f1e6b9942f77aa00eabb077bfbcc9bfc7"} Nov 25 15:19:04 crc kubenswrapper[4806]: I1125 15:19:04.388900 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 15:19:04 crc kubenswrapper[4806]: I1125 15:19:04.895674 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 15:19:04 crc kubenswrapper[4806]: I1125 15:19:04.913103 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.559568521 podStartE2EDuration="8.913082711s" podCreationTimestamp="2025-11-25 15:18:56 +0000 UTC" firstStartedPulling="2025-11-25 15:18:57.493831844 +0000 UTC m=+1570.145974265" lastFinishedPulling="2025-11-25 15:19:02.847346044 +0000 UTC m=+1575.499488455" observedRunningTime="2025-11-25 15:19:03.403734033 +0000 UTC m=+1576.055876464" watchObservedRunningTime="2025-11-25 15:19:04.913082711 +0000 UTC m=+1577.565225132" Nov 25 15:19:04 crc kubenswrapper[4806]: I1125 15:19:04.921996 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8644516-0502-4c72-8daf-954231e7d856-combined-ca-bundle\") pod \"d8644516-0502-4c72-8daf-954231e7d856\" (UID: \"d8644516-0502-4c72-8daf-954231e7d856\") " Nov 25 15:19:04 crc kubenswrapper[4806]: I1125 15:19:04.922177 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtf85\" (UniqueName: \"kubernetes.io/projected/d8644516-0502-4c72-8daf-954231e7d856-kube-api-access-wtf85\") pod \"d8644516-0502-4c72-8daf-954231e7d856\" (UID: \"d8644516-0502-4c72-8daf-954231e7d856\") " Nov 25 15:19:04 crc kubenswrapper[4806]: I1125 15:19:04.937262 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8644516-0502-4c72-8daf-954231e7d856-kube-api-access-wtf85" (OuterVolumeSpecName: "kube-api-access-wtf85") pod "d8644516-0502-4c72-8daf-954231e7d856" (UID: "d8644516-0502-4c72-8daf-954231e7d856"). InnerVolumeSpecName "kube-api-access-wtf85". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:19:04 crc kubenswrapper[4806]: I1125 15:19:04.978557 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8644516-0502-4c72-8daf-954231e7d856-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8644516-0502-4c72-8daf-954231e7d856" (UID: "d8644516-0502-4c72-8daf-954231e7d856"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:19:05 crc kubenswrapper[4806]: I1125 15:19:05.023422 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8644516-0502-4c72-8daf-954231e7d856-config-data\") pod \"d8644516-0502-4c72-8daf-954231e7d856\" (UID: \"d8644516-0502-4c72-8daf-954231e7d856\") " Nov 25 15:19:05 crc kubenswrapper[4806]: I1125 15:19:05.023921 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtf85\" (UniqueName: \"kubernetes.io/projected/d8644516-0502-4c72-8daf-954231e7d856-kube-api-access-wtf85\") on node \"crc\" DevicePath \"\"" Nov 25 15:19:05 crc kubenswrapper[4806]: I1125 15:19:05.023945 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8644516-0502-4c72-8daf-954231e7d856-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:19:05 crc kubenswrapper[4806]: I1125 15:19:05.051296 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8644516-0502-4c72-8daf-954231e7d856-config-data" (OuterVolumeSpecName: "config-data") pod "d8644516-0502-4c72-8daf-954231e7d856" (UID: "d8644516-0502-4c72-8daf-954231e7d856"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:19:05 crc kubenswrapper[4806]: I1125 15:19:05.128986 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8644516-0502-4c72-8daf-954231e7d856-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:19:05 crc kubenswrapper[4806]: I1125 15:19:05.403125 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 15:19:05 crc kubenswrapper[4806]: I1125 15:19:05.410909 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d8644516-0502-4c72-8daf-954231e7d856","Type":"ContainerDied","Data":"d82f54f87897dc6b80eead71f9e351430ad97179340af2910c46f28858cbf981"} Nov 25 15:19:05 crc kubenswrapper[4806]: I1125 15:19:05.410974 4806 scope.go:117] "RemoveContainer" containerID="a9f9911b880c0492199d055a3b2b4e1f1e6b9942f77aa00eabb077bfbcc9bfc7" Nov 25 15:19:05 crc kubenswrapper[4806]: I1125 15:19:05.477303 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 15:19:05 crc kubenswrapper[4806]: I1125 15:19:05.489051 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 15:19:05 crc kubenswrapper[4806]: I1125 15:19:05.500172 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 15:19:05 crc kubenswrapper[4806]: E1125 15:19:05.500798 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8c067c8-89e6-4c27-b894-09ea261d2033" containerName="nova-manage" Nov 25 15:19:05 crc kubenswrapper[4806]: I1125 15:19:05.500825 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8c067c8-89e6-4c27-b894-09ea261d2033" containerName="nova-manage" Nov 25 15:19:05 crc kubenswrapper[4806]: E1125 15:19:05.500884 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8644516-0502-4c72-8daf-954231e7d856" containerName="nova-scheduler-scheduler" Nov 25 15:19:05 crc kubenswrapper[4806]: I1125 15:19:05.500893 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8644516-0502-4c72-8daf-954231e7d856" containerName="nova-scheduler-scheduler" 
Nov 25 15:19:05 crc kubenswrapper[4806]: I1125 15:19:05.501159 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8644516-0502-4c72-8daf-954231e7d856" containerName="nova-scheduler-scheduler" Nov 25 15:19:05 crc kubenswrapper[4806]: I1125 15:19:05.501182 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8c067c8-89e6-4c27-b894-09ea261d2033" containerName="nova-manage" Nov 25 15:19:05 crc kubenswrapper[4806]: I1125 15:19:05.502248 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 15:19:05 crc kubenswrapper[4806]: I1125 15:19:05.510796 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 15:19:05 crc kubenswrapper[4806]: I1125 15:19:05.512817 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 25 15:19:05 crc kubenswrapper[4806]: I1125 15:19:05.536226 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6705187-ba84-405e-9d7a-6e3b97e1b9f3-config-data\") pod \"nova-scheduler-0\" (UID: \"e6705187-ba84-405e-9d7a-6e3b97e1b9f3\") " pod="openstack/nova-scheduler-0" Nov 25 15:19:05 crc kubenswrapper[4806]: I1125 15:19:05.536674 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cwff\" (UniqueName: \"kubernetes.io/projected/e6705187-ba84-405e-9d7a-6e3b97e1b9f3-kube-api-access-8cwff\") pod \"nova-scheduler-0\" (UID: \"e6705187-ba84-405e-9d7a-6e3b97e1b9f3\") " pod="openstack/nova-scheduler-0" Nov 25 15:19:05 crc kubenswrapper[4806]: I1125 15:19:05.536801 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6705187-ba84-405e-9d7a-6e3b97e1b9f3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e6705187-ba84-405e-9d7a-6e3b97e1b9f3\") " pod="openstack/nova-scheduler-0" Nov 25 15:19:05 crc kubenswrapper[4806]: I1125 15:19:05.639022 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cwff\" (UniqueName: \"kubernetes.io/projected/e6705187-ba84-405e-9d7a-6e3b97e1b9f3-kube-api-access-8cwff\") pod \"nova-scheduler-0\" (UID: \"e6705187-ba84-405e-9d7a-6e3b97e1b9f3\") " pod="openstack/nova-scheduler-0" Nov 25 15:19:05 crc kubenswrapper[4806]: I1125 15:19:05.639088 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6705187-ba84-405e-9d7a-6e3b97e1b9f3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e6705187-ba84-405e-9d7a-6e3b97e1b9f3\") " pod="openstack/nova-scheduler-0" Nov 25 15:19:05 crc kubenswrapper[4806]: I1125 15:19:05.639227 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6705187-ba84-405e-9d7a-6e3b97e1b9f3-config-data\") pod \"nova-scheduler-0\" (UID: \"e6705187-ba84-405e-9d7a-6e3b97e1b9f3\") " pod="openstack/nova-scheduler-0" Nov 25 15:19:05 crc kubenswrapper[4806]: I1125 15:19:05.656549 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6705187-ba84-405e-9d7a-6e3b97e1b9f3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e6705187-ba84-405e-9d7a-6e3b97e1b9f3\") " pod="openstack/nova-scheduler-0" Nov 25 15:19:05 crc 
kubenswrapper[4806]: I1125 15:19:05.656624 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6705187-ba84-405e-9d7a-6e3b97e1b9f3-config-data\") pod \"nova-scheduler-0\" (UID: \"e6705187-ba84-405e-9d7a-6e3b97e1b9f3\") " pod="openstack/nova-scheduler-0" Nov 25 15:19:05 crc kubenswrapper[4806]: I1125 15:19:05.658670 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cwff\" (UniqueName: \"kubernetes.io/projected/e6705187-ba84-405e-9d7a-6e3b97e1b9f3-kube-api-access-8cwff\") pod \"nova-scheduler-0\" (UID: \"e6705187-ba84-405e-9d7a-6e3b97e1b9f3\") " pod="openstack/nova-scheduler-0" Nov 25 15:19:05 crc kubenswrapper[4806]: I1125 15:19:05.830543 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.112716 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8644516-0502-4c72-8daf-954231e7d856" path="/var/lib/kubelet/pods/d8644516-0502-4c72-8daf-954231e7d856/volumes" Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.379613 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.399282 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 15:19:06 crc kubenswrapper[4806]: W1125 15:19:06.407623 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6705187_ba84_405e_9d7a_6e3b97e1b9f3.slice/crio-a3d64fdd76ff9294afd9b930d53f7773a8f7f0892c36f800bece6bd383cd5b30 WatchSource:0}: Error finding container a3d64fdd76ff9294afd9b930d53f7773a8f7f0892c36f800bece6bd383cd5b30: Status 404 returned error can't find the container with id a3d64fdd76ff9294afd9b930d53f7773a8f7f0892c36f800bece6bd383cd5b30 Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.428427 4806 generic.go:334] "Generic (PLEG): container finished" podID="c130d7c1-8c6c-4b0d-b172-64872647a752" containerID="45a2a8e32b919179e883c72953745cf7f7bcefe782197cd627a8ee03c4adcd0a" exitCode=0 Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.428474 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c130d7c1-8c6c-4b0d-b172-64872647a752","Type":"ContainerDied","Data":"45a2a8e32b919179e883c72953745cf7f7bcefe782197cd627a8ee03c4adcd0a"} Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.428505 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c130d7c1-8c6c-4b0d-b172-64872647a752","Type":"ContainerDied","Data":"bb1c94497463bb7ca0d4b0564a3137fbb8010f4e9035b192a44f52718ac5f321"} Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.428523 4806 scope.go:117] "RemoveContainer" containerID="45a2a8e32b919179e883c72953745cf7f7bcefe782197cd627a8ee03c4adcd0a" Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.428802 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.480890 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c130d7c1-8c6c-4b0d-b172-64872647a752-config-data\") pod \"c130d7c1-8c6c-4b0d-b172-64872647a752\" (UID: \"c130d7c1-8c6c-4b0d-b172-64872647a752\") " Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.480957 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c130d7c1-8c6c-4b0d-b172-64872647a752-combined-ca-bundle\") pod \"c130d7c1-8c6c-4b0d-b172-64872647a752\" (UID: \"c130d7c1-8c6c-4b0d-b172-64872647a752\") " Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.481033 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c130d7c1-8c6c-4b0d-b172-64872647a752-public-tls-certs\") pod \"c130d7c1-8c6c-4b0d-b172-64872647a752\" (UID: \"c130d7c1-8c6c-4b0d-b172-64872647a752\") " Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.481146 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c130d7c1-8c6c-4b0d-b172-64872647a752-logs\") pod \"c130d7c1-8c6c-4b0d-b172-64872647a752\" (UID: \"c130d7c1-8c6c-4b0d-b172-64872647a752\") " Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.481247 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c130d7c1-8c6c-4b0d-b172-64872647a752-internal-tls-certs\") pod \"c130d7c1-8c6c-4b0d-b172-64872647a752\" (UID: \"c130d7c1-8c6c-4b0d-b172-64872647a752\") " Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.481296 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98j75\" (UniqueName: \"kubernetes.io/projected/c130d7c1-8c6c-4b0d-b172-64872647a752-kube-api-access-98j75\") pod \"c130d7c1-8c6c-4b0d-b172-64872647a752\" (UID: \"c130d7c1-8c6c-4b0d-b172-64872647a752\") " Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.482014 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c130d7c1-8c6c-4b0d-b172-64872647a752-logs" (OuterVolumeSpecName: "logs") pod "c130d7c1-8c6c-4b0d-b172-64872647a752" (UID: "c130d7c1-8c6c-4b0d-b172-64872647a752"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.486563 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c130d7c1-8c6c-4b0d-b172-64872647a752-kube-api-access-98j75" (OuterVolumeSpecName: "kube-api-access-98j75") pod "c130d7c1-8c6c-4b0d-b172-64872647a752" (UID: "c130d7c1-8c6c-4b0d-b172-64872647a752"). InnerVolumeSpecName "kube-api-access-98j75". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.501721 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="440a9ff6-14b2-4205-bdd4-4e4861d236a9" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.220:8775/\": read tcp 10.217.0.2:37154->10.217.0.220:8775: read: connection reset by peer" Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.501772 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="440a9ff6-14b2-4205-bdd4-4e4861d236a9" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.220:8775/\": read tcp 10.217.0.2:37166->10.217.0.220:8775: read: connection reset by peer" Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.513781 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c130d7c1-8c6c-4b0d-b172-64872647a752-config-data" (OuterVolumeSpecName: "config-data") pod "c130d7c1-8c6c-4b0d-b172-64872647a752" (UID: "c130d7c1-8c6c-4b0d-b172-64872647a752"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.515050 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c130d7c1-8c6c-4b0d-b172-64872647a752-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c130d7c1-8c6c-4b0d-b172-64872647a752" (UID: "c130d7c1-8c6c-4b0d-b172-64872647a752"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.550837 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c130d7c1-8c6c-4b0d-b172-64872647a752-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c130d7c1-8c6c-4b0d-b172-64872647a752" (UID: "c130d7c1-8c6c-4b0d-b172-64872647a752"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.559190 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c130d7c1-8c6c-4b0d-b172-64872647a752-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c130d7c1-8c6c-4b0d-b172-64872647a752" (UID: "c130d7c1-8c6c-4b0d-b172-64872647a752"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.584060 4806 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c130d7c1-8c6c-4b0d-b172-64872647a752-logs\") on node \"crc\" DevicePath \"\"" Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.584376 4806 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c130d7c1-8c6c-4b0d-b172-64872647a752-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.584502 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98j75\" (UniqueName: \"kubernetes.io/projected/c130d7c1-8c6c-4b0d-b172-64872647a752-kube-api-access-98j75\") on node \"crc\" DevicePath \"\"" Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.584570 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c130d7c1-8c6c-4b0d-b172-64872647a752-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.584644 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c130d7c1-8c6c-4b0d-b172-64872647a752-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.584699 4806 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c130d7c1-8c6c-4b0d-b172-64872647a752-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.744469 4806 scope.go:117] "RemoveContainer" containerID="9787488cf5c018f56497f859a403895b4215b36c55c9c8719741ecef975df8ca" Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.779838 4806 scope.go:117] "RemoveContainer" containerID="45a2a8e32b919179e883c72953745cf7f7bcefe782197cd627a8ee03c4adcd0a" Nov 25 15:19:06 crc kubenswrapper[4806]: E1125 15:19:06.780304 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45a2a8e32b919179e883c72953745cf7f7bcefe782197cd627a8ee03c4adcd0a\": container with ID starting with 45a2a8e32b919179e883c72953745cf7f7bcefe782197cd627a8ee03c4adcd0a not found: ID does not exist" containerID="45a2a8e32b919179e883c72953745cf7f7bcefe782197cd627a8ee03c4adcd0a" Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.780344 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45a2a8e32b919179e883c72953745cf7f7bcefe782197cd627a8ee03c4adcd0a"} err="failed to get container status \"45a2a8e32b919179e883c72953745cf7f7bcefe782197cd627a8ee03c4adcd0a\": rpc error: code = NotFound desc = could not find container \"45a2a8e32b919179e883c72953745cf7f7bcefe782197cd627a8ee03c4adcd0a\": container with ID starting with 45a2a8e32b919179e883c72953745cf7f7bcefe782197cd627a8ee03c4adcd0a not found: ID does not exist" Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.780366 4806 scope.go:117] "RemoveContainer" containerID="9787488cf5c018f56497f859a403895b4215b36c55c9c8719741ecef975df8ca" Nov 25 15:19:06 crc kubenswrapper[4806]: E1125 15:19:06.780585 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9787488cf5c018f56497f859a403895b4215b36c55c9c8719741ecef975df8ca\": container with ID starting with 
9787488cf5c018f56497f859a403895b4215b36c55c9c8719741ecef975df8ca not found: ID does not exist" containerID="9787488cf5c018f56497f859a403895b4215b36c55c9c8719741ecef975df8ca" Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.780610 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9787488cf5c018f56497f859a403895b4215b36c55c9c8719741ecef975df8ca"} err="failed to get container status \"9787488cf5c018f56497f859a403895b4215b36c55c9c8719741ecef975df8ca\": rpc error: code = NotFound desc = could not find container \"9787488cf5c018f56497f859a403895b4215b36c55c9c8719741ecef975df8ca\": container with ID starting with 9787488cf5c018f56497f859a403895b4215b36c55c9c8719741ecef975df8ca not found: ID does not exist" Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.784832 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.819960 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.851618 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 25 15:19:06 crc kubenswrapper[4806]: E1125 15:19:06.852090 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c130d7c1-8c6c-4b0d-b172-64872647a752" containerName="nova-api-api" Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.852108 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="c130d7c1-8c6c-4b0d-b172-64872647a752" containerName="nova-api-api" Nov 25 15:19:06 crc kubenswrapper[4806]: E1125 15:19:06.852129 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c130d7c1-8c6c-4b0d-b172-64872647a752" containerName="nova-api-log" Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.852135 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="c130d7c1-8c6c-4b0d-b172-64872647a752" containerName="nova-api-log" Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.852376 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="c130d7c1-8c6c-4b0d-b172-64872647a752" containerName="nova-api-log" Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.852422 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="c130d7c1-8c6c-4b0d-b172-64872647a752" containerName="nova-api-api" Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.853555 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.856506 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.856764 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.856901 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.870595 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.918166 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cbc26a8-c7dd-4d9d-bc1b-32e593fc6251-public-tls-certs\") pod \"nova-api-0\" (UID: \"2cbc26a8-c7dd-4d9d-bc1b-32e593fc6251\") " pod="openstack/nova-api-0" Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.918259 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cbc26a8-c7dd-4d9d-bc1b-32e593fc6251-config-data\") pod \"nova-api-0\" (UID: \"2cbc26a8-c7dd-4d9d-bc1b-32e593fc6251\") " pod="openstack/nova-api-0" Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.918299 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cbc26a8-c7dd-4d9d-bc1b-32e593fc6251-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2cbc26a8-c7dd-4d9d-bc1b-32e593fc6251\") " pod="openstack/nova-api-0" Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.918369 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ml9l\" (UniqueName: \"kubernetes.io/projected/2cbc26a8-c7dd-4d9d-bc1b-32e593fc6251-kube-api-access-6ml9l\") pod \"nova-api-0\" (UID: \"2cbc26a8-c7dd-4d9d-bc1b-32e593fc6251\") " pod="openstack/nova-api-0" Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.918452 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cbc26a8-c7dd-4d9d-bc1b-32e593fc6251-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2cbc26a8-c7dd-4d9d-bc1b-32e593fc6251\") " pod="openstack/nova-api-0" Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.918511 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2cbc26a8-c7dd-4d9d-bc1b-32e593fc6251-logs\") pod \"nova-api-0\" (UID: \"2cbc26a8-c7dd-4d9d-bc1b-32e593fc6251\") " pod="openstack/nova-api-0" Nov 25 15:19:06 crc kubenswrapper[4806]: I1125 15:19:06.968226 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.020048 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/440a9ff6-14b2-4205-bdd4-4e4861d236a9-config-data\") pod \"440a9ff6-14b2-4205-bdd4-4e4861d236a9\" (UID: \"440a9ff6-14b2-4205-bdd4-4e4861d236a9\") " Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.020262 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/440a9ff6-14b2-4205-bdd4-4e4861d236a9-combined-ca-bundle\") pod \"440a9ff6-14b2-4205-bdd4-4e4861d236a9\" (UID: \"440a9ff6-14b2-4205-bdd4-4e4861d236a9\") " Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.020361 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/440a9ff6-14b2-4205-bdd4-4e4861d236a9-nova-metadata-tls-certs\") pod \"440a9ff6-14b2-4205-bdd4-4e4861d236a9\" (UID: \"440a9ff6-14b2-4205-bdd4-4e4861d236a9\") " Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.020411 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kz8ns\" (UniqueName: \"kubernetes.io/projected/440a9ff6-14b2-4205-bdd4-4e4861d236a9-kube-api-access-kz8ns\") pod \"440a9ff6-14b2-4205-bdd4-4e4861d236a9\" (UID: \"440a9ff6-14b2-4205-bdd4-4e4861d236a9\") " Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.020462 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/440a9ff6-14b2-4205-bdd4-4e4861d236a9-logs\") pod \"440a9ff6-14b2-4205-bdd4-4e4861d236a9\" (UID: \"440a9ff6-14b2-4205-bdd4-4e4861d236a9\") " Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.020759 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cbc26a8-c7dd-4d9d-bc1b-32e593fc6251-public-tls-certs\") pod \"nova-api-0\" (UID: \"2cbc26a8-c7dd-4d9d-bc1b-32e593fc6251\") " pod="openstack/nova-api-0" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.020838 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cbc26a8-c7dd-4d9d-bc1b-32e593fc6251-config-data\") pod \"nova-api-0\" (UID: \"2cbc26a8-c7dd-4d9d-bc1b-32e593fc6251\") " pod="openstack/nova-api-0" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.020892 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cbc26a8-c7dd-4d9d-bc1b-32e593fc6251-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2cbc26a8-c7dd-4d9d-bc1b-32e593fc6251\") " pod="openstack/nova-api-0" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.020955 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ml9l\" (UniqueName: \"kubernetes.io/projected/2cbc26a8-c7dd-4d9d-bc1b-32e593fc6251-kube-api-access-6ml9l\") pod \"nova-api-0\" (UID: \"2cbc26a8-c7dd-4d9d-bc1b-32e593fc6251\") " pod="openstack/nova-api-0" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.021037 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cbc26a8-c7dd-4d9d-bc1b-32e593fc6251-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"2cbc26a8-c7dd-4d9d-bc1b-32e593fc6251\") " pod="openstack/nova-api-0" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.021114 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2cbc26a8-c7dd-4d9d-bc1b-32e593fc6251-logs\") pod \"nova-api-0\" (UID: \"2cbc26a8-c7dd-4d9d-bc1b-32e593fc6251\") " pod="openstack/nova-api-0" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.025483 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/440a9ff6-14b2-4205-bdd4-4e4861d236a9-logs" (OuterVolumeSpecName: "logs") pod "440a9ff6-14b2-4205-bdd4-4e4861d236a9" (UID: "440a9ff6-14b2-4205-bdd4-4e4861d236a9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.026235 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2cbc26a8-c7dd-4d9d-bc1b-32e593fc6251-logs\") pod \"nova-api-0\" (UID: \"2cbc26a8-c7dd-4d9d-bc1b-32e593fc6251\") " pod="openstack/nova-api-0" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.027036 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cbc26a8-c7dd-4d9d-bc1b-32e593fc6251-config-data\") pod \"nova-api-0\" (UID: \"2cbc26a8-c7dd-4d9d-bc1b-32e593fc6251\") " pod="openstack/nova-api-0" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.027557 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cbc26a8-c7dd-4d9d-bc1b-32e593fc6251-internal-tls-certs\") pod \"nova-api-0\" (UID: \"2cbc26a8-c7dd-4d9d-bc1b-32e593fc6251\") " pod="openstack/nova-api-0" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.028528 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/440a9ff6-14b2-4205-bdd4-4e4861d236a9-kube-api-access-kz8ns" (OuterVolumeSpecName: "kube-api-access-kz8ns") pod "440a9ff6-14b2-4205-bdd4-4e4861d236a9" (UID: "440a9ff6-14b2-4205-bdd4-4e4861d236a9"). InnerVolumeSpecName "kube-api-access-kz8ns". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.030909 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cbc26a8-c7dd-4d9d-bc1b-32e593fc6251-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2cbc26a8-c7dd-4d9d-bc1b-32e593fc6251\") " pod="openstack/nova-api-0" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.032058 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cbc26a8-c7dd-4d9d-bc1b-32e593fc6251-public-tls-certs\") pod \"nova-api-0\" (UID: \"2cbc26a8-c7dd-4d9d-bc1b-32e593fc6251\") " pod="openstack/nova-api-0" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.042993 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ml9l\" (UniqueName: \"kubernetes.io/projected/2cbc26a8-c7dd-4d9d-bc1b-32e593fc6251-kube-api-access-6ml9l\") pod \"nova-api-0\" (UID: \"2cbc26a8-c7dd-4d9d-bc1b-32e593fc6251\") " pod="openstack/nova-api-0" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.067010 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/440a9ff6-14b2-4205-bdd4-4e4861d236a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "440a9ff6-14b2-4205-bdd4-4e4861d236a9" (UID: "440a9ff6-14b2-4205-bdd4-4e4861d236a9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.083404 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/440a9ff6-14b2-4205-bdd4-4e4861d236a9-config-data" (OuterVolumeSpecName: "config-data") pod "440a9ff6-14b2-4205-bdd4-4e4861d236a9" (UID: "440a9ff6-14b2-4205-bdd4-4e4861d236a9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.102903 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/440a9ff6-14b2-4205-bdd4-4e4861d236a9-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "440a9ff6-14b2-4205-bdd4-4e4861d236a9" (UID: "440a9ff6-14b2-4205-bdd4-4e4861d236a9"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.123277 4806 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/440a9ff6-14b2-4205-bdd4-4e4861d236a9-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.123341 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kz8ns\" (UniqueName: \"kubernetes.io/projected/440a9ff6-14b2-4205-bdd4-4e4861d236a9-kube-api-access-kz8ns\") on node \"crc\" DevicePath \"\"" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.123355 4806 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/440a9ff6-14b2-4205-bdd4-4e4861d236a9-logs\") on node \"crc\" DevicePath \"\"" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.123366 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/440a9ff6-14b2-4205-bdd4-4e4861d236a9-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.123376 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/440a9ff6-14b2-4205-bdd4-4e4861d236a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.189957 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.443497 4806 generic.go:334] "Generic (PLEG): container finished" podID="440a9ff6-14b2-4205-bdd4-4e4861d236a9" containerID="7d244ac6bacfe54898c1a0aede11a32ab58c14a64144e61a89d7600ed3f6fc35" exitCode=0 Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.443851 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"440a9ff6-14b2-4205-bdd4-4e4861d236a9","Type":"ContainerDied","Data":"7d244ac6bacfe54898c1a0aede11a32ab58c14a64144e61a89d7600ed3f6fc35"} Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.443881 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"440a9ff6-14b2-4205-bdd4-4e4861d236a9","Type":"ContainerDied","Data":"fffa68be3649e2f080a2101b7c29cdee6c0d5a23825d1ff4cce277aa6e6c1cc8"} Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.443901 4806 scope.go:117] "RemoveContainer" containerID="7d244ac6bacfe54898c1a0aede11a32ab58c14a64144e61a89d7600ed3f6fc35" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.444023 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.458920 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e6705187-ba84-405e-9d7a-6e3b97e1b9f3","Type":"ContainerStarted","Data":"e4083a05ce889251b91e4c0e2e5719a737cdd5ba661575653db9715924873861"} Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.458957 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e6705187-ba84-405e-9d7a-6e3b97e1b9f3","Type":"ContainerStarted","Data":"a3d64fdd76ff9294afd9b930d53f7773a8f7f0892c36f800bece6bd383cd5b30"} Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.490588 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.490556057 podStartE2EDuration="2.490556057s" podCreationTimestamp="2025-11-25 15:19:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:19:07.475170676 +0000 UTC m=+1580.127313107" watchObservedRunningTime="2025-11-25 15:19:07.490556057 +0000 UTC m=+1580.142698468" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.519624 4806 scope.go:117] "RemoveContainer" containerID="8b9f233170f15daa19ac1f91e6ecefc9af17b1f6935b0f6fb3cdfce85f2c829a" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.551725 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.566666 4806 scope.go:117] "RemoveContainer" containerID="7d244ac6bacfe54898c1a0aede11a32ab58c14a64144e61a89d7600ed3f6fc35" Nov 25 15:19:07 crc kubenswrapper[4806]: E1125 15:19:07.567711 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d244ac6bacfe54898c1a0aede11a32ab58c14a64144e61a89d7600ed3f6fc35\": container with ID starting with 7d244ac6bacfe54898c1a0aede11a32ab58c14a64144e61a89d7600ed3f6fc35 not found: ID does not exist" containerID="7d244ac6bacfe54898c1a0aede11a32ab58c14a64144e61a89d7600ed3f6fc35" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.567760 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d244ac6bacfe54898c1a0aede11a32ab58c14a64144e61a89d7600ed3f6fc35"} err="failed to get container status \"7d244ac6bacfe54898c1a0aede11a32ab58c14a64144e61a89d7600ed3f6fc35\": rpc error: code = NotFound desc = could not find container \"7d244ac6bacfe54898c1a0aede11a32ab58c14a64144e61a89d7600ed3f6fc35\": container with ID starting with 7d244ac6bacfe54898c1a0aede11a32ab58c14a64144e61a89d7600ed3f6fc35 not found: ID does not exist" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.567799 4806 scope.go:117] "RemoveContainer" containerID="8b9f233170f15daa19ac1f91e6ecefc9af17b1f6935b0f6fb3cdfce85f2c829a" Nov 25 15:19:07 crc kubenswrapper[4806]: E1125 15:19:07.568292 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b9f233170f15daa19ac1f91e6ecefc9af17b1f6935b0f6fb3cdfce85f2c829a\": container with ID starting with 8b9f233170f15daa19ac1f91e6ecefc9af17b1f6935b0f6fb3cdfce85f2c829a not found: ID does not exist" containerID="8b9f233170f15daa19ac1f91e6ecefc9af17b1f6935b0f6fb3cdfce85f2c829a" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.568361 4806 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"8b9f233170f15daa19ac1f91e6ecefc9af17b1f6935b0f6fb3cdfce85f2c829a"} err="failed to get container status \"8b9f233170f15daa19ac1f91e6ecefc9af17b1f6935b0f6fb3cdfce85f2c829a\": rpc error: code = NotFound desc = could not find container \"8b9f233170f15daa19ac1f91e6ecefc9af17b1f6935b0f6fb3cdfce85f2c829a\": container with ID starting with 8b9f233170f15daa19ac1f91e6ecefc9af17b1f6935b0f6fb3cdfce85f2c829a not found: ID does not exist" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.590492 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.606094 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 25 15:19:07 crc kubenswrapper[4806]: E1125 15:19:07.606862 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="440a9ff6-14b2-4205-bdd4-4e4861d236a9" containerName="nova-metadata-log" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.606886 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="440a9ff6-14b2-4205-bdd4-4e4861d236a9" containerName="nova-metadata-log" Nov 25 15:19:07 crc kubenswrapper[4806]: E1125 15:19:07.606912 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="440a9ff6-14b2-4205-bdd4-4e4861d236a9" containerName="nova-metadata-metadata" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.606922 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="440a9ff6-14b2-4205-bdd4-4e4861d236a9" containerName="nova-metadata-metadata" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.607207 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="440a9ff6-14b2-4205-bdd4-4e4861d236a9" containerName="nova-metadata-metadata" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.607233 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="440a9ff6-14b2-4205-bdd4-4e4861d236a9" containerName="nova-metadata-log" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.608743 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.612102 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.612373 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.617458 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.684167 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 15:19:07 crc kubenswrapper[4806]: E1125 15:19:07.735702 4806 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod440a9ff6_14b2_4205_bdd4_4e4861d236a9.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod440a9ff6_14b2_4205_bdd4_4e4861d236a9.slice/crio-fffa68be3649e2f080a2101b7c29cdee6c0d5a23825d1ff4cce277aa6e6c1cc8\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod569f4221_7042_41a7_a783_a975cc7a02b4.slice\": RecentStats: unable to find data in memory cache]" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.743150 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a41e572-3193-4163-81ab-e3ee7b072461-logs\") pod \"nova-metadata-0\" (UID: \"0a41e572-3193-4163-81ab-e3ee7b072461\") " pod="openstack/nova-metadata-0" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.743216 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a41e572-3193-4163-81ab-e3ee7b072461-config-data\") pod \"nova-metadata-0\" (UID: \"0a41e572-3193-4163-81ab-e3ee7b072461\") " pod="openstack/nova-metadata-0" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.743416 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a41e572-3193-4163-81ab-e3ee7b072461-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0a41e572-3193-4163-81ab-e3ee7b072461\") " pod="openstack/nova-metadata-0" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.743448 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnfts\" (UniqueName: \"kubernetes.io/projected/0a41e572-3193-4163-81ab-e3ee7b072461-kube-api-access-vnfts\") pod \"nova-metadata-0\" (UID: \"0a41e572-3193-4163-81ab-e3ee7b072461\") " pod="openstack/nova-metadata-0" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.743504 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a41e572-3193-4163-81ab-e3ee7b072461-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0a41e572-3193-4163-81ab-e3ee7b072461\") " pod="openstack/nova-metadata-0" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.844944 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/0a41e572-3193-4163-81ab-e3ee7b072461-logs\") pod \"nova-metadata-0\" (UID: \"0a41e572-3193-4163-81ab-e3ee7b072461\") " pod="openstack/nova-metadata-0" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.845509 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a41e572-3193-4163-81ab-e3ee7b072461-config-data\") pod \"nova-metadata-0\" (UID: \"0a41e572-3193-4163-81ab-e3ee7b072461\") " pod="openstack/nova-metadata-0" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.845679 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a41e572-3193-4163-81ab-e3ee7b072461-logs\") pod \"nova-metadata-0\" (UID: \"0a41e572-3193-4163-81ab-e3ee7b072461\") " pod="openstack/nova-metadata-0" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.845799 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a41e572-3193-4163-81ab-e3ee7b072461-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0a41e572-3193-4163-81ab-e3ee7b072461\") " pod="openstack/nova-metadata-0" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.845934 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnfts\" (UniqueName: \"kubernetes.io/projected/0a41e572-3193-4163-81ab-e3ee7b072461-kube-api-access-vnfts\") pod \"nova-metadata-0\" (UID: \"0a41e572-3193-4163-81ab-e3ee7b072461\") " pod="openstack/nova-metadata-0" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.846094 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a41e572-3193-4163-81ab-e3ee7b072461-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0a41e572-3193-4163-81ab-e3ee7b072461\") " pod="openstack/nova-metadata-0" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.852154 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a41e572-3193-4163-81ab-e3ee7b072461-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"0a41e572-3193-4163-81ab-e3ee7b072461\") " pod="openstack/nova-metadata-0" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.864194 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a41e572-3193-4163-81ab-e3ee7b072461-config-data\") pod \"nova-metadata-0\" (UID: \"0a41e572-3193-4163-81ab-e3ee7b072461\") " pod="openstack/nova-metadata-0" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.869087 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a41e572-3193-4163-81ab-e3ee7b072461-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0a41e572-3193-4163-81ab-e3ee7b072461\") " pod="openstack/nova-metadata-0" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.871279 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnfts\" (UniqueName: \"kubernetes.io/projected/0a41e572-3193-4163-81ab-e3ee7b072461-kube-api-access-vnfts\") pod \"nova-metadata-0\" (UID: \"0a41e572-3193-4163-81ab-e3ee7b072461\") " pod="openstack/nova-metadata-0" Nov 25 15:19:07 crc kubenswrapper[4806]: I1125 15:19:07.943825 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 15:19:08 crc kubenswrapper[4806]: I1125 15:19:08.108797 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="440a9ff6-14b2-4205-bdd4-4e4861d236a9" path="/var/lib/kubelet/pods/440a9ff6-14b2-4205-bdd4-4e4861d236a9/volumes" Nov 25 15:19:08 crc kubenswrapper[4806]: I1125 15:19:08.109966 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c130d7c1-8c6c-4b0d-b172-64872647a752" path="/var/lib/kubelet/pods/c130d7c1-8c6c-4b0d-b172-64872647a752/volumes" Nov 25 15:19:08 crc kubenswrapper[4806]: I1125 15:19:08.435163 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 15:19:08 crc kubenswrapper[4806]: I1125 15:19:08.476849 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2cbc26a8-c7dd-4d9d-bc1b-32e593fc6251","Type":"ContainerStarted","Data":"4e08690fd21c6afe5f1fbe5061f81d8af6d9dec43c10e6fa1175ea5aea387b64"} Nov 25 15:19:08 crc kubenswrapper[4806]: I1125 15:19:08.476906 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2cbc26a8-c7dd-4d9d-bc1b-32e593fc6251","Type":"ContainerStarted","Data":"3c812f590eb98c0870cf5327d886fa7248d572f56d87395325115864f1645462"} Nov 25 15:19:08 crc kubenswrapper[4806]: I1125 15:19:08.476921 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2cbc26a8-c7dd-4d9d-bc1b-32e593fc6251","Type":"ContainerStarted","Data":"00ec2504a21daf57465bb3177a803b97d0fca31d72f3582473be0feef45dbebb"} Nov 25 15:19:08 crc kubenswrapper[4806]: I1125 15:19:08.496698 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.496679519 podStartE2EDuration="2.496679519s" podCreationTimestamp="2025-11-25 15:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:19:08.493431566 +0000 UTC m=+1581.145573987" watchObservedRunningTime="2025-11-25 15:19:08.496679519 +0000 UTC m=+1581.148821930" Nov 25 15:19:08 crc kubenswrapper[4806]: I1125 15:19:08.504553 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0a41e572-3193-4163-81ab-e3ee7b072461","Type":"ContainerStarted","Data":"dcdab076549c0ce8451b086302a8d7e9f1f17ac8e8a57820cd86087d8467fdab"} Nov 25 15:19:09 crc kubenswrapper[4806]: I1125 15:19:09.516774 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0a41e572-3193-4163-81ab-e3ee7b072461","Type":"ContainerStarted","Data":"62ab4b4bd04b78355031af18e2e9caafb26d5bb355f0035a8aa1a7d61a2dcc10"} Nov 25 15:19:09 crc kubenswrapper[4806]: I1125 15:19:09.517080 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0a41e572-3193-4163-81ab-e3ee7b072461","Type":"ContainerStarted","Data":"b3bf921d2efbfe2945398413737237d282b1b0240463b73c7a6b5225761a07b2"} Nov 25 15:19:09 crc kubenswrapper[4806]: I1125 15:19:09.546027 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.54600309 podStartE2EDuration="2.54600309s" podCreationTimestamp="2025-11-25 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:19:09.53415691 +0000 UTC m=+1582.186299341" 
watchObservedRunningTime="2025-11-25 15:19:09.54600309 +0000 UTC m=+1582.198145501" Nov 25 15:19:10 crc kubenswrapper[4806]: I1125 15:19:10.831240 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 25 15:19:12 crc kubenswrapper[4806]: I1125 15:19:12.945185 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 15:19:12 crc kubenswrapper[4806]: I1125 15:19:12.945561 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 15:19:15 crc kubenswrapper[4806]: I1125 15:19:15.831960 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 25 15:19:15 crc kubenswrapper[4806]: I1125 15:19:15.863937 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 25 15:19:16 crc kubenswrapper[4806]: I1125 15:19:16.614355 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 25 15:19:17 crc kubenswrapper[4806]: I1125 15:19:17.190385 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 15:19:17 crc kubenswrapper[4806]: I1125 15:19:17.190768 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 15:19:17 crc kubenswrapper[4806]: I1125 15:19:17.944392 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 25 15:19:17 crc kubenswrapper[4806]: I1125 15:19:17.944465 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 25 15:19:18 crc kubenswrapper[4806]: E1125 15:19:18.064610 4806 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod569f4221_7042_41a7_a783_a975cc7a02b4.slice\": RecentStats: unable to find data in memory cache]" Nov 25 15:19:18 crc kubenswrapper[4806]: I1125 15:19:18.209741 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2cbc26a8-c7dd-4d9d-bc1b-32e593fc6251" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.228:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 15:19:18 crc kubenswrapper[4806]: I1125 15:19:18.209750 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2cbc26a8-c7dd-4d9d-bc1b-32e593fc6251" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.228:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 15:19:18 crc kubenswrapper[4806]: I1125 15:19:18.963542 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0a41e572-3193-4163-81ab-e3ee7b072461" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.229:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 15:19:18 crc kubenswrapper[4806]: I1125 15:19:18.963559 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="0a41e572-3193-4163-81ab-e3ee7b072461" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.229:8775/\": net/http: request canceled (Client.Timeout exceeded 
while awaiting headers)" Nov 25 15:19:27 crc kubenswrapper[4806]: I1125 15:19:27.031438 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 25 15:19:27 crc kubenswrapper[4806]: I1125 15:19:27.199644 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 25 15:19:27 crc kubenswrapper[4806]: I1125 15:19:27.200667 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 25 15:19:27 crc kubenswrapper[4806]: I1125 15:19:27.201626 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 25 15:19:27 crc kubenswrapper[4806]: I1125 15:19:27.206756 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 25 15:19:27 crc kubenswrapper[4806]: I1125 15:19:27.787673 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 25 15:19:27 crc kubenswrapper[4806]: I1125 15:19:27.799238 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 25 15:19:27 crc kubenswrapper[4806]: I1125 15:19:27.958108 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 25 15:19:27 crc kubenswrapper[4806]: I1125 15:19:27.961804 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 25 15:19:27 crc kubenswrapper[4806]: I1125 15:19:27.965512 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 25 15:19:28 crc kubenswrapper[4806]: E1125 15:19:28.415308 4806 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod569f4221_7042_41a7_a783_a975cc7a02b4.slice\": RecentStats: unable to find data in memory cache]" Nov 25 15:19:28 crc kubenswrapper[4806]: I1125 15:19:28.804073 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 25 15:19:38 crc kubenswrapper[4806]: E1125 15:19:38.691844 4806 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod569f4221_7042_41a7_a783_a975cc7a02b4.slice\": RecentStats: unable to find data in memory cache]" Nov 25 15:19:39 crc kubenswrapper[4806]: I1125 15:19:39.598581 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-sync-drlb4"] Nov 25 15:19:39 crc kubenswrapper[4806]: I1125 15:19:39.608349 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-sync-drlb4"] Nov 25 15:19:39 crc kubenswrapper[4806]: I1125 15:19:39.729535 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-sync-x7mzr"] Nov 25 15:19:39 crc kubenswrapper[4806]: I1125 15:19:39.732196 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-x7mzr" Nov 25 15:19:39 crc kubenswrapper[4806]: I1125 15:19:39.736638 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 25 15:19:39 crc kubenswrapper[4806]: I1125 15:19:39.760520 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-x7mzr"] Nov 25 15:19:39 crc kubenswrapper[4806]: I1125 15:19:39.908461 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c180594-82cd-4e18-932d-c5427040362c-scripts\") pod \"cloudkitty-db-sync-x7mzr\" (UID: \"8c180594-82cd-4e18-932d-c5427040362c\") " pod="openstack/cloudkitty-db-sync-x7mzr" Nov 25 15:19:39 crc kubenswrapper[4806]: I1125 15:19:39.908514 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c180594-82cd-4e18-932d-c5427040362c-config-data\") pod \"cloudkitty-db-sync-x7mzr\" (UID: \"8c180594-82cd-4e18-932d-c5427040362c\") " pod="openstack/cloudkitty-db-sync-x7mzr" Nov 25 15:19:39 crc kubenswrapper[4806]: I1125 15:19:39.908697 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w4dk\" (UniqueName: \"kubernetes.io/projected/8c180594-82cd-4e18-932d-c5427040362c-kube-api-access-2w4dk\") pod \"cloudkitty-db-sync-x7mzr\" (UID: \"8c180594-82cd-4e18-932d-c5427040362c\") " pod="openstack/cloudkitty-db-sync-x7mzr" Nov 25 15:19:39 crc kubenswrapper[4806]: I1125 15:19:39.908757 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/8c180594-82cd-4e18-932d-c5427040362c-certs\") pod \"cloudkitty-db-sync-x7mzr\" (UID: \"8c180594-82cd-4e18-932d-c5427040362c\") " pod="openstack/cloudkitty-db-sync-x7mzr" Nov 25 15:19:39 crc kubenswrapper[4806]: I1125 15:19:39.908905 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c180594-82cd-4e18-932d-c5427040362c-combined-ca-bundle\") pod \"cloudkitty-db-sync-x7mzr\" (UID: \"8c180594-82cd-4e18-932d-c5427040362c\") " pod="openstack/cloudkitty-db-sync-x7mzr" Nov 25 15:19:40 crc kubenswrapper[4806]: I1125 15:19:40.011494 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w4dk\" (UniqueName: \"kubernetes.io/projected/8c180594-82cd-4e18-932d-c5427040362c-kube-api-access-2w4dk\") pod \"cloudkitty-db-sync-x7mzr\" (UID: \"8c180594-82cd-4e18-932d-c5427040362c\") " pod="openstack/cloudkitty-db-sync-x7mzr" Nov 25 15:19:40 crc kubenswrapper[4806]: I1125 15:19:40.011580 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/8c180594-82cd-4e18-932d-c5427040362c-certs\") pod \"cloudkitty-db-sync-x7mzr\" (UID: \"8c180594-82cd-4e18-932d-c5427040362c\") " pod="openstack/cloudkitty-db-sync-x7mzr" Nov 25 15:19:40 crc kubenswrapper[4806]: I1125 15:19:40.011648 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c180594-82cd-4e18-932d-c5427040362c-combined-ca-bundle\") pod \"cloudkitty-db-sync-x7mzr\" (UID: \"8c180594-82cd-4e18-932d-c5427040362c\") " pod="openstack/cloudkitty-db-sync-x7mzr" Nov 25 15:19:40 crc kubenswrapper[4806]: I1125 
15:19:40.011733 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c180594-82cd-4e18-932d-c5427040362c-scripts\") pod \"cloudkitty-db-sync-x7mzr\" (UID: \"8c180594-82cd-4e18-932d-c5427040362c\") " pod="openstack/cloudkitty-db-sync-x7mzr" Nov 25 15:19:40 crc kubenswrapper[4806]: I1125 15:19:40.011765 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c180594-82cd-4e18-932d-c5427040362c-config-data\") pod \"cloudkitty-db-sync-x7mzr\" (UID: \"8c180594-82cd-4e18-932d-c5427040362c\") " pod="openstack/cloudkitty-db-sync-x7mzr" Nov 25 15:19:40 crc kubenswrapper[4806]: I1125 15:19:40.019034 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c180594-82cd-4e18-932d-c5427040362c-scripts\") pod \"cloudkitty-db-sync-x7mzr\" (UID: \"8c180594-82cd-4e18-932d-c5427040362c\") " pod="openstack/cloudkitty-db-sync-x7mzr" Nov 25 15:19:40 crc kubenswrapper[4806]: I1125 15:19:40.019433 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c180594-82cd-4e18-932d-c5427040362c-combined-ca-bundle\") pod \"cloudkitty-db-sync-x7mzr\" (UID: \"8c180594-82cd-4e18-932d-c5427040362c\") " pod="openstack/cloudkitty-db-sync-x7mzr" Nov 25 15:19:40 crc kubenswrapper[4806]: I1125 15:19:40.019582 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c180594-82cd-4e18-932d-c5427040362c-config-data\") pod \"cloudkitty-db-sync-x7mzr\" (UID: \"8c180594-82cd-4e18-932d-c5427040362c\") " pod="openstack/cloudkitty-db-sync-x7mzr" Nov 25 15:19:40 crc kubenswrapper[4806]: I1125 15:19:40.023080 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/8c180594-82cd-4e18-932d-c5427040362c-certs\") pod \"cloudkitty-db-sync-x7mzr\" (UID: \"8c180594-82cd-4e18-932d-c5427040362c\") " pod="openstack/cloudkitty-db-sync-x7mzr" Nov 25 15:19:40 crc kubenswrapper[4806]: I1125 15:19:40.030893 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w4dk\" (UniqueName: \"kubernetes.io/projected/8c180594-82cd-4e18-932d-c5427040362c-kube-api-access-2w4dk\") pod \"cloudkitty-db-sync-x7mzr\" (UID: \"8c180594-82cd-4e18-932d-c5427040362c\") " pod="openstack/cloudkitty-db-sync-x7mzr" Nov 25 15:19:40 crc kubenswrapper[4806]: I1125 15:19:40.072637 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-x7mzr" Nov 25 15:19:40 crc kubenswrapper[4806]: I1125 15:19:40.107453 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2503ad9-21ed-44c9-ae5a-25307c751865" path="/var/lib/kubelet/pods/c2503ad9-21ed-44c9-ae5a-25307c751865/volumes" Nov 25 15:19:40 crc kubenswrapper[4806]: I1125 15:19:40.713035 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-x7mzr"] Nov 25 15:19:40 crc kubenswrapper[4806]: I1125 15:19:40.924589 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-x7mzr" event={"ID":"8c180594-82cd-4e18-932d-c5427040362c","Type":"ContainerStarted","Data":"d8e2c9766a212bb7e74919f9d39cb98b1c87a329a87f40286189bf43619ceefd"} Nov 25 15:19:41 crc kubenswrapper[4806]: I1125 15:19:41.514100 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 15:19:41 crc kubenswrapper[4806]: I1125 15:19:41.690032 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:19:41 crc kubenswrapper[4806]: I1125 15:19:41.690434 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="770f3c08-052f-4538-a297-806acad940ef" containerName="sg-core" containerID="cri-o://3f427556cf187414d4eda212da953532e585203d9e599e177f1e9d54eee99022" gracePeriod=30 Nov 25 15:19:41 crc kubenswrapper[4806]: I1125 15:19:41.690479 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="770f3c08-052f-4538-a297-806acad940ef" containerName="ceilometer-notification-agent" containerID="cri-o://30c0d6fcf97bc38ee39c890329d00d65e2051fa3740163231667201e8e7f4130" gracePeriod=30 Nov 25 15:19:41 crc kubenswrapper[4806]: I1125 15:19:41.690604 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="770f3c08-052f-4538-a297-806acad940ef" containerName="proxy-httpd" containerID="cri-o://87e86d907adffc3e9b7ad4fc41b0ea358b9ac1a0750161de51c6cac3a3793985" gracePeriod=30 Nov 25 15:19:41 crc kubenswrapper[4806]: I1125 15:19:41.690612 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="770f3c08-052f-4538-a297-806acad940ef" containerName="ceilometer-central-agent" containerID="cri-o://9ae869aefdb5ad687a31e56088a25b24056315132ad8a0122db7fd764db18842" gracePeriod=30 Nov 25 15:19:41 crc kubenswrapper[4806]: I1125 15:19:41.944917 4806 generic.go:334] "Generic (PLEG): container finished" podID="770f3c08-052f-4538-a297-806acad940ef" containerID="3f427556cf187414d4eda212da953532e585203d9e599e177f1e9d54eee99022" exitCode=2 Nov 25 15:19:41 crc kubenswrapper[4806]: I1125 15:19:41.944961 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"770f3c08-052f-4538-a297-806acad940ef","Type":"ContainerDied","Data":"3f427556cf187414d4eda212da953532e585203d9e599e177f1e9d54eee99022"} Nov 25 15:19:42 crc kubenswrapper[4806]: I1125 15:19:42.411054 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 15:19:42 crc kubenswrapper[4806]: I1125 15:19:42.983228 4806 generic.go:334] "Generic (PLEG): container finished" podID="770f3c08-052f-4538-a297-806acad940ef" containerID="87e86d907adffc3e9b7ad4fc41b0ea358b9ac1a0750161de51c6cac3a3793985" exitCode=0 Nov 25 15:19:42 crc kubenswrapper[4806]: I1125 15:19:42.983595 4806 generic.go:334] "Generic (PLEG): 
container finished" podID="770f3c08-052f-4538-a297-806acad940ef" containerID="9ae869aefdb5ad687a31e56088a25b24056315132ad8a0122db7fd764db18842" exitCode=0 Nov 25 15:19:42 crc kubenswrapper[4806]: I1125 15:19:42.983392 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"770f3c08-052f-4538-a297-806acad940ef","Type":"ContainerDied","Data":"87e86d907adffc3e9b7ad4fc41b0ea358b9ac1a0750161de51c6cac3a3793985"} Nov 25 15:19:42 crc kubenswrapper[4806]: I1125 15:19:42.983642 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"770f3c08-052f-4538-a297-806acad940ef","Type":"ContainerDied","Data":"9ae869aefdb5ad687a31e56088a25b24056315132ad8a0122db7fd764db18842"} Nov 25 15:19:44 crc kubenswrapper[4806]: I1125 15:19:44.054237 4806 generic.go:334] "Generic (PLEG): container finished" podID="770f3c08-052f-4538-a297-806acad940ef" containerID="30c0d6fcf97bc38ee39c890329d00d65e2051fa3740163231667201e8e7f4130" exitCode=0 Nov 25 15:19:44 crc kubenswrapper[4806]: I1125 15:19:44.054679 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"770f3c08-052f-4538-a297-806acad940ef","Type":"ContainerDied","Data":"30c0d6fcf97bc38ee39c890329d00d65e2051fa3740163231667201e8e7f4130"} Nov 25 15:19:44 crc kubenswrapper[4806]: I1125 15:19:44.596050 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:19:44 crc kubenswrapper[4806]: I1125 15:19:44.802157 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fqjn\" (UniqueName: \"kubernetes.io/projected/770f3c08-052f-4538-a297-806acad940ef-kube-api-access-7fqjn\") pod \"770f3c08-052f-4538-a297-806acad940ef\" (UID: \"770f3c08-052f-4538-a297-806acad940ef\") " Nov 25 15:19:44 crc kubenswrapper[4806]: I1125 15:19:44.802511 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/770f3c08-052f-4538-a297-806acad940ef-sg-core-conf-yaml\") pod \"770f3c08-052f-4538-a297-806acad940ef\" (UID: \"770f3c08-052f-4538-a297-806acad940ef\") " Nov 25 15:19:44 crc kubenswrapper[4806]: I1125 15:19:44.802596 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/770f3c08-052f-4538-a297-806acad940ef-run-httpd\") pod \"770f3c08-052f-4538-a297-806acad940ef\" (UID: \"770f3c08-052f-4538-a297-806acad940ef\") " Nov 25 15:19:44 crc kubenswrapper[4806]: I1125 15:19:44.802645 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/770f3c08-052f-4538-a297-806acad940ef-combined-ca-bundle\") pod \"770f3c08-052f-4538-a297-806acad940ef\" (UID: \"770f3c08-052f-4538-a297-806acad940ef\") " Nov 25 15:19:44 crc kubenswrapper[4806]: I1125 15:19:44.802728 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/770f3c08-052f-4538-a297-806acad940ef-ceilometer-tls-certs\") pod \"770f3c08-052f-4538-a297-806acad940ef\" (UID: \"770f3c08-052f-4538-a297-806acad940ef\") " Nov 25 15:19:44 crc kubenswrapper[4806]: I1125 15:19:44.802789 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/770f3c08-052f-4538-a297-806acad940ef-scripts\") pod \"770f3c08-052f-4538-a297-806acad940ef\" 
(UID: \"770f3c08-052f-4538-a297-806acad940ef\") " Nov 25 15:19:44 crc kubenswrapper[4806]: I1125 15:19:44.802867 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/770f3c08-052f-4538-a297-806acad940ef-config-data\") pod \"770f3c08-052f-4538-a297-806acad940ef\" (UID: \"770f3c08-052f-4538-a297-806acad940ef\") " Nov 25 15:19:44 crc kubenswrapper[4806]: I1125 15:19:44.803157 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/770f3c08-052f-4538-a297-806acad940ef-log-httpd\") pod \"770f3c08-052f-4538-a297-806acad940ef\" (UID: \"770f3c08-052f-4538-a297-806acad940ef\") " Nov 25 15:19:44 crc kubenswrapper[4806]: I1125 15:19:44.806246 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/770f3c08-052f-4538-a297-806acad940ef-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "770f3c08-052f-4538-a297-806acad940ef" (UID: "770f3c08-052f-4538-a297-806acad940ef"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:19:44 crc kubenswrapper[4806]: I1125 15:19:44.806294 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/770f3c08-052f-4538-a297-806acad940ef-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "770f3c08-052f-4538-a297-806acad940ef" (UID: "770f3c08-052f-4538-a297-806acad940ef"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:19:44 crc kubenswrapper[4806]: I1125 15:19:44.819468 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/770f3c08-052f-4538-a297-806acad940ef-kube-api-access-7fqjn" (OuterVolumeSpecName: "kube-api-access-7fqjn") pod "770f3c08-052f-4538-a297-806acad940ef" (UID: "770f3c08-052f-4538-a297-806acad940ef"). InnerVolumeSpecName "kube-api-access-7fqjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:19:44 crc kubenswrapper[4806]: I1125 15:19:44.823839 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/770f3c08-052f-4538-a297-806acad940ef-scripts" (OuterVolumeSpecName: "scripts") pod "770f3c08-052f-4538-a297-806acad940ef" (UID: "770f3c08-052f-4538-a297-806acad940ef"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:19:44 crc kubenswrapper[4806]: I1125 15:19:44.855431 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/770f3c08-052f-4538-a297-806acad940ef-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "770f3c08-052f-4538-a297-806acad940ef" (UID: "770f3c08-052f-4538-a297-806acad940ef"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:19:44 crc kubenswrapper[4806]: I1125 15:19:44.905736 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fqjn\" (UniqueName: \"kubernetes.io/projected/770f3c08-052f-4538-a297-806acad940ef-kube-api-access-7fqjn\") on node \"crc\" DevicePath \"\"" Nov 25 15:19:44 crc kubenswrapper[4806]: I1125 15:19:44.905778 4806 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/770f3c08-052f-4538-a297-806acad940ef-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 15:19:44 crc kubenswrapper[4806]: I1125 15:19:44.905792 4806 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/770f3c08-052f-4538-a297-806acad940ef-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 15:19:44 crc kubenswrapper[4806]: I1125 15:19:44.905804 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/770f3c08-052f-4538-a297-806acad940ef-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:19:44 crc kubenswrapper[4806]: I1125 15:19:44.905815 4806 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/770f3c08-052f-4538-a297-806acad940ef-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 15:19:44 crc kubenswrapper[4806]: I1125 15:19:44.926613 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/770f3c08-052f-4538-a297-806acad940ef-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "770f3c08-052f-4538-a297-806acad940ef" (UID: "770f3c08-052f-4538-a297-806acad940ef"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:19:44 crc kubenswrapper[4806]: I1125 15:19:44.959466 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/770f3c08-052f-4538-a297-806acad940ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "770f3c08-052f-4538-a297-806acad940ef" (UID: "770f3c08-052f-4538-a297-806acad940ef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:19:45 crc kubenswrapper[4806]: I1125 15:19:45.008508 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/770f3c08-052f-4538-a297-806acad940ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:19:45 crc kubenswrapper[4806]: I1125 15:19:45.008554 4806 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/770f3c08-052f-4538-a297-806acad940ef-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 15:19:45 crc kubenswrapper[4806]: I1125 15:19:45.026042 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/770f3c08-052f-4538-a297-806acad940ef-config-data" (OuterVolumeSpecName: "config-data") pod "770f3c08-052f-4538-a297-806acad940ef" (UID: "770f3c08-052f-4538-a297-806acad940ef"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:19:45 crc kubenswrapper[4806]: I1125 15:19:45.079749 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"770f3c08-052f-4538-a297-806acad940ef","Type":"ContainerDied","Data":"fca507c4bd8daa7a951c0b3911fcdb2732a2c26edd1c680c59d9c8dc494e458e"} Nov 25 15:19:45 crc kubenswrapper[4806]: I1125 15:19:45.079804 4806 scope.go:117] "RemoveContainer" containerID="87e86d907adffc3e9b7ad4fc41b0ea358b9ac1a0750161de51c6cac3a3793985" Nov 25 15:19:45 crc kubenswrapper[4806]: I1125 15:19:45.079959 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:19:45 crc kubenswrapper[4806]: I1125 15:19:45.111566 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/770f3c08-052f-4538-a297-806acad940ef-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:19:45 crc kubenswrapper[4806]: I1125 15:19:45.145521 4806 scope.go:117] "RemoveContainer" containerID="3f427556cf187414d4eda212da953532e585203d9e599e177f1e9d54eee99022" Nov 25 15:19:45 crc kubenswrapper[4806]: I1125 15:19:45.145705 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:19:45 crc kubenswrapper[4806]: I1125 15:19:45.155855 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:19:45 crc kubenswrapper[4806]: I1125 15:19:45.174898 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:19:45 crc kubenswrapper[4806]: E1125 15:19:45.175340 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="770f3c08-052f-4538-a297-806acad940ef" containerName="sg-core" Nov 25 15:19:45 crc kubenswrapper[4806]: I1125 15:19:45.175357 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="770f3c08-052f-4538-a297-806acad940ef" containerName="sg-core" Nov 25 15:19:45 crc kubenswrapper[4806]: E1125 15:19:45.175379 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="770f3c08-052f-4538-a297-806acad940ef" containerName="proxy-httpd" Nov 25 15:19:45 crc kubenswrapper[4806]: I1125 15:19:45.175386 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="770f3c08-052f-4538-a297-806acad940ef" containerName="proxy-httpd" Nov 25 15:19:45 crc kubenswrapper[4806]: E1125 15:19:45.175397 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="770f3c08-052f-4538-a297-806acad940ef" containerName="ceilometer-central-agent" Nov 25 15:19:45 crc kubenswrapper[4806]: I1125 15:19:45.175403 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="770f3c08-052f-4538-a297-806acad940ef" containerName="ceilometer-central-agent" Nov 25 15:19:45 crc kubenswrapper[4806]: E1125 15:19:45.175417 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="770f3c08-052f-4538-a297-806acad940ef" containerName="ceilometer-notification-agent" Nov 25 15:19:45 crc kubenswrapper[4806]: I1125 15:19:45.175423 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="770f3c08-052f-4538-a297-806acad940ef" containerName="ceilometer-notification-agent" Nov 25 15:19:45 crc kubenswrapper[4806]: I1125 15:19:45.175635 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="770f3c08-052f-4538-a297-806acad940ef" containerName="ceilometer-notification-agent" Nov 25 15:19:45 crc kubenswrapper[4806]: I1125 15:19:45.175650 4806 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="770f3c08-052f-4538-a297-806acad940ef" containerName="sg-core" Nov 25 15:19:45 crc kubenswrapper[4806]: I1125 15:19:45.175670 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="770f3c08-052f-4538-a297-806acad940ef" containerName="ceilometer-central-agent" Nov 25 15:19:45 crc kubenswrapper[4806]: I1125 15:19:45.175683 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="770f3c08-052f-4538-a297-806acad940ef" containerName="proxy-httpd" Nov 25 15:19:45 crc kubenswrapper[4806]: I1125 15:19:45.177996 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:19:45 crc kubenswrapper[4806]: I1125 15:19:45.185047 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 15:19:45 crc kubenswrapper[4806]: I1125 15:19:45.185268 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 25 15:19:45 crc kubenswrapper[4806]: I1125 15:19:45.186345 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 15:19:45 crc kubenswrapper[4806]: I1125 15:19:45.196474 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:19:45 crc kubenswrapper[4806]: I1125 15:19:45.330047 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b-log-httpd\") pod \"ceilometer-0\" (UID: \"1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b\") " pod="openstack/ceilometer-0" Nov 25 15:19:45 crc kubenswrapper[4806]: I1125 15:19:45.330123 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b-scripts\") pod \"ceilometer-0\" (UID: \"1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b\") " pod="openstack/ceilometer-0" Nov 25 15:19:45 crc kubenswrapper[4806]: I1125 15:19:45.330160 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwgkd\" (UniqueName: \"kubernetes.io/projected/1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b-kube-api-access-rwgkd\") pod \"ceilometer-0\" (UID: \"1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b\") " pod="openstack/ceilometer-0" Nov 25 15:19:45 crc kubenswrapper[4806]: I1125 15:19:45.330209 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b\") " pod="openstack/ceilometer-0" Nov 25 15:19:45 crc kubenswrapper[4806]: I1125 15:19:45.330252 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b-run-httpd\") pod \"ceilometer-0\" (UID: \"1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b\") " pod="openstack/ceilometer-0" Nov 25 15:19:45 crc kubenswrapper[4806]: I1125 15:19:45.330271 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b\") " pod="openstack/ceilometer-0" Nov 25 
15:19:45 crc kubenswrapper[4806]: I1125 15:19:45.330343 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b-config-data\") pod \"ceilometer-0\" (UID: \"1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b\") " pod="openstack/ceilometer-0" Nov 25 15:19:45 crc kubenswrapper[4806]: I1125 15:19:45.330387 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b\") " pod="openstack/ceilometer-0" Nov 25 15:19:45 crc kubenswrapper[4806]: I1125 15:19:45.432653 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b\") " pod="openstack/ceilometer-0" Nov 25 15:19:45 crc kubenswrapper[4806]: I1125 15:19:45.432735 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b-log-httpd\") pod \"ceilometer-0\" (UID: \"1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b\") " pod="openstack/ceilometer-0" Nov 25 15:19:45 crc kubenswrapper[4806]: I1125 15:19:45.432791 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b-scripts\") pod \"ceilometer-0\" (UID: \"1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b\") " pod="openstack/ceilometer-0" Nov 25 15:19:45 crc kubenswrapper[4806]: I1125 15:19:45.432829 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwgkd\" (UniqueName: \"kubernetes.io/projected/1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b-kube-api-access-rwgkd\") pod \"ceilometer-0\" (UID: \"1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b\") " pod="openstack/ceilometer-0" Nov 25 15:19:45 crc kubenswrapper[4806]: I1125 15:19:45.432891 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b\") " pod="openstack/ceilometer-0" Nov 25 15:19:45 crc kubenswrapper[4806]: I1125 15:19:45.432949 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b-run-httpd\") pod \"ceilometer-0\" (UID: \"1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b\") " pod="openstack/ceilometer-0" Nov 25 15:19:45 crc kubenswrapper[4806]: I1125 15:19:45.432967 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b\") " pod="openstack/ceilometer-0" Nov 25 15:19:45 crc kubenswrapper[4806]: I1125 15:19:45.433028 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b-config-data\") pod \"ceilometer-0\" (UID: \"1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b\") 
" pod="openstack/ceilometer-0" Nov 25 15:19:45 crc kubenswrapper[4806]: I1125 15:19:45.433683 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b-run-httpd\") pod \"ceilometer-0\" (UID: \"1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b\") " pod="openstack/ceilometer-0" Nov 25 15:19:45 crc kubenswrapper[4806]: I1125 15:19:45.433734 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b-log-httpd\") pod \"ceilometer-0\" (UID: \"1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b\") " pod="openstack/ceilometer-0" Nov 25 15:19:45 crc kubenswrapper[4806]: I1125 15:19:45.437494 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b-config-data\") pod \"ceilometer-0\" (UID: \"1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b\") " pod="openstack/ceilometer-0" Nov 25 15:19:45 crc kubenswrapper[4806]: I1125 15:19:45.438114 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b\") " pod="openstack/ceilometer-0" Nov 25 15:19:45 crc kubenswrapper[4806]: I1125 15:19:45.438433 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b\") " pod="openstack/ceilometer-0" Nov 25 15:19:45 crc kubenswrapper[4806]: I1125 15:19:45.445213 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b-scripts\") pod \"ceilometer-0\" (UID: \"1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b\") " pod="openstack/ceilometer-0" Nov 25 15:19:45 crc kubenswrapper[4806]: I1125 15:19:45.445639 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b\") " pod="openstack/ceilometer-0" Nov 25 15:19:45 crc kubenswrapper[4806]: I1125 15:19:45.456022 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwgkd\" (UniqueName: \"kubernetes.io/projected/1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b-kube-api-access-rwgkd\") pod \"ceilometer-0\" (UID: \"1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b\") " pod="openstack/ceilometer-0" Nov 25 15:19:45 crc kubenswrapper[4806]: I1125 15:19:45.512184 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0"
Nov 25 15:19:46 crc kubenswrapper[4806]: I1125 15:19:46.107400 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="770f3c08-052f-4538-a297-806acad940ef" path="/var/lib/kubelet/pods/770f3c08-052f-4538-a297-806acad940ef/volumes"
Nov 25 15:19:46 crc kubenswrapper[4806]: I1125 15:19:46.305212 4806 scope.go:117] "RemoveContainer" containerID="30c0d6fcf97bc38ee39c890329d00d65e2051fa3740163231667201e8e7f4130"
Nov 25 15:19:46 crc kubenswrapper[4806]: I1125 15:19:46.442133 4806 scope.go:117] "RemoveContainer" containerID="9ae869aefdb5ad687a31e56088a25b24056315132ad8a0122db7fd764db18842"
Nov 25 15:19:46 crc kubenswrapper[4806]: I1125 15:19:46.947425 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 25 15:19:47 crc kubenswrapper[4806]: I1125 15:19:47.133454 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b","Type":"ContainerStarted","Data":"04fa21d6265e6caf57b66a342ba2f41bbd17297fe25708d3f8136e007e59f3a2"}
Nov 25 15:19:47 crc kubenswrapper[4806]: I1125 15:19:47.288819 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="05ade21d-01af-4a3c-a82a-83b3861244ec" containerName="rabbitmq" containerID="cri-o://608fef6ec2b49a6ff023781e28b23752c86e3af0b3fcc1ce92cc9bc1b9b06049" gracePeriod=604795
Nov 25 15:19:47 crc kubenswrapper[4806]: I1125 15:19:47.633603 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="973c8ad5-1b21-4972-94ea-d0f4323db012" containerName="rabbitmq" containerID="cri-o://695e4a23d49efd364be9c42bd1fb0bb33b0cf8672424b953cac4023374d96669" gracePeriod=604795
Nov 25 15:19:49 crc kubenswrapper[4806]: I1125 15:19:49.494408 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="05ade21d-01af-4a3c-a82a-83b3861244ec" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused"
Nov 25 15:19:49 crc kubenswrapper[4806]: I1125 15:19:49.901698 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="973c8ad5-1b21-4972-94ea-d0f4323db012" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.107:5671: connect: connection refused"
Nov 25 15:19:54 crc kubenswrapper[4806]: I1125 15:19:54.251182 4806 generic.go:334] "Generic (PLEG): container finished" podID="973c8ad5-1b21-4972-94ea-d0f4323db012" containerID="695e4a23d49efd364be9c42bd1fb0bb33b0cf8672424b953cac4023374d96669" exitCode=0
Nov 25 15:19:54 crc kubenswrapper[4806]: I1125 15:19:54.252201 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"973c8ad5-1b21-4972-94ea-d0f4323db012","Type":"ContainerDied","Data":"695e4a23d49efd364be9c42bd1fb0bb33b0cf8672424b953cac4023374d96669"}
Nov 25 15:19:54 crc kubenswrapper[4806]: I1125 15:19:54.256438 4806 generic.go:334] "Generic (PLEG): container finished" podID="05ade21d-01af-4a3c-a82a-83b3861244ec" containerID="608fef6ec2b49a6ff023781e28b23752c86e3af0b3fcc1ce92cc9bc1b9b06049" exitCode=0
Nov 25 15:19:54 crc kubenswrapper[4806]: I1125 15:19:54.256494 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"05ade21d-01af-4a3c-a82a-83b3861244ec","Type":"ContainerDied","Data":"608fef6ec2b49a6ff023781e28b23752c86e3af0b3fcc1ce92cc9bc1b9b06049"}
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.171881 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.279524 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/05ade21d-01af-4a3c-a82a-83b3861244ec-rabbitmq-confd\") pod \"05ade21d-01af-4a3c-a82a-83b3861244ec\" (UID: \"05ade21d-01af-4a3c-a82a-83b3861244ec\") "
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.279599 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/05ade21d-01af-4a3c-a82a-83b3861244ec-rabbitmq-erlang-cookie\") pod \"05ade21d-01af-4a3c-a82a-83b3861244ec\" (UID: \"05ade21d-01af-4a3c-a82a-83b3861244ec\") "
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.279797 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/05ade21d-01af-4a3c-a82a-83b3861244ec-pod-info\") pod \"05ade21d-01af-4a3c-a82a-83b3861244ec\" (UID: \"05ade21d-01af-4a3c-a82a-83b3861244ec\") "
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.279871 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/05ade21d-01af-4a3c-a82a-83b3861244ec-plugins-conf\") pod \"05ade21d-01af-4a3c-a82a-83b3861244ec\" (UID: \"05ade21d-01af-4a3c-a82a-83b3861244ec\") "
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.279919 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/05ade21d-01af-4a3c-a82a-83b3861244ec-erlang-cookie-secret\") pod \"05ade21d-01af-4a3c-a82a-83b3861244ec\" (UID: \"05ade21d-01af-4a3c-a82a-83b3861244ec\") "
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.279969 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/05ade21d-01af-4a3c-a82a-83b3861244ec-rabbitmq-plugins\") pod \"05ade21d-01af-4a3c-a82a-83b3861244ec\" (UID: \"05ade21d-01af-4a3c-a82a-83b3861244ec\") "
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.280007 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/05ade21d-01af-4a3c-a82a-83b3861244ec-rabbitmq-tls\") pod \"05ade21d-01af-4a3c-a82a-83b3861244ec\" (UID: \"05ade21d-01af-4a3c-a82a-83b3861244ec\") "
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.281918 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-08b47c07-8aef-45be-a189-b0c4efad5f68\") pod \"05ade21d-01af-4a3c-a82a-83b3861244ec\" (UID: \"05ade21d-01af-4a3c-a82a-83b3861244ec\") "
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.281980 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/05ade21d-01af-4a3c-a82a-83b3861244ec-config-data\") pod \"05ade21d-01af-4a3c-a82a-83b3861244ec\" (UID: \"05ade21d-01af-4a3c-a82a-83b3861244ec\") "
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.282025 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wvqm\" (UniqueName: \"kubernetes.io/projected/05ade21d-01af-4a3c-a82a-83b3861244ec-kube-api-access-2wvqm\") pod \"05ade21d-01af-4a3c-a82a-83b3861244ec\" (UID: \"05ade21d-01af-4a3c-a82a-83b3861244ec\") "
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.282091 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/05ade21d-01af-4a3c-a82a-83b3861244ec-server-conf\") pod \"05ade21d-01af-4a3c-a82a-83b3861244ec\" (UID: \"05ade21d-01af-4a3c-a82a-83b3861244ec\") "
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.283598 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05ade21d-01af-4a3c-a82a-83b3861244ec-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "05ade21d-01af-4a3c-a82a-83b3861244ec" (UID: "05ade21d-01af-4a3c-a82a-83b3861244ec"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.289744 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05ade21d-01af-4a3c-a82a-83b3861244ec-kube-api-access-2wvqm" (OuterVolumeSpecName: "kube-api-access-2wvqm") pod "05ade21d-01af-4a3c-a82a-83b3861244ec" (UID: "05ade21d-01af-4a3c-a82a-83b3861244ec"). InnerVolumeSpecName "kube-api-access-2wvqm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.372156 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"05ade21d-01af-4a3c-a82a-83b3861244ec","Type":"ContainerDied","Data":"4f0b4a5d435b188954a361f42a481cc89ebf35c519ba152f75bdb848356826eb"}
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.372212 4806 scope.go:117] "RemoveContainer" containerID="608fef6ec2b49a6ff023781e28b23752c86e3af0b3fcc1ce92cc9bc1b9b06049"
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.372392 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.383497 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05ade21d-01af-4a3c-a82a-83b3861244ec-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "05ade21d-01af-4a3c-a82a-83b3861244ec" (UID: "05ade21d-01af-4a3c-a82a-83b3861244ec"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.383694 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05ade21d-01af-4a3c-a82a-83b3861244ec-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "05ade21d-01af-4a3c-a82a-83b3861244ec" (UID: "05ade21d-01af-4a3c-a82a-83b3861244ec"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.384044 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05ade21d-01af-4a3c-a82a-83b3861244ec-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "05ade21d-01af-4a3c-a82a-83b3861244ec" (UID: "05ade21d-01af-4a3c-a82a-83b3861244ec"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.384592 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/05ade21d-01af-4a3c-a82a-83b3861244ec-pod-info" (OuterVolumeSpecName: "pod-info") pod "05ade21d-01af-4a3c-a82a-83b3861244ec" (UID: "05ade21d-01af-4a3c-a82a-83b3861244ec"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.385082 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05ade21d-01af-4a3c-a82a-83b3861244ec-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "05ade21d-01af-4a3c-a82a-83b3861244ec" (UID: "05ade21d-01af-4a3c-a82a-83b3861244ec"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.387818 4806 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/05ade21d-01af-4a3c-a82a-83b3861244ec-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.387890 4806 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/05ade21d-01af-4a3c-a82a-83b3861244ec-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.387908 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wvqm\" (UniqueName: \"kubernetes.io/projected/05ade21d-01af-4a3c-a82a-83b3861244ec-kube-api-access-2wvqm\") on node \"crc\" DevicePath \"\""
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.387924 4806 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/05ade21d-01af-4a3c-a82a-83b3861244ec-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.387978 4806 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/05ade21d-01af-4a3c-a82a-83b3861244ec-pod-info\") on node \"crc\" DevicePath \"\""
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.387990 4806 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/05ade21d-01af-4a3c-a82a-83b3861244ec-plugins-conf\") on node \"crc\" DevicePath \"\""
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.388000 4806 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/05ade21d-01af-4a3c-a82a-83b3861244ec-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.397572 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05ade21d-01af-4a3c-a82a-83b3861244ec-config-data" (OuterVolumeSpecName: "config-data") pod "05ade21d-01af-4a3c-a82a-83b3861244ec" (UID: "05ade21d-01af-4a3c-a82a-83b3861244ec"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.414639 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05ade21d-01af-4a3c-a82a-83b3861244ec-server-conf" (OuterVolumeSpecName: "server-conf") pod "05ade21d-01af-4a3c-a82a-83b3861244ec" (UID: "05ade21d-01af-4a3c-a82a-83b3861244ec"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.417203 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-08b47c07-8aef-45be-a189-b0c4efad5f68" (OuterVolumeSpecName: "persistence") pod "05ade21d-01af-4a3c-a82a-83b3861244ec" (UID: "05ade21d-01af-4a3c-a82a-83b3861244ec"). InnerVolumeSpecName "pvc-08b47c07-8aef-45be-a189-b0c4efad5f68". PluginName "kubernetes.io/csi", VolumeGidValue ""
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.483462 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05ade21d-01af-4a3c-a82a-83b3861244ec-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "05ade21d-01af-4a3c-a82a-83b3861244ec" (UID: "05ade21d-01af-4a3c-a82a-83b3861244ec"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.489379 4806 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-08b47c07-8aef-45be-a189-b0c4efad5f68\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-08b47c07-8aef-45be-a189-b0c4efad5f68\") on node \"crc\" "
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.495584 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/05ade21d-01af-4a3c-a82a-83b3861244ec-config-data\") on node \"crc\" DevicePath \"\""
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.495614 4806 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/05ade21d-01af-4a3c-a82a-83b3861244ec-server-conf\") on node \"crc\" DevicePath \"\""
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.495625 4806 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/05ade21d-01af-4a3c-a82a-83b3861244ec-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.531925 4806 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.533841 4806 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-08b47c07-8aef-45be-a189-b0c4efad5f68" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-08b47c07-8aef-45be-a189-b0c4efad5f68") on node "crc"
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.597600 4806 reconciler_common.go:293] "Volume detached for volume \"pvc-08b47c07-8aef-45be-a189-b0c4efad5f68\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-08b47c07-8aef-45be-a189-b0c4efad5f68\") on node \"crc\" DevicePath \"\""
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.719787 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.757617 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.787201 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 25 15:20:03 crc kubenswrapper[4806]: E1125 15:20:03.788528 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05ade21d-01af-4a3c-a82a-83b3861244ec" containerName="rabbitmq"
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.788550 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="05ade21d-01af-4a3c-a82a-83b3861244ec" containerName="rabbitmq"
Nov 25 15:20:03 crc kubenswrapper[4806]: E1125 15:20:03.788578 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05ade21d-01af-4a3c-a82a-83b3861244ec" containerName="setup-container"
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.788585 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="05ade21d-01af-4a3c-a82a-83b3861244ec" containerName="setup-container"
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.788937 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="05ade21d-01af-4a3c-a82a-83b3861244ec" containerName="rabbitmq"
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.795076 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.797335 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-nrvl8"
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.798274 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.798572 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.801464 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.801777 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.801948 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.802088 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.804662 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.812996 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/94eec7e9-06e0-4096-8b0e-89a012fb3495-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"94eec7e9-06e0-4096-8b0e-89a012fb3495\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.813080 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/94eec7e9-06e0-4096-8b0e-89a012fb3495-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"94eec7e9-06e0-4096-8b0e-89a012fb3495\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.813102 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/94eec7e9-06e0-4096-8b0e-89a012fb3495-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"94eec7e9-06e0-4096-8b0e-89a012fb3495\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.813144 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/94eec7e9-06e0-4096-8b0e-89a012fb3495-pod-info\") pod \"rabbitmq-server-0\" (UID: \"94eec7e9-06e0-4096-8b0e-89a012fb3495\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.813165 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/94eec7e9-06e0-4096-8b0e-89a012fb3495-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"94eec7e9-06e0-4096-8b0e-89a012fb3495\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.813193 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/94eec7e9-06e0-4096-8b0e-89a012fb3495-config-data\") pod \"rabbitmq-server-0\" (UID: \"94eec7e9-06e0-4096-8b0e-89a012fb3495\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.813215 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/94eec7e9-06e0-4096-8b0e-89a012fb3495-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"94eec7e9-06e0-4096-8b0e-89a012fb3495\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.813282 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmcz6\" (UniqueName: \"kubernetes.io/projected/94eec7e9-06e0-4096-8b0e-89a012fb3495-kube-api-access-vmcz6\") pod \"rabbitmq-server-0\" (UID: \"94eec7e9-06e0-4096-8b0e-89a012fb3495\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.813483 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/94eec7e9-06e0-4096-8b0e-89a012fb3495-server-conf\") pod \"rabbitmq-server-0\" (UID: \"94eec7e9-06e0-4096-8b0e-89a012fb3495\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.813558 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-08b47c07-8aef-45be-a189-b0c4efad5f68\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-08b47c07-8aef-45be-a189-b0c4efad5f68\") pod \"rabbitmq-server-0\" (UID: \"94eec7e9-06e0-4096-8b0e-89a012fb3495\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.813590 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/94eec7e9-06e0-4096-8b0e-89a012fb3495-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"94eec7e9-06e0-4096-8b0e-89a012fb3495\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.915929 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/94eec7e9-06e0-4096-8b0e-89a012fb3495-server-conf\") pod \"rabbitmq-server-0\" (UID: \"94eec7e9-06e0-4096-8b0e-89a012fb3495\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.916028 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-08b47c07-8aef-45be-a189-b0c4efad5f68\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-08b47c07-8aef-45be-a189-b0c4efad5f68\") pod \"rabbitmq-server-0\" (UID: \"94eec7e9-06e0-4096-8b0e-89a012fb3495\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.916060 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/94eec7e9-06e0-4096-8b0e-89a012fb3495-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"94eec7e9-06e0-4096-8b0e-89a012fb3495\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.916102 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/94eec7e9-06e0-4096-8b0e-89a012fb3495-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"94eec7e9-06e0-4096-8b0e-89a012fb3495\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.916137 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/94eec7e9-06e0-4096-8b0e-89a012fb3495-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"94eec7e9-06e0-4096-8b0e-89a012fb3495\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.916156 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/94eec7e9-06e0-4096-8b0e-89a012fb3495-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"94eec7e9-06e0-4096-8b0e-89a012fb3495\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.916198 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/94eec7e9-06e0-4096-8b0e-89a012fb3495-pod-info\") pod \"rabbitmq-server-0\" (UID: \"94eec7e9-06e0-4096-8b0e-89a012fb3495\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.916221 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/94eec7e9-06e0-4096-8b0e-89a012fb3495-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"94eec7e9-06e0-4096-8b0e-89a012fb3495\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.916254 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/94eec7e9-06e0-4096-8b0e-89a012fb3495-config-data\") pod \"rabbitmq-server-0\" (UID: \"94eec7e9-06e0-4096-8b0e-89a012fb3495\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.916279 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/94eec7e9-06e0-4096-8b0e-89a012fb3495-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"94eec7e9-06e0-4096-8b0e-89a012fb3495\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.916358 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmcz6\" (UniqueName: \"kubernetes.io/projected/94eec7e9-06e0-4096-8b0e-89a012fb3495-kube-api-access-vmcz6\") pod \"rabbitmq-server-0\" (UID: \"94eec7e9-06e0-4096-8b0e-89a012fb3495\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.917985 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/94eec7e9-06e0-4096-8b0e-89a012fb3495-server-conf\") pod \"rabbitmq-server-0\" (UID: \"94eec7e9-06e0-4096-8b0e-89a012fb3495\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.918745 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/94eec7e9-06e0-4096-8b0e-89a012fb3495-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"94eec7e9-06e0-4096-8b0e-89a012fb3495\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.922132 4806 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.922171 4806 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-08b47c07-8aef-45be-a189-b0c4efad5f68\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-08b47c07-8aef-45be-a189-b0c4efad5f68\") pod \"rabbitmq-server-0\" (UID: \"94eec7e9-06e0-4096-8b0e-89a012fb3495\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ac4c1e236f0304110737be3b1d19c933a65d0aea2a553d5c5b453beb19db88e7/globalmount\"" pod="openstack/rabbitmq-server-0"
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.924142 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/94eec7e9-06e0-4096-8b0e-89a012fb3495-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"94eec7e9-06e0-4096-8b0e-89a012fb3495\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.924852 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/94eec7e9-06e0-4096-8b0e-89a012fb3495-pod-info\") pod \"rabbitmq-server-0\" (UID: \"94eec7e9-06e0-4096-8b0e-89a012fb3495\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.925336 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/94eec7e9-06e0-4096-8b0e-89a012fb3495-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"94eec7e9-06e0-4096-8b0e-89a012fb3495\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.925571 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/94eec7e9-06e0-4096-8b0e-89a012fb3495-config-data\") pod \"rabbitmq-server-0\" (UID: \"94eec7e9-06e0-4096-8b0e-89a012fb3495\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.925768 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/94eec7e9-06e0-4096-8b0e-89a012fb3495-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"94eec7e9-06e0-4096-8b0e-89a012fb3495\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.928006 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/94eec7e9-06e0-4096-8b0e-89a012fb3495-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"94eec7e9-06e0-4096-8b0e-89a012fb3495\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.933727 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/94eec7e9-06e0-4096-8b0e-89a012fb3495-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"94eec7e9-06e0-4096-8b0e-89a012fb3495\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.934277 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmcz6\" (UniqueName: \"kubernetes.io/projected/94eec7e9-06e0-4096-8b0e-89a012fb3495-kube-api-access-vmcz6\") pod \"rabbitmq-server-0\" (UID: \"94eec7e9-06e0-4096-8b0e-89a012fb3495\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:20:03 crc kubenswrapper[4806]: I1125 15:20:03.988355 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-08b47c07-8aef-45be-a189-b0c4efad5f68\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-08b47c07-8aef-45be-a189-b0c4efad5f68\") pod \"rabbitmq-server-0\" (UID: \"94eec7e9-06e0-4096-8b0e-89a012fb3495\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:20:04 crc kubenswrapper[4806]: I1125 15:20:04.119676 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Nov 25 15:20:04 crc kubenswrapper[4806]: I1125 15:20:04.123975 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05ade21d-01af-4a3c-a82a-83b3861244ec" path="/var/lib/kubelet/pods/05ade21d-01af-4a3c-a82a-83b3861244ec/volumes"
Nov 25 15:20:04 crc kubenswrapper[4806]: I1125 15:20:04.494890 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="05ade21d-01af-4a3c-a82a-83b3861244ec" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: i/o timeout"
Nov 25 15:20:04 crc kubenswrapper[4806]: I1125 15:20:04.902382 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="973c8ad5-1b21-4972-94ea-d0f4323db012" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.107:5671: i/o timeout"
Nov 25 15:20:06 crc kubenswrapper[4806]: I1125 15:20:06.538219 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:20:06 crc kubenswrapper[4806]: I1125 15:20:06.680963 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/973c8ad5-1b21-4972-94ea-d0f4323db012-erlang-cookie-secret\") pod \"973c8ad5-1b21-4972-94ea-d0f4323db012\" (UID: \"973c8ad5-1b21-4972-94ea-d0f4323db012\") "
Nov 25 15:20:06 crc kubenswrapper[4806]: I1125 15:20:06.681488 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/973c8ad5-1b21-4972-94ea-d0f4323db012-rabbitmq-plugins\") pod \"973c8ad5-1b21-4972-94ea-d0f4323db012\" (UID: \"973c8ad5-1b21-4972-94ea-d0f4323db012\") "
Nov 25 15:20:06 crc kubenswrapper[4806]: I1125 15:20:06.681565 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/973c8ad5-1b21-4972-94ea-d0f4323db012-rabbitmq-confd\") pod \"973c8ad5-1b21-4972-94ea-d0f4323db012\" (UID: \"973c8ad5-1b21-4972-94ea-d0f4323db012\") "
Nov 25 15:20:06 crc kubenswrapper[4806]: I1125 15:20:06.681661 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/973c8ad5-1b21-4972-94ea-d0f4323db012-pod-info\") pod \"973c8ad5-1b21-4972-94ea-d0f4323db012\" (UID: \"973c8ad5-1b21-4972-94ea-d0f4323db012\") "
Nov 25 15:20:06 crc kubenswrapper[4806]: I1125 15:20:06.681749 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/973c8ad5-1b21-4972-94ea-d0f4323db012-plugins-conf\") pod \"973c8ad5-1b21-4972-94ea-d0f4323db012\" (UID: \"973c8ad5-1b21-4972-94ea-d0f4323db012\") "
Nov 25 15:20:06 crc kubenswrapper[4806]: I1125 15:20:06.681930 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/973c8ad5-1b21-4972-94ea-d0f4323db012-rabbitmq-erlang-cookie\") pod \"973c8ad5-1b21-4972-94ea-d0f4323db012\" (UID: \"973c8ad5-1b21-4972-94ea-d0f4323db012\") "
Nov 25 15:20:06 crc kubenswrapper[4806]: I1125 15:20:06.682017 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/973c8ad5-1b21-4972-94ea-d0f4323db012-rabbitmq-tls\") pod \"973c8ad5-1b21-4972-94ea-d0f4323db012\" (UID: \"973c8ad5-1b21-4972-94ea-d0f4323db012\") "
Nov 25 15:20:06 crc kubenswrapper[4806]: I1125 15:20:06.682646 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b40b2022-ddd8-4d91-a963-363efca61892\") pod \"973c8ad5-1b21-4972-94ea-d0f4323db012\" (UID: \"973c8ad5-1b21-4972-94ea-d0f4323db012\") "
Nov 25 15:20:06 crc kubenswrapper[4806]: I1125 15:20:06.682735 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/973c8ad5-1b21-4972-94ea-d0f4323db012-server-conf\") pod \"973c8ad5-1b21-4972-94ea-d0f4323db012\" (UID: \"973c8ad5-1b21-4972-94ea-d0f4323db012\") "
Nov 25 15:20:06 crc kubenswrapper[4806]: I1125 15:20:06.682831 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-689b7\" (UniqueName: \"kubernetes.io/projected/973c8ad5-1b21-4972-94ea-d0f4323db012-kube-api-access-689b7\") pod \"973c8ad5-1b21-4972-94ea-d0f4323db012\" (UID: \"973c8ad5-1b21-4972-94ea-d0f4323db012\") "
Nov 25 15:20:06 crc kubenswrapper[4806]: I1125 15:20:06.682875 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/973c8ad5-1b21-4972-94ea-d0f4323db012-config-data\") pod \"973c8ad5-1b21-4972-94ea-d0f4323db012\" (UID: \"973c8ad5-1b21-4972-94ea-d0f4323db012\") "
Nov 25 15:20:06 crc kubenswrapper[4806]: I1125 15:20:06.684411 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/973c8ad5-1b21-4972-94ea-d0f4323db012-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "973c8ad5-1b21-4972-94ea-d0f4323db012" (UID: "973c8ad5-1b21-4972-94ea-d0f4323db012"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 15:20:06 crc kubenswrapper[4806]: I1125 15:20:06.685495 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/973c8ad5-1b21-4972-94ea-d0f4323db012-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "973c8ad5-1b21-4972-94ea-d0f4323db012" (UID: "973c8ad5-1b21-4972-94ea-d0f4323db012"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 15:20:06 crc kubenswrapper[4806]: I1125 15:20:06.686188 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/973c8ad5-1b21-4972-94ea-d0f4323db012-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "973c8ad5-1b21-4972-94ea-d0f4323db012" (UID: "973c8ad5-1b21-4972-94ea-d0f4323db012"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:20:06 crc kubenswrapper[4806]: I1125 15:20:06.692413 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/973c8ad5-1b21-4972-94ea-d0f4323db012-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "973c8ad5-1b21-4972-94ea-d0f4323db012" (UID: "973c8ad5-1b21-4972-94ea-d0f4323db012"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:20:06 crc kubenswrapper[4806]: I1125 15:20:06.704641 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/973c8ad5-1b21-4972-94ea-d0f4323db012-kube-api-access-689b7" (OuterVolumeSpecName: "kube-api-access-689b7") pod "973c8ad5-1b21-4972-94ea-d0f4323db012" (UID: "973c8ad5-1b21-4972-94ea-d0f4323db012"). InnerVolumeSpecName "kube-api-access-689b7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:20:06 crc kubenswrapper[4806]: I1125 15:20:06.705852 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/973c8ad5-1b21-4972-94ea-d0f4323db012-pod-info" (OuterVolumeSpecName: "pod-info") pod "973c8ad5-1b21-4972-94ea-d0f4323db012" (UID: "973c8ad5-1b21-4972-94ea-d0f4323db012"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Nov 25 15:20:06 crc kubenswrapper[4806]: I1125 15:20:06.712799 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b40b2022-ddd8-4d91-a963-363efca61892" (OuterVolumeSpecName: "persistence") pod "973c8ad5-1b21-4972-94ea-d0f4323db012" (UID: "973c8ad5-1b21-4972-94ea-d0f4323db012"). InnerVolumeSpecName "pvc-b40b2022-ddd8-4d91-a963-363efca61892". PluginName "kubernetes.io/csi", VolumeGidValue ""
Nov 25 15:20:06 crc kubenswrapper[4806]: I1125 15:20:06.723810 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/973c8ad5-1b21-4972-94ea-d0f4323db012-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "973c8ad5-1b21-4972-94ea-d0f4323db012" (UID: "973c8ad5-1b21-4972-94ea-d0f4323db012"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:20:06 crc kubenswrapper[4806]: I1125 15:20:06.728139 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/973c8ad5-1b21-4972-94ea-d0f4323db012-config-data" (OuterVolumeSpecName: "config-data") pod "973c8ad5-1b21-4972-94ea-d0f4323db012" (UID: "973c8ad5-1b21-4972-94ea-d0f4323db012"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:20:06 crc kubenswrapper[4806]: I1125 15:20:06.785857 4806 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/973c8ad5-1b21-4972-94ea-d0f4323db012-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Nov 25 15:20:06 crc kubenswrapper[4806]: I1125 15:20:06.786452 4806 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/973c8ad5-1b21-4972-94ea-d0f4323db012-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Nov 25 15:20:06 crc kubenswrapper[4806]: I1125 15:20:06.786567 4806 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/973c8ad5-1b21-4972-94ea-d0f4323db012-pod-info\") on node \"crc\" DevicePath \"\""
Nov 25 15:20:06 crc kubenswrapper[4806]: I1125 15:20:06.786708 4806 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/973c8ad5-1b21-4972-94ea-d0f4323db012-plugins-conf\") on node \"crc\" DevicePath \"\""
Nov 25 15:20:06 crc kubenswrapper[4806]: I1125 15:20:06.786805 4806 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/973c8ad5-1b21-4972-94ea-d0f4323db012-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Nov 25 15:20:06 crc kubenswrapper[4806]: I1125 15:20:06.786895 4806 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/973c8ad5-1b21-4972-94ea-d0f4323db012-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Nov 25 15:20:06 crc kubenswrapper[4806]: I1125 15:20:06.788393 4806 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b40b2022-ddd8-4d91-a963-363efca61892\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b40b2022-ddd8-4d91-a963-363efca61892\") on node \"crc\" "
Nov 25 15:20:06 crc kubenswrapper[4806]: I1125 15:20:06.789707 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-689b7\" (UniqueName: \"kubernetes.io/projected/973c8ad5-1b21-4972-94ea-d0f4323db012-kube-api-access-689b7\") on node \"crc\" DevicePath \"\""
Nov 25 15:20:06 crc kubenswrapper[4806]: I1125 15:20:06.790016 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/973c8ad5-1b21-4972-94ea-d0f4323db012-config-data\") on node \"crc\" DevicePath \"\""
Nov 25 15:20:06 crc kubenswrapper[4806]: I1125 15:20:06.791426 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/973c8ad5-1b21-4972-94ea-d0f4323db012-server-conf" (OuterVolumeSpecName: "server-conf") pod "973c8ad5-1b21-4972-94ea-d0f4323db012" (UID: "973c8ad5-1b21-4972-94ea-d0f4323db012"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:20:06 crc kubenswrapper[4806]: I1125 15:20:06.848035 4806 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Nov 25 15:20:06 crc kubenswrapper[4806]: I1125 15:20:06.848956 4806 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b40b2022-ddd8-4d91-a963-363efca61892" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b40b2022-ddd8-4d91-a963-363efca61892") on node "crc"
Nov 25 15:20:06 crc kubenswrapper[4806]: I1125 15:20:06.863978 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/973c8ad5-1b21-4972-94ea-d0f4323db012-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "973c8ad5-1b21-4972-94ea-d0f4323db012" (UID: "973c8ad5-1b21-4972-94ea-d0f4323db012"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:20:06 crc kubenswrapper[4806]: I1125 15:20:06.895520 4806 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/973c8ad5-1b21-4972-94ea-d0f4323db012-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Nov 25 15:20:06 crc kubenswrapper[4806]: I1125 15:20:06.895564 4806 reconciler_common.go:293] "Volume detached for volume \"pvc-b40b2022-ddd8-4d91-a963-363efca61892\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b40b2022-ddd8-4d91-a963-363efca61892\") on node \"crc\" DevicePath \"\""
Nov 25 15:20:06 crc kubenswrapper[4806]: I1125 15:20:06.895580 4806 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/973c8ad5-1b21-4972-94ea-d0f4323db012-server-conf\") on node \"crc\" DevicePath \"\""
Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.416229 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"973c8ad5-1b21-4972-94ea-d0f4323db012","Type":"ContainerDied","Data":"b1a23c2f3bd4b845252043116dfb1b54d99fd0701fd4f45f6e570b72bd07b88a"}
Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.416291 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:20:07 crc kubenswrapper[4806]: E1125 15:20:07.447149 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested"
Nov 25 15:20:07 crc kubenswrapper[4806]: E1125 15:20:07.447618 4806 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested"
Nov 25 15:20:07 crc kubenswrapper[4806]: E1125 15:20:07.447780 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2w4dk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-x7mzr_openstack(8c180594-82cd-4e18-932d-c5427040362c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 25 15:20:07 crc kubenswrapper[4806]: E1125 15:20:07.449166 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cloudkitty-db-sync-x7mzr" podUID="8c180594-82cd-4e18-932d-c5427040362c"
Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.530180 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.540330 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.583828 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 25 15:20:07 crc kubenswrapper[4806]: E1125 15:20:07.584683 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="973c8ad5-1b21-4972-94ea-d0f4323db012" containerName="setup-container"
Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.584704 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="973c8ad5-1b21-4972-94ea-d0f4323db012" containerName="setup-container"
Nov 25 15:20:07 crc kubenswrapper[4806]: E1125 15:20:07.584945 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="973c8ad5-1b21-4972-94ea-d0f4323db012" containerName="rabbitmq"
Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.584960 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="973c8ad5-1b21-4972-94ea-d0f4323db012" containerName="rabbitmq"
Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.586487 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="973c8ad5-1b21-4972-94ea-d0f4323db012" containerName="rabbitmq"
Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.590622 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.596947 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.600643 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.600966 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.601729 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.604274 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-cvks2"
Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.604288 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.604370 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.613278 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.620363 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f89c7d3f-93e9-464e-bf10-a2df33402031-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f89c7d3f-93e9-464e-bf10-a2df33402031\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.620436 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b40b2022-ddd8-4d91-a963-363efca61892\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b40b2022-ddd8-4d91-a963-363efca61892\") pod \"rabbitmq-cell1-server-0\" (UID: \"f89c7d3f-93e9-464e-bf10-a2df33402031\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.620516 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f89c7d3f-93e9-464e-bf10-a2df33402031-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f89c7d3f-93e9-464e-bf10-a2df33402031\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.620692 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz6sw\" (UniqueName: \"kubernetes.io/projected/f89c7d3f-93e9-464e-bf10-a2df33402031-kube-api-access-pz6sw\") pod \"rabbitmq-cell1-server-0\" (UID: \"f89c7d3f-93e9-464e-bf10-a2df33402031\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.620808 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f89c7d3f-93e9-464e-bf10-a2df33402031-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f89c7d3f-93e9-464e-bf10-a2df33402031\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.620888 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f89c7d3f-93e9-464e-bf10-a2df33402031-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f89c7d3f-93e9-464e-bf10-a2df33402031\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.620915 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f89c7d3f-93e9-464e-bf10-a2df33402031-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f89c7d3f-93e9-464e-bf10-a2df33402031\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.620950 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f89c7d3f-93e9-464e-bf10-a2df33402031-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f89c7d3f-93e9-464e-bf10-a2df33402031\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.621023 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f89c7d3f-93e9-464e-bf10-a2df33402031-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f89c7d3f-93e9-464e-bf10-a2df33402031\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.621087 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f89c7d3f-93e9-464e-bf10-a2df33402031-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f89c7d3f-93e9-464e-bf10-a2df33402031\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.621187 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f89c7d3f-93e9-464e-bf10-a2df33402031-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f89c7d3f-93e9-464e-bf10-a2df33402031\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.723222 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz6sw\" (UniqueName: \"kubernetes.io/projected/f89c7d3f-93e9-464e-bf10-a2df33402031-kube-api-access-pz6sw\") pod \"rabbitmq-cell1-server-0\" (UID: \"f89c7d3f-93e9-464e-bf10-a2df33402031\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.723308 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f89c7d3f-93e9-464e-bf10-a2df33402031-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f89c7d3f-93e9-464e-bf10-a2df33402031\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.723689 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f89c7d3f-93e9-464e-bf10-a2df33402031-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f89c7d3f-93e9-464e-bf10-a2df33402031\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.723718 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f89c7d3f-93e9-464e-bf10-a2df33402031-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f89c7d3f-93e9-464e-bf10-a2df33402031\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.723767 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f89c7d3f-93e9-464e-bf10-a2df33402031-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f89c7d3f-93e9-464e-bf10-a2df33402031\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.723920 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f89c7d3f-93e9-464e-bf10-a2df33402031-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f89c7d3f-93e9-464e-bf10-a2df33402031\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.724007 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f89c7d3f-93e9-464e-bf10-a2df33402031-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f89c7d3f-93e9-464e-bf10-a2df33402031\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.724140 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f89c7d3f-93e9-464e-bf10-a2df33402031-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f89c7d3f-93e9-464e-bf10-a2df33402031\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.724293 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f89c7d3f-93e9-464e-bf10-a2df33402031-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f89c7d3f-93e9-464e-bf10-a2df33402031\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.724363 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f89c7d3f-93e9-464e-bf10-a2df33402031-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f89c7d3f-93e9-464e-bf10-a2df33402031\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.724384 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b40b2022-ddd8-4d91-a963-363efca61892\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b40b2022-ddd8-4d91-a963-363efca61892\") pod \"rabbitmq-cell1-server-0\" (UID: \"f89c7d3f-93e9-464e-bf10-a2df33402031\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.724447 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f89c7d3f-93e9-464e-bf10-a2df33402031-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f89c7d3f-93e9-464e-bf10-a2df33402031\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.724953 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f89c7d3f-93e9-464e-bf10-a2df33402031-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f89c7d3f-93e9-464e-bf10-a2df33402031\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.725015 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f89c7d3f-93e9-464e-bf10-a2df33402031-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f89c7d3f-93e9-464e-bf10-a2df33402031\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.727196 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f89c7d3f-93e9-464e-bf10-a2df33402031-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f89c7d3f-93e9-464e-bf10-a2df33402031\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.729456 4806 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.729477 4806 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b40b2022-ddd8-4d91-a963-363efca61892\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b40b2022-ddd8-4d91-a963-363efca61892\") pod \"rabbitmq-cell1-server-0\" (UID: \"f89c7d3f-93e9-464e-bf10-a2df33402031\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1c595192220ab734723ac28c88da4d61bccb78937c6216ae7dd707bdc8091fda/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.731007 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f89c7d3f-93e9-464e-bf10-a2df33402031-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f89c7d3f-93e9-464e-bf10-a2df33402031\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.732353 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f89c7d3f-93e9-464e-bf10-a2df33402031-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f89c7d3f-93e9-464e-bf10-a2df33402031\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.737037 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f89c7d3f-93e9-464e-bf10-a2df33402031-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f89c7d3f-93e9-464e-bf10-a2df33402031\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.738052 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f89c7d3f-93e9-464e-bf10-a2df33402031-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f89c7d3f-93e9-464e-bf10-a2df33402031\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.742251 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f89c7d3f-93e9-464e-bf10-a2df33402031-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f89c7d3f-93e9-464e-bf10-a2df33402031\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.743786 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz6sw\" (UniqueName: \"kubernetes.io/projected/f89c7d3f-93e9-464e-bf10-a2df33402031-kube-api-access-pz6sw\") pod \"rabbitmq-cell1-server-0\" (UID: \"f89c7d3f-93e9-464e-bf10-a2df33402031\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.800277 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b40b2022-ddd8-4d91-a963-363efca61892\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b40b2022-ddd8-4d91-a963-363efca61892\") pod \"rabbitmq-cell1-server-0\" (UID: \"f89c7d3f-93e9-464e-bf10-a2df33402031\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.904248 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-7qt96"] Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.905991 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dbb88bf8c-7qt96" Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.908335 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.930500 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-7qt96"] Nov 25 15:20:07 crc kubenswrapper[4806]: I1125 15:20:07.931164 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:20:08 crc kubenswrapper[4806]: I1125 15:20:08.034968 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn9rp\" (UniqueName: \"kubernetes.io/projected/ec61b792-1b30-485d-a10a-01f7de0074b0-kube-api-access-sn9rp\") pod \"dnsmasq-dns-dbb88bf8c-7qt96\" (UID: \"ec61b792-1b30-485d-a10a-01f7de0074b0\") " pod="openstack/dnsmasq-dns-dbb88bf8c-7qt96" Nov 25 15:20:08 crc kubenswrapper[4806]: I1125 15:20:08.035035 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec61b792-1b30-485d-a10a-01f7de0074b0-ovsdbserver-sb\") pod \"dnsmasq-dns-dbb88bf8c-7qt96\" (UID: \"ec61b792-1b30-485d-a10a-01f7de0074b0\") " pod="openstack/dnsmasq-dns-dbb88bf8c-7qt96" Nov 25 15:20:08 crc kubenswrapper[4806]: I1125 15:20:08.035055 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec61b792-1b30-485d-a10a-01f7de0074b0-dns-svc\") pod \"dnsmasq-dns-dbb88bf8c-7qt96\" (UID: \"ec61b792-1b30-485d-a10a-01f7de0074b0\") " pod="openstack/dnsmasq-dns-dbb88bf8c-7qt96" Nov 25 15:20:08 crc kubenswrapper[4806]: I1125 15:20:08.035099 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec61b792-1b30-485d-a10a-01f7de0074b0-ovsdbserver-nb\") pod \"dnsmasq-dns-dbb88bf8c-7qt96\" (UID: \"ec61b792-1b30-485d-a10a-01f7de0074b0\") " pod="openstack/dnsmasq-dns-dbb88bf8c-7qt96" Nov 25 15:20:08 crc kubenswrapper[4806]: I1125 15:20:08.035126 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ec61b792-1b30-485d-a10a-01f7de0074b0-openstack-edpm-ipam\") pod \"dnsmasq-dns-dbb88bf8c-7qt96\" (UID: \"ec61b792-1b30-485d-a10a-01f7de0074b0\") " pod="openstack/dnsmasq-dns-dbb88bf8c-7qt96" Nov 25 15:20:08 crc kubenswrapper[4806]: I1125 15:20:08.035155 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec61b792-1b30-485d-a10a-01f7de0074b0-dns-swift-storage-0\") pod \"dnsmasq-dns-dbb88bf8c-7qt96\" (UID: \"ec61b792-1b30-485d-a10a-01f7de0074b0\") " pod="openstack/dnsmasq-dns-dbb88bf8c-7qt96" Nov 25 15:20:08 crc kubenswrapper[4806]: I1125 15:20:08.035190 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec61b792-1b30-485d-a10a-01f7de0074b0-config\") pod \"dnsmasq-dns-dbb88bf8c-7qt96\" (UID: \"ec61b792-1b30-485d-a10a-01f7de0074b0\") " pod="openstack/dnsmasq-dns-dbb88bf8c-7qt96" Nov 25 15:20:08 crc kubenswrapper[4806]: I1125 15:20:08.112100 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="973c8ad5-1b21-4972-94ea-d0f4323db012" path="/var/lib/kubelet/pods/973c8ad5-1b21-4972-94ea-d0f4323db012/volumes" Nov 25 15:20:08 crc kubenswrapper[4806]: I1125 15:20:08.136658 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn9rp\" (UniqueName: \"kubernetes.io/projected/ec61b792-1b30-485d-a10a-01f7de0074b0-kube-api-access-sn9rp\") pod \"dnsmasq-dns-dbb88bf8c-7qt96\" (UID: \"ec61b792-1b30-485d-a10a-01f7de0074b0\") " pod="openstack/dnsmasq-dns-dbb88bf8c-7qt96" Nov 25 15:20:08 crc kubenswrapper[4806]: I1125 15:20:08.136739 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec61b792-1b30-485d-a10a-01f7de0074b0-ovsdbserver-sb\") pod \"dnsmasq-dns-dbb88bf8c-7qt96\" (UID: \"ec61b792-1b30-485d-a10a-01f7de0074b0\") " pod="openstack/dnsmasq-dns-dbb88bf8c-7qt96" Nov 25 15:20:08 crc kubenswrapper[4806]: I1125 15:20:08.136769 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec61b792-1b30-485d-a10a-01f7de0074b0-dns-svc\") pod \"dnsmasq-dns-dbb88bf8c-7qt96\" (UID: \"ec61b792-1b30-485d-a10a-01f7de0074b0\") " pod="openstack/dnsmasq-dns-dbb88bf8c-7qt96" Nov 25 15:20:08 crc kubenswrapper[4806]: I1125 15:20:08.136825 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec61b792-1b30-485d-a10a-01f7de0074b0-ovsdbserver-nb\") pod \"dnsmasq-dns-dbb88bf8c-7qt96\" (UID: \"ec61b792-1b30-485d-a10a-01f7de0074b0\") " pod="openstack/dnsmasq-dns-dbb88bf8c-7qt96" Nov 25 15:20:08 crc kubenswrapper[4806]: I1125 15:20:08.136858 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ec61b792-1b30-485d-a10a-01f7de0074b0-openstack-edpm-ipam\") pod \"dnsmasq-dns-dbb88bf8c-7qt96\" (UID: \"ec61b792-1b30-485d-a10a-01f7de0074b0\") " pod="openstack/dnsmasq-dns-dbb88bf8c-7qt96" Nov 25 15:20:08 crc kubenswrapper[4806]: I1125 15:20:08.136895 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec61b792-1b30-485d-a10a-01f7de0074b0-dns-swift-storage-0\") pod \"dnsmasq-dns-dbb88bf8c-7qt96\" (UID: \"ec61b792-1b30-485d-a10a-01f7de0074b0\") " pod="openstack/dnsmasq-dns-dbb88bf8c-7qt96" Nov 25 15:20:08 crc kubenswrapper[4806]: I1125 15:20:08.136938 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec61b792-1b30-485d-a10a-01f7de0074b0-config\") pod \"dnsmasq-dns-dbb88bf8c-7qt96\" (UID: \"ec61b792-1b30-485d-a10a-01f7de0074b0\") " pod="openstack/dnsmasq-dns-dbb88bf8c-7qt96" Nov 25 15:20:08 crc kubenswrapper[4806]: I1125 15:20:08.138047 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec61b792-1b30-485d-a10a-01f7de0074b0-config\") pod \"dnsmasq-dns-dbb88bf8c-7qt96\" (UID: \"ec61b792-1b30-485d-a10a-01f7de0074b0\") " pod="openstack/dnsmasq-dns-dbb88bf8c-7qt96" Nov 25 15:20:08 crc kubenswrapper[4806]: I1125 15:20:08.139202 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec61b792-1b30-485d-a10a-01f7de0074b0-ovsdbserver-sb\") pod \"dnsmasq-dns-dbb88bf8c-7qt96\" (UID: \"ec61b792-1b30-485d-a10a-01f7de0074b0\") " 
pod="openstack/dnsmasq-dns-dbb88bf8c-7qt96" Nov 25 15:20:08 crc kubenswrapper[4806]: I1125 15:20:08.139968 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec61b792-1b30-485d-a10a-01f7de0074b0-dns-swift-storage-0\") pod \"dnsmasq-dns-dbb88bf8c-7qt96\" (UID: \"ec61b792-1b30-485d-a10a-01f7de0074b0\") " pod="openstack/dnsmasq-dns-dbb88bf8c-7qt96" Nov 25 15:20:08 crc kubenswrapper[4806]: I1125 15:20:08.140120 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec61b792-1b30-485d-a10a-01f7de0074b0-ovsdbserver-nb\") pod \"dnsmasq-dns-dbb88bf8c-7qt96\" (UID: \"ec61b792-1b30-485d-a10a-01f7de0074b0\") " pod="openstack/dnsmasq-dns-dbb88bf8c-7qt96" Nov 25 15:20:08 crc kubenswrapper[4806]: I1125 15:20:08.140503 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ec61b792-1b30-485d-a10a-01f7de0074b0-openstack-edpm-ipam\") pod \"dnsmasq-dns-dbb88bf8c-7qt96\" (UID: \"ec61b792-1b30-485d-a10a-01f7de0074b0\") " pod="openstack/dnsmasq-dns-dbb88bf8c-7qt96" Nov 25 15:20:08 crc kubenswrapper[4806]: I1125 15:20:08.141067 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec61b792-1b30-485d-a10a-01f7de0074b0-dns-svc\") pod \"dnsmasq-dns-dbb88bf8c-7qt96\" (UID: \"ec61b792-1b30-485d-a10a-01f7de0074b0\") " pod="openstack/dnsmasq-dns-dbb88bf8c-7qt96" Nov 25 15:20:08 crc kubenswrapper[4806]: I1125 15:20:08.157682 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn9rp\" (UniqueName: \"kubernetes.io/projected/ec61b792-1b30-485d-a10a-01f7de0074b0-kube-api-access-sn9rp\") pod \"dnsmasq-dns-dbb88bf8c-7qt96\" (UID: \"ec61b792-1b30-485d-a10a-01f7de0074b0\") " pod="openstack/dnsmasq-dns-dbb88bf8c-7qt96" Nov 25 15:20:08 crc kubenswrapper[4806]: I1125 15:20:08.289905 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dbb88bf8c-7qt96" Nov 25 15:20:08 crc kubenswrapper[4806]: E1125 15:20:08.430678 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current-tested\\\"\"" pod="openstack/cloudkitty-db-sync-x7mzr" podUID="8c180594-82cd-4e18-932d-c5427040362c" Nov 25 15:20:11 crc kubenswrapper[4806]: I1125 15:20:11.036277 4806 scope.go:117] "RemoveContainer" containerID="75b09608f37c2be3772760339ed3e063996e9a92d36e7fb7ee974e5892679540" Nov 25 15:20:11 crc kubenswrapper[4806]: E1125 15:20:11.199527 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Nov 25 15:20:11 crc kubenswrapper[4806]: E1125 15:20:11.200493 4806 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Nov 25 15:20:11 crc kubenswrapper[4806]: E1125 15:20:11.200998 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n597h57dh675h67fh576h65fh9dh5cdh556h567h55bh95h5c5hbdh5b7h645h549h5c8h5b9h59ch8ch5d7hdh64chd5h8dhdfh5c5h67ch565hb9h54q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rwgkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 15:20:11 crc kubenswrapper[4806]: I1125 15:20:11.215587 4806 scope.go:117] "RemoveContainer" containerID="695e4a23d49efd364be9c42bd1fb0bb33b0cf8672424b953cac4023374d96669" Nov 25 15:20:11 crc kubenswrapper[4806]: I1125 15:20:11.277594 4806 scope.go:117] "RemoveContainer" containerID="007c3d7c4479c3e54daabc30a491b68f01e37829f6df5622da6a3a767e77053b" Nov 25 15:20:11 crc kubenswrapper[4806]: I1125 15:20:11.565876 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-7qt96"] Nov 25 15:20:11 crc kubenswrapper[4806]: I1125 15:20:11.673700 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 15:20:11 crc kubenswrapper[4806]: W1125 15:20:11.682040 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94eec7e9_06e0_4096_8b0e_89a012fb3495.slice/crio-96039b4906dfcfbaa38d06b24cf9d4f033285ba909f7b21707b8c91fe6e0224a WatchSource:0}: Error finding container 96039b4906dfcfbaa38d06b24cf9d4f033285ba909f7b21707b8c91fe6e0224a: Status 404 returned error can't find the container with id 96039b4906dfcfbaa38d06b24cf9d4f033285ba909f7b21707b8c91fe6e0224a Nov 25 15:20:11 crc kubenswrapper[4806]: I1125 15:20:11.682284 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 15:20:11 crc kubenswrapper[4806]: W1125 15:20:11.687487 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf89c7d3f_93e9_464e_bf10_a2df33402031.slice/crio-b069f5fa2654bf7c265d47e4396616a10a3113f1df3ca87788bf3531042bd8d8 WatchSource:0}: Error finding container b069f5fa2654bf7c265d47e4396616a10a3113f1df3ca87788bf3531042bd8d8: Status 404 returned error can't find the container with id b069f5fa2654bf7c265d47e4396616a10a3113f1df3ca87788bf3531042bd8d8 Nov 25 15:20:12 crc kubenswrapper[4806]: I1125 15:20:12.486905 4806 generic.go:334] "Generic (PLEG): container finished" podID="ec61b792-1b30-485d-a10a-01f7de0074b0" containerID="86315f5ab0fd0c17a4d48055753a5ae418cd958b7b93fba6e071243c253c0347" exitCode=0 Nov 25 15:20:12 crc kubenswrapper[4806]: I1125 15:20:12.486971 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dbb88bf8c-7qt96" event={"ID":"ec61b792-1b30-485d-a10a-01f7de0074b0","Type":"ContainerDied","Data":"86315f5ab0fd0c17a4d48055753a5ae418cd958b7b93fba6e071243c253c0347"} Nov 25 15:20:12 crc kubenswrapper[4806]: I1125 15:20:12.487202 4806 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dbb88bf8c-7qt96" event={"ID":"ec61b792-1b30-485d-a10a-01f7de0074b0","Type":"ContainerStarted","Data":"1deb2b375203ac1ad4145a9859f6109d49026b06a2576c7defd91ac1224fbd0f"} Nov 25 15:20:12 crc kubenswrapper[4806]: I1125 15:20:12.488792 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"94eec7e9-06e0-4096-8b0e-89a012fb3495","Type":"ContainerStarted","Data":"96039b4906dfcfbaa38d06b24cf9d4f033285ba909f7b21707b8c91fe6e0224a"} Nov 25 15:20:12 crc kubenswrapper[4806]: I1125 15:20:12.490477 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b","Type":"ContainerStarted","Data":"fdcce4fd6784a68d0ec77d49ea449e13ce3999f4214cdef791115cb0b6e42cb2"} Nov 25 15:20:12 crc kubenswrapper[4806]: I1125 15:20:12.492996 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f89c7d3f-93e9-464e-bf10-a2df33402031","Type":"ContainerStarted","Data":"b069f5fa2654bf7c265d47e4396616a10a3113f1df3ca87788bf3531042bd8d8"} Nov 25 15:20:13 crc kubenswrapper[4806]: I1125 15:20:13.504809 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dbb88bf8c-7qt96" event={"ID":"ec61b792-1b30-485d-a10a-01f7de0074b0","Type":"ContainerStarted","Data":"25e9279d6c5deec0fac35ce7696d94237465d9a13eacabe119e2eaa8cdd9efb8"} Nov 25 15:20:13 crc kubenswrapper[4806]: I1125 15:20:13.505301 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-dbb88bf8c-7qt96" Nov 25 15:20:13 crc kubenswrapper[4806]: I1125 15:20:13.509461 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b","Type":"ContainerStarted","Data":"f36ff6f49e67c44858ff26e83203a6a5b9397fee92ab54ce52c6c69c236cb1c9"} Nov 25 15:20:13 crc kubenswrapper[4806]: I1125 15:20:13.532095 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-dbb88bf8c-7qt96" podStartSLOduration=6.53206224 podStartE2EDuration="6.53206224s" podCreationTimestamp="2025-11-25 15:20:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:20:13.522805434 +0000 UTC m=+1646.174947875" watchObservedRunningTime="2025-11-25 15:20:13.53206224 +0000 UTC m=+1646.184204691" Nov 25 15:20:14 crc kubenswrapper[4806]: I1125 15:20:14.521345 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f89c7d3f-93e9-464e-bf10-a2df33402031","Type":"ContainerStarted","Data":"1f8119d4d549ce3fcdc8eb3603b7920f27ba62106dd5c2bbb05a7f36a495969f"} Nov 25 15:20:14 crc kubenswrapper[4806]: I1125 15:20:14.525283 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"94eec7e9-06e0-4096-8b0e-89a012fb3495","Type":"ContainerStarted","Data":"58196a05be8837d4e8fe399279e73c003b28ae5bc75bad136a4e6e715a29e046"} Nov 25 15:20:15 crc kubenswrapper[4806]: E1125 15:20:15.250251 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b" Nov 25 15:20:15 crc kubenswrapper[4806]: I1125 15:20:15.349382 4806 scope.go:117] 
"RemoveContainer" containerID="b71e9474472d6f2e5186906b1e3ed18ae3942a9cf4b1f91e59d25c9a9cc86e36" Nov 25 15:20:15 crc kubenswrapper[4806]: I1125 15:20:15.535631 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b","Type":"ContainerStarted","Data":"39a45cfdf03e430b272a0e9e46e95f29e32387d0c4c60e6f511524493f946a99"} Nov 25 15:20:15 crc kubenswrapper[4806]: E1125 15:20:15.537913 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b" Nov 25 15:20:16 crc kubenswrapper[4806]: I1125 15:20:16.547743 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 15:20:16 crc kubenswrapper[4806]: E1125 15:20:16.550670 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b" Nov 25 15:20:17 crc kubenswrapper[4806]: E1125 15:20:17.564056 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b" Nov 25 15:20:18 crc kubenswrapper[4806]: I1125 15:20:18.291544 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-dbb88bf8c-7qt96" Nov 25 15:20:18 crc kubenswrapper[4806]: I1125 15:20:18.371175 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-h9svs"] Nov 25 15:20:18 crc kubenswrapper[4806]: I1125 15:20:18.371502 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5fd9b586ff-h9svs" podUID="ded52426-67c6-4765-93c7-c193a74862ec" containerName="dnsmasq-dns" containerID="cri-o://07d2059aa35663669eea78948442e10ca03fa26719b80bd703de3fabdabed1d6" gracePeriod=10 Nov 25 15:20:18 crc kubenswrapper[4806]: I1125 15:20:18.559231 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85f64749dc-msc97"] Nov 25 15:20:18 crc kubenswrapper[4806]: I1125 15:20:18.562366 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85f64749dc-msc97" Nov 25 15:20:18 crc kubenswrapper[4806]: I1125 15:20:18.588793 4806 generic.go:334] "Generic (PLEG): container finished" podID="ded52426-67c6-4765-93c7-c193a74862ec" containerID="07d2059aa35663669eea78948442e10ca03fa26719b80bd703de3fabdabed1d6" exitCode=0 Nov 25 15:20:18 crc kubenswrapper[4806]: I1125 15:20:18.588844 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd9b586ff-h9svs" event={"ID":"ded52426-67c6-4765-93c7-c193a74862ec","Type":"ContainerDied","Data":"07d2059aa35663669eea78948442e10ca03fa26719b80bd703de3fabdabed1d6"} Nov 25 15:20:18 crc kubenswrapper[4806]: I1125 15:20:18.592788 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85f64749dc-msc97"] Nov 25 15:20:18 crc kubenswrapper[4806]: I1125 15:20:18.697629 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b20c2934-99f8-4a7e-aa11-2cb645cec451-config\") pod \"dnsmasq-dns-85f64749dc-msc97\" (UID: \"b20c2934-99f8-4a7e-aa11-2cb645cec451\") " pod="openstack/dnsmasq-dns-85f64749dc-msc97" Nov 25 15:20:18 crc kubenswrapper[4806]: I1125 15:20:18.697719 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b20c2934-99f8-4a7e-aa11-2cb645cec451-openstack-edpm-ipam\") pod \"dnsmasq-dns-85f64749dc-msc97\" (UID: \"b20c2934-99f8-4a7e-aa11-2cb645cec451\") " pod="openstack/dnsmasq-dns-85f64749dc-msc97" Nov 25 15:20:18 crc kubenswrapper[4806]: I1125 15:20:18.697788 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b20c2934-99f8-4a7e-aa11-2cb645cec451-ovsdbserver-nb\") pod \"dnsmasq-dns-85f64749dc-msc97\" (UID: \"b20c2934-99f8-4a7e-aa11-2cb645cec451\") " pod="openstack/dnsmasq-dns-85f64749dc-msc97" Nov 25 15:20:18 crc kubenswrapper[4806]: I1125 15:20:18.697813 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdkxd\" (UniqueName: \"kubernetes.io/projected/b20c2934-99f8-4a7e-aa11-2cb645cec451-kube-api-access-xdkxd\") pod \"dnsmasq-dns-85f64749dc-msc97\" (UID: \"b20c2934-99f8-4a7e-aa11-2cb645cec451\") " pod="openstack/dnsmasq-dns-85f64749dc-msc97" Nov 25 15:20:18 crc kubenswrapper[4806]: I1125 15:20:18.697831 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b20c2934-99f8-4a7e-aa11-2cb645cec451-dns-swift-storage-0\") pod \"dnsmasq-dns-85f64749dc-msc97\" (UID: \"b20c2934-99f8-4a7e-aa11-2cb645cec451\") " pod="openstack/dnsmasq-dns-85f64749dc-msc97" Nov 25 15:20:18 crc kubenswrapper[4806]: I1125 15:20:18.697855 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b20c2934-99f8-4a7e-aa11-2cb645cec451-dns-svc\") pod \"dnsmasq-dns-85f64749dc-msc97\" (UID: \"b20c2934-99f8-4a7e-aa11-2cb645cec451\") " pod="openstack/dnsmasq-dns-85f64749dc-msc97" Nov 25 15:20:18 crc kubenswrapper[4806]: I1125 15:20:18.697936 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b20c2934-99f8-4a7e-aa11-2cb645cec451-ovsdbserver-sb\") pod 
\"dnsmasq-dns-85f64749dc-msc97\" (UID: \"b20c2934-99f8-4a7e-aa11-2cb645cec451\") " pod="openstack/dnsmasq-dns-85f64749dc-msc97" Nov 25 15:20:18 crc kubenswrapper[4806]: I1125 15:20:18.799406 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b20c2934-99f8-4a7e-aa11-2cb645cec451-config\") pod \"dnsmasq-dns-85f64749dc-msc97\" (UID: \"b20c2934-99f8-4a7e-aa11-2cb645cec451\") " pod="openstack/dnsmasq-dns-85f64749dc-msc97" Nov 25 15:20:18 crc kubenswrapper[4806]: I1125 15:20:18.799488 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b20c2934-99f8-4a7e-aa11-2cb645cec451-openstack-edpm-ipam\") pod \"dnsmasq-dns-85f64749dc-msc97\" (UID: \"b20c2934-99f8-4a7e-aa11-2cb645cec451\") " pod="openstack/dnsmasq-dns-85f64749dc-msc97" Nov 25 15:20:18 crc kubenswrapper[4806]: I1125 15:20:18.799570 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b20c2934-99f8-4a7e-aa11-2cb645cec451-ovsdbserver-nb\") pod \"dnsmasq-dns-85f64749dc-msc97\" (UID: \"b20c2934-99f8-4a7e-aa11-2cb645cec451\") " pod="openstack/dnsmasq-dns-85f64749dc-msc97" Nov 25 15:20:18 crc kubenswrapper[4806]: I1125 15:20:18.799597 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdkxd\" (UniqueName: \"kubernetes.io/projected/b20c2934-99f8-4a7e-aa11-2cb645cec451-kube-api-access-xdkxd\") pod \"dnsmasq-dns-85f64749dc-msc97\" (UID: \"b20c2934-99f8-4a7e-aa11-2cb645cec451\") " pod="openstack/dnsmasq-dns-85f64749dc-msc97" Nov 25 15:20:18 crc kubenswrapper[4806]: I1125 15:20:18.799657 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b20c2934-99f8-4a7e-aa11-2cb645cec451-dns-swift-storage-0\") pod \"dnsmasq-dns-85f64749dc-msc97\" (UID: \"b20c2934-99f8-4a7e-aa11-2cb645cec451\") " pod="openstack/dnsmasq-dns-85f64749dc-msc97" Nov 25 15:20:18 crc kubenswrapper[4806]: I1125 15:20:18.799690 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b20c2934-99f8-4a7e-aa11-2cb645cec451-dns-svc\") pod \"dnsmasq-dns-85f64749dc-msc97\" (UID: \"b20c2934-99f8-4a7e-aa11-2cb645cec451\") " pod="openstack/dnsmasq-dns-85f64749dc-msc97" Nov 25 15:20:18 crc kubenswrapper[4806]: I1125 15:20:18.799777 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b20c2934-99f8-4a7e-aa11-2cb645cec451-ovsdbserver-sb\") pod \"dnsmasq-dns-85f64749dc-msc97\" (UID: \"b20c2934-99f8-4a7e-aa11-2cb645cec451\") " pod="openstack/dnsmasq-dns-85f64749dc-msc97" Nov 25 15:20:18 crc kubenswrapper[4806]: I1125 15:20:18.800699 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b20c2934-99f8-4a7e-aa11-2cb645cec451-ovsdbserver-sb\") pod \"dnsmasq-dns-85f64749dc-msc97\" (UID: \"b20c2934-99f8-4a7e-aa11-2cb645cec451\") " pod="openstack/dnsmasq-dns-85f64749dc-msc97" Nov 25 15:20:18 crc kubenswrapper[4806]: I1125 15:20:18.801204 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b20c2934-99f8-4a7e-aa11-2cb645cec451-config\") pod \"dnsmasq-dns-85f64749dc-msc97\" (UID: \"b20c2934-99f8-4a7e-aa11-2cb645cec451\") 
" pod="openstack/dnsmasq-dns-85f64749dc-msc97" Nov 25 15:20:18 crc kubenswrapper[4806]: I1125 15:20:18.801784 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b20c2934-99f8-4a7e-aa11-2cb645cec451-openstack-edpm-ipam\") pod \"dnsmasq-dns-85f64749dc-msc97\" (UID: \"b20c2934-99f8-4a7e-aa11-2cb645cec451\") " pod="openstack/dnsmasq-dns-85f64749dc-msc97" Nov 25 15:20:18 crc kubenswrapper[4806]: I1125 15:20:18.802304 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b20c2934-99f8-4a7e-aa11-2cb645cec451-ovsdbserver-nb\") pod \"dnsmasq-dns-85f64749dc-msc97\" (UID: \"b20c2934-99f8-4a7e-aa11-2cb645cec451\") " pod="openstack/dnsmasq-dns-85f64749dc-msc97" Nov 25 15:20:18 crc kubenswrapper[4806]: I1125 15:20:18.803117 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b20c2934-99f8-4a7e-aa11-2cb645cec451-dns-swift-storage-0\") pod \"dnsmasq-dns-85f64749dc-msc97\" (UID: \"b20c2934-99f8-4a7e-aa11-2cb645cec451\") " pod="openstack/dnsmasq-dns-85f64749dc-msc97" Nov 25 15:20:18 crc kubenswrapper[4806]: I1125 15:20:18.804208 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b20c2934-99f8-4a7e-aa11-2cb645cec451-dns-svc\") pod \"dnsmasq-dns-85f64749dc-msc97\" (UID: \"b20c2934-99f8-4a7e-aa11-2cb645cec451\") " pod="openstack/dnsmasq-dns-85f64749dc-msc97" Nov 25 15:20:18 crc kubenswrapper[4806]: I1125 15:20:18.832351 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdkxd\" (UniqueName: \"kubernetes.io/projected/b20c2934-99f8-4a7e-aa11-2cb645cec451-kube-api-access-xdkxd\") pod \"dnsmasq-dns-85f64749dc-msc97\" (UID: \"b20c2934-99f8-4a7e-aa11-2cb645cec451\") " pod="openstack/dnsmasq-dns-85f64749dc-msc97" Nov 25 15:20:18 crc kubenswrapper[4806]: I1125 15:20:18.885969 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85f64749dc-msc97" Nov 25 15:20:18 crc kubenswrapper[4806]: I1125 15:20:18.939518 4806 patch_prober.go:28] interesting pod/machine-config-daemon-kclf8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:20:18 crc kubenswrapper[4806]: I1125 15:20:18.939567 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:20:19 crc kubenswrapper[4806]: I1125 15:20:19.023960 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fd9b586ff-h9svs" Nov 25 15:20:19 crc kubenswrapper[4806]: I1125 15:20:19.109781 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ded52426-67c6-4765-93c7-c193a74862ec-ovsdbserver-sb\") pod \"ded52426-67c6-4765-93c7-c193a74862ec\" (UID: \"ded52426-67c6-4765-93c7-c193a74862ec\") " Nov 25 15:20:19 crc kubenswrapper[4806]: I1125 15:20:19.110202 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ded52426-67c6-4765-93c7-c193a74862ec-dns-svc\") pod \"ded52426-67c6-4765-93c7-c193a74862ec\" (UID: \"ded52426-67c6-4765-93c7-c193a74862ec\") " Nov 25 15:20:19 crc kubenswrapper[4806]: I1125 15:20:19.110410 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ded52426-67c6-4765-93c7-c193a74862ec-ovsdbserver-nb\") pod \"ded52426-67c6-4765-93c7-c193a74862ec\" (UID: \"ded52426-67c6-4765-93c7-c193a74862ec\") " Nov 25 15:20:19 crc kubenswrapper[4806]: I1125 15:20:19.110436 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ded52426-67c6-4765-93c7-c193a74862ec-dns-swift-storage-0\") pod \"ded52426-67c6-4765-93c7-c193a74862ec\" (UID: \"ded52426-67c6-4765-93c7-c193a74862ec\") " Nov 25 15:20:19 crc kubenswrapper[4806]: I1125 15:20:19.110506 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whmp2\" (UniqueName: \"kubernetes.io/projected/ded52426-67c6-4765-93c7-c193a74862ec-kube-api-access-whmp2\") pod \"ded52426-67c6-4765-93c7-c193a74862ec\" (UID: \"ded52426-67c6-4765-93c7-c193a74862ec\") " Nov 25 15:20:19 crc kubenswrapper[4806]: I1125 15:20:19.110546 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ded52426-67c6-4765-93c7-c193a74862ec-config\") pod \"ded52426-67c6-4765-93c7-c193a74862ec\" (UID: \"ded52426-67c6-4765-93c7-c193a74862ec\") " Nov 25 15:20:19 crc kubenswrapper[4806]: I1125 15:20:19.132259 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ded52426-67c6-4765-93c7-c193a74862ec-kube-api-access-whmp2" (OuterVolumeSpecName: "kube-api-access-whmp2") pod "ded52426-67c6-4765-93c7-c193a74862ec" (UID: "ded52426-67c6-4765-93c7-c193a74862ec"). InnerVolumeSpecName "kube-api-access-whmp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:20:19 crc kubenswrapper[4806]: I1125 15:20:19.184264 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ded52426-67c6-4765-93c7-c193a74862ec-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ded52426-67c6-4765-93c7-c193a74862ec" (UID: "ded52426-67c6-4765-93c7-c193a74862ec"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:20:19 crc kubenswrapper[4806]: I1125 15:20:19.184649 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ded52426-67c6-4765-93c7-c193a74862ec-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ded52426-67c6-4765-93c7-c193a74862ec" (UID: "ded52426-67c6-4765-93c7-c193a74862ec"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:20:19 crc kubenswrapper[4806]: I1125 15:20:19.194434 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ded52426-67c6-4765-93c7-c193a74862ec-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ded52426-67c6-4765-93c7-c193a74862ec" (UID: "ded52426-67c6-4765-93c7-c193a74862ec"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:20:19 crc kubenswrapper[4806]: I1125 15:20:19.205571 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ded52426-67c6-4765-93c7-c193a74862ec-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ded52426-67c6-4765-93c7-c193a74862ec" (UID: "ded52426-67c6-4765-93c7-c193a74862ec"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:20:19 crc kubenswrapper[4806]: I1125 15:20:19.219578 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ded52426-67c6-4765-93c7-c193a74862ec-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 15:20:19 crc kubenswrapper[4806]: I1125 15:20:19.219615 4806 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ded52426-67c6-4765-93c7-c193a74862ec-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:20:19 crc kubenswrapper[4806]: I1125 15:20:19.219629 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whmp2\" (UniqueName: \"kubernetes.io/projected/ded52426-67c6-4765-93c7-c193a74862ec-kube-api-access-whmp2\") on node \"crc\" DevicePath \"\"" Nov 25 15:20:19 crc kubenswrapper[4806]: I1125 15:20:19.219643 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ded52426-67c6-4765-93c7-c193a74862ec-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 15:20:19 crc kubenswrapper[4806]: I1125 15:20:19.219655 4806 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ded52426-67c6-4765-93c7-c193a74862ec-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 15:20:19 crc kubenswrapper[4806]: I1125 15:20:19.220832 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ded52426-67c6-4765-93c7-c193a74862ec-config" (OuterVolumeSpecName: "config") pod "ded52426-67c6-4765-93c7-c193a74862ec" (UID: "ded52426-67c6-4765-93c7-c193a74862ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:20:19 crc kubenswrapper[4806]: I1125 15:20:19.321544 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ded52426-67c6-4765-93c7-c193a74862ec-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:20:19 crc kubenswrapper[4806]: I1125 15:20:19.403649 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85f64749dc-msc97"] Nov 25 15:20:19 crc kubenswrapper[4806]: W1125 15:20:19.405039 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb20c2934_99f8_4a7e_aa11_2cb645cec451.slice/crio-8dc0eb535e7c07aecb7c35f907b1e760ea83299296cc61244899b06d4413fd4b WatchSource:0}: Error finding container 8dc0eb535e7c07aecb7c35f907b1e760ea83299296cc61244899b06d4413fd4b: Status 404 returned error can't find the container with id 8dc0eb535e7c07aecb7c35f907b1e760ea83299296cc61244899b06d4413fd4b Nov 25 15:20:19 crc kubenswrapper[4806]: I1125 15:20:19.601924 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f64749dc-msc97" event={"ID":"b20c2934-99f8-4a7e-aa11-2cb645cec451","Type":"ContainerStarted","Data":"8dc0eb535e7c07aecb7c35f907b1e760ea83299296cc61244899b06d4413fd4b"} Nov 25 15:20:19 crc kubenswrapper[4806]: I1125 15:20:19.605010 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd9b586ff-h9svs" event={"ID":"ded52426-67c6-4765-93c7-c193a74862ec","Type":"ContainerDied","Data":"3689838166d34c10a26164df99c902672cfdf93560c5457d53a7639ab0dc54d2"} Nov 25 15:20:19 crc kubenswrapper[4806]: I1125 15:20:19.605069 4806 scope.go:117] "RemoveContainer" containerID="07d2059aa35663669eea78948442e10ca03fa26719b80bd703de3fabdabed1d6" Nov 25 15:20:19 crc kubenswrapper[4806]: I1125 15:20:19.605090 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fd9b586ff-h9svs" Nov 25 15:20:19 crc kubenswrapper[4806]: I1125 15:20:19.650688 4806 scope.go:117] "RemoveContainer" containerID="16879783e7bd6ada607271c4d7827261f99811ee8b1ef9287ac60480176f870e" Nov 25 15:20:19 crc kubenswrapper[4806]: I1125 15:20:19.660002 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-h9svs"] Nov 25 15:20:19 crc kubenswrapper[4806]: I1125 15:20:19.675130 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-h9svs"] Nov 25 15:20:19 crc kubenswrapper[4806]: E1125 15:20:19.893833 4806 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb20c2934_99f8_4a7e_aa11_2cb645cec451.slice/crio-aedfacacb18f2ad3f3f8e49c93f38565df2df61b127ee5331040836355daa8bf.scope\": RecentStats: unable to find data in memory cache]" Nov 25 15:20:20 crc kubenswrapper[4806]: I1125 15:20:20.101516 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ded52426-67c6-4765-93c7-c193a74862ec" path="/var/lib/kubelet/pods/ded52426-67c6-4765-93c7-c193a74862ec/volumes" Nov 25 15:20:20 crc kubenswrapper[4806]: I1125 15:20:20.620471 4806 generic.go:334] "Generic (PLEG): container finished" podID="b20c2934-99f8-4a7e-aa11-2cb645cec451" containerID="aedfacacb18f2ad3f3f8e49c93f38565df2df61b127ee5331040836355daa8bf" exitCode=0 Nov 25 15:20:20 crc kubenswrapper[4806]: I1125 15:20:20.620514 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f64749dc-msc97" event={"ID":"b20c2934-99f8-4a7e-aa11-2cb645cec451","Type":"ContainerDied","Data":"aedfacacb18f2ad3f3f8e49c93f38565df2df61b127ee5331040836355daa8bf"} Nov 25 15:20:20 crc kubenswrapper[4806]: I1125 15:20:20.913227 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 25 15:20:21 crc kubenswrapper[4806]: I1125 15:20:21.632130 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f64749dc-msc97" event={"ID":"b20c2934-99f8-4a7e-aa11-2cb645cec451","Type":"ContainerStarted","Data":"27f4a8789685b6669004b08d9fd3413d2e327bbdf119e66eefdca1747293323a"} Nov 25 15:20:21 crc kubenswrapper[4806]: I1125 15:20:21.632626 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85f64749dc-msc97" Nov 25 15:20:21 crc kubenswrapper[4806]: I1125 15:20:21.633822 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-x7mzr" event={"ID":"8c180594-82cd-4e18-932d-c5427040362c","Type":"ContainerStarted","Data":"35326fe0bfbfc0029635f575b6261d37eae34b70d75a24cc28e3d756f8c7383c"} Nov 25 15:20:21 crc kubenswrapper[4806]: I1125 15:20:21.667620 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85f64749dc-msc97" podStartSLOduration=3.6676008319999998 podStartE2EDuration="3.667600832s" podCreationTimestamp="2025-11-25 15:20:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:20:21.655209257 +0000 UTC m=+1654.307351678" watchObservedRunningTime="2025-11-25 15:20:21.667600832 +0000 UTC m=+1654.319743243" Nov 25 15:20:21 crc kubenswrapper[4806]: I1125 15:20:21.686420 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-db-sync-x7mzr" podStartSLOduration=2.482245799 
podStartE2EDuration="42.686396362s" podCreationTimestamp="2025-11-25 15:19:39 +0000 UTC" firstStartedPulling="2025-11-25 15:19:40.706219859 +0000 UTC m=+1613.358362270" lastFinishedPulling="2025-11-25 15:20:20.910370422 +0000 UTC m=+1653.562512833" observedRunningTime="2025-11-25 15:20:21.68181124 +0000 UTC m=+1654.333953681" watchObservedRunningTime="2025-11-25 15:20:21.686396362 +0000 UTC m=+1654.338538783" Nov 25 15:20:23 crc kubenswrapper[4806]: I1125 15:20:23.660776 4806 generic.go:334] "Generic (PLEG): container finished" podID="8c180594-82cd-4e18-932d-c5427040362c" containerID="35326fe0bfbfc0029635f575b6261d37eae34b70d75a24cc28e3d756f8c7383c" exitCode=0 Nov 25 15:20:23 crc kubenswrapper[4806]: I1125 15:20:23.660862 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-x7mzr" event={"ID":"8c180594-82cd-4e18-932d-c5427040362c","Type":"ContainerDied","Data":"35326fe0bfbfc0029635f575b6261d37eae34b70d75a24cc28e3d756f8c7383c"} Nov 25 15:20:25 crc kubenswrapper[4806]: I1125 15:20:25.017808 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-x7mzr" Nov 25 15:20:25 crc kubenswrapper[4806]: I1125 15:20:25.155147 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w4dk\" (UniqueName: \"kubernetes.io/projected/8c180594-82cd-4e18-932d-c5427040362c-kube-api-access-2w4dk\") pod \"8c180594-82cd-4e18-932d-c5427040362c\" (UID: \"8c180594-82cd-4e18-932d-c5427040362c\") " Nov 25 15:20:25 crc kubenswrapper[4806]: I1125 15:20:25.155423 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c180594-82cd-4e18-932d-c5427040362c-combined-ca-bundle\") pod \"8c180594-82cd-4e18-932d-c5427040362c\" (UID: \"8c180594-82cd-4e18-932d-c5427040362c\") " Nov 25 15:20:25 crc kubenswrapper[4806]: I1125 15:20:25.155503 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c180594-82cd-4e18-932d-c5427040362c-config-data\") pod \"8c180594-82cd-4e18-932d-c5427040362c\" (UID: \"8c180594-82cd-4e18-932d-c5427040362c\") " Nov 25 15:20:25 crc kubenswrapper[4806]: I1125 15:20:25.155552 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c180594-82cd-4e18-932d-c5427040362c-scripts\") pod \"8c180594-82cd-4e18-932d-c5427040362c\" (UID: \"8c180594-82cd-4e18-932d-c5427040362c\") " Nov 25 15:20:25 crc kubenswrapper[4806]: I1125 15:20:25.155571 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/8c180594-82cd-4e18-932d-c5427040362c-certs\") pod \"8c180594-82cd-4e18-932d-c5427040362c\" (UID: \"8c180594-82cd-4e18-932d-c5427040362c\") " Nov 25 15:20:25 crc kubenswrapper[4806]: I1125 15:20:25.161968 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c180594-82cd-4e18-932d-c5427040362c-scripts" (OuterVolumeSpecName: "scripts") pod "8c180594-82cd-4e18-932d-c5427040362c" (UID: "8c180594-82cd-4e18-932d-c5427040362c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:20:25 crc kubenswrapper[4806]: I1125 15:20:25.163482 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c180594-82cd-4e18-932d-c5427040362c-kube-api-access-2w4dk" (OuterVolumeSpecName: "kube-api-access-2w4dk") pod "8c180594-82cd-4e18-932d-c5427040362c" (UID: "8c180594-82cd-4e18-932d-c5427040362c"). InnerVolumeSpecName "kube-api-access-2w4dk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:20:25 crc kubenswrapper[4806]: I1125 15:20:25.167630 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c180594-82cd-4e18-932d-c5427040362c-certs" (OuterVolumeSpecName: "certs") pod "8c180594-82cd-4e18-932d-c5427040362c" (UID: "8c180594-82cd-4e18-932d-c5427040362c"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:20:25 crc kubenswrapper[4806]: I1125 15:20:25.186808 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c180594-82cd-4e18-932d-c5427040362c-config-data" (OuterVolumeSpecName: "config-data") pod "8c180594-82cd-4e18-932d-c5427040362c" (UID: "8c180594-82cd-4e18-932d-c5427040362c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:20:25 crc kubenswrapper[4806]: I1125 15:20:25.188581 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c180594-82cd-4e18-932d-c5427040362c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c180594-82cd-4e18-932d-c5427040362c" (UID: "8c180594-82cd-4e18-932d-c5427040362c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:20:25 crc kubenswrapper[4806]: I1125 15:20:25.258355 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c180594-82cd-4e18-932d-c5427040362c-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:20:25 crc kubenswrapper[4806]: I1125 15:20:25.258404 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c180594-82cd-4e18-932d-c5427040362c-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:20:25 crc kubenswrapper[4806]: I1125 15:20:25.258420 4806 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/8c180594-82cd-4e18-932d-c5427040362c-certs\") on node \"crc\" DevicePath \"\"" Nov 25 15:20:25 crc kubenswrapper[4806]: I1125 15:20:25.258439 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w4dk\" (UniqueName: \"kubernetes.io/projected/8c180594-82cd-4e18-932d-c5427040362c-kube-api-access-2w4dk\") on node \"crc\" DevicePath \"\"" Nov 25 15:20:25 crc kubenswrapper[4806]: I1125 15:20:25.258460 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c180594-82cd-4e18-932d-c5427040362c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:20:25 crc kubenswrapper[4806]: I1125 15:20:25.682775 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-x7mzr" event={"ID":"8c180594-82cd-4e18-932d-c5427040362c","Type":"ContainerDied","Data":"d8e2c9766a212bb7e74919f9d39cb98b1c87a329a87f40286189bf43619ceefd"} Nov 25 15:20:25 crc kubenswrapper[4806]: I1125 15:20:25.682823 4806 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="d8e2c9766a212bb7e74919f9d39cb98b1c87a329a87f40286189bf43619ceefd" Nov 25 15:20:25 crc kubenswrapper[4806]: I1125 15:20:25.682835 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-x7mzr" Nov 25 15:20:26 crc kubenswrapper[4806]: I1125 15:20:26.102213 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-storageinit-khx7z"] Nov 25 15:20:26 crc kubenswrapper[4806]: I1125 15:20:26.112105 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-storageinit-khx7z"] Nov 25 15:20:26 crc kubenswrapper[4806]: I1125 15:20:26.202005 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-storageinit-l59xr"] Nov 25 15:20:26 crc kubenswrapper[4806]: E1125 15:20:26.202555 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c180594-82cd-4e18-932d-c5427040362c" containerName="cloudkitty-db-sync" Nov 25 15:20:26 crc kubenswrapper[4806]: I1125 15:20:26.202578 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c180594-82cd-4e18-932d-c5427040362c" containerName="cloudkitty-db-sync" Nov 25 15:20:26 crc kubenswrapper[4806]: E1125 15:20:26.202600 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ded52426-67c6-4765-93c7-c193a74862ec" containerName="init" Nov 25 15:20:26 crc kubenswrapper[4806]: I1125 15:20:26.202610 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="ded52426-67c6-4765-93c7-c193a74862ec" containerName="init" Nov 25 15:20:26 crc kubenswrapper[4806]: E1125 15:20:26.202629 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ded52426-67c6-4765-93c7-c193a74862ec" containerName="dnsmasq-dns" Nov 25 15:20:26 crc kubenswrapper[4806]: I1125 15:20:26.202638 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="ded52426-67c6-4765-93c7-c193a74862ec" containerName="dnsmasq-dns" Nov 25 15:20:26 crc kubenswrapper[4806]: I1125 15:20:26.202868 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="ded52426-67c6-4765-93c7-c193a74862ec" containerName="dnsmasq-dns" Nov 25 15:20:26 crc kubenswrapper[4806]: I1125 15:20:26.202894 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c180594-82cd-4e18-932d-c5427040362c" containerName="cloudkitty-db-sync" Nov 25 15:20:26 crc kubenswrapper[4806]: I1125 15:20:26.205169 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-storageinit-l59xr" Nov 25 15:20:26 crc kubenswrapper[4806]: I1125 15:20:26.207153 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 25 15:20:26 crc kubenswrapper[4806]: I1125 15:20:26.217907 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-l59xr"] Nov 25 15:20:26 crc kubenswrapper[4806]: I1125 15:20:26.279856 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa2b367f-df6a-4648-9ed2-e3d1d4a72493-scripts\") pod \"cloudkitty-storageinit-l59xr\" (UID: \"fa2b367f-df6a-4648-9ed2-e3d1d4a72493\") " pod="openstack/cloudkitty-storageinit-l59xr" Nov 25 15:20:26 crc kubenswrapper[4806]: I1125 15:20:26.280207 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa2b367f-df6a-4648-9ed2-e3d1d4a72493-config-data\") pod \"cloudkitty-storageinit-l59xr\" (UID: \"fa2b367f-df6a-4648-9ed2-e3d1d4a72493\") " pod="openstack/cloudkitty-storageinit-l59xr" Nov 25 15:20:26 crc kubenswrapper[4806]: I1125 15:20:26.280277 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/fa2b367f-df6a-4648-9ed2-e3d1d4a72493-certs\") pod \"cloudkitty-storageinit-l59xr\" (UID: \"fa2b367f-df6a-4648-9ed2-e3d1d4a72493\") " pod="openstack/cloudkitty-storageinit-l59xr" Nov 25 15:20:26 crc kubenswrapper[4806]: I1125 15:20:26.280394 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk84n\" (UniqueName: \"kubernetes.io/projected/fa2b367f-df6a-4648-9ed2-e3d1d4a72493-kube-api-access-pk84n\") pod \"cloudkitty-storageinit-l59xr\" (UID: \"fa2b367f-df6a-4648-9ed2-e3d1d4a72493\") " pod="openstack/cloudkitty-storageinit-l59xr" Nov 25 15:20:26 crc kubenswrapper[4806]: I1125 15:20:26.280421 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa2b367f-df6a-4648-9ed2-e3d1d4a72493-combined-ca-bundle\") pod \"cloudkitty-storageinit-l59xr\" (UID: \"fa2b367f-df6a-4648-9ed2-e3d1d4a72493\") " pod="openstack/cloudkitty-storageinit-l59xr" Nov 25 15:20:26 crc kubenswrapper[4806]: I1125 15:20:26.382869 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pk84n\" (UniqueName: \"kubernetes.io/projected/fa2b367f-df6a-4648-9ed2-e3d1d4a72493-kube-api-access-pk84n\") pod \"cloudkitty-storageinit-l59xr\" (UID: \"fa2b367f-df6a-4648-9ed2-e3d1d4a72493\") " pod="openstack/cloudkitty-storageinit-l59xr" Nov 25 15:20:26 crc kubenswrapper[4806]: I1125 15:20:26.383204 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa2b367f-df6a-4648-9ed2-e3d1d4a72493-combined-ca-bundle\") pod \"cloudkitty-storageinit-l59xr\" (UID: \"fa2b367f-df6a-4648-9ed2-e3d1d4a72493\") " pod="openstack/cloudkitty-storageinit-l59xr" Nov 25 15:20:26 crc kubenswrapper[4806]: I1125 15:20:26.383390 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa2b367f-df6a-4648-9ed2-e3d1d4a72493-scripts\") pod \"cloudkitty-storageinit-l59xr\" (UID: \"fa2b367f-df6a-4648-9ed2-e3d1d4a72493\") " 
pod="openstack/cloudkitty-storageinit-l59xr" Nov 25 15:20:26 crc kubenswrapper[4806]: I1125 15:20:26.383541 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa2b367f-df6a-4648-9ed2-e3d1d4a72493-config-data\") pod \"cloudkitty-storageinit-l59xr\" (UID: \"fa2b367f-df6a-4648-9ed2-e3d1d4a72493\") " pod="openstack/cloudkitty-storageinit-l59xr" Nov 25 15:20:26 crc kubenswrapper[4806]: I1125 15:20:26.383629 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/fa2b367f-df6a-4648-9ed2-e3d1d4a72493-certs\") pod \"cloudkitty-storageinit-l59xr\" (UID: \"fa2b367f-df6a-4648-9ed2-e3d1d4a72493\") " pod="openstack/cloudkitty-storageinit-l59xr" Nov 25 15:20:26 crc kubenswrapper[4806]: I1125 15:20:26.387147 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa2b367f-df6a-4648-9ed2-e3d1d4a72493-scripts\") pod \"cloudkitty-storageinit-l59xr\" (UID: \"fa2b367f-df6a-4648-9ed2-e3d1d4a72493\") " pod="openstack/cloudkitty-storageinit-l59xr" Nov 25 15:20:26 crc kubenswrapper[4806]: I1125 15:20:26.387750 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/fa2b367f-df6a-4648-9ed2-e3d1d4a72493-certs\") pod \"cloudkitty-storageinit-l59xr\" (UID: \"fa2b367f-df6a-4648-9ed2-e3d1d4a72493\") " pod="openstack/cloudkitty-storageinit-l59xr" Nov 25 15:20:26 crc kubenswrapper[4806]: I1125 15:20:26.389015 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa2b367f-df6a-4648-9ed2-e3d1d4a72493-combined-ca-bundle\") pod \"cloudkitty-storageinit-l59xr\" (UID: \"fa2b367f-df6a-4648-9ed2-e3d1d4a72493\") " pod="openstack/cloudkitty-storageinit-l59xr" Nov 25 15:20:26 crc kubenswrapper[4806]: I1125 15:20:26.393180 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa2b367f-df6a-4648-9ed2-e3d1d4a72493-config-data\") pod \"cloudkitty-storageinit-l59xr\" (UID: \"fa2b367f-df6a-4648-9ed2-e3d1d4a72493\") " pod="openstack/cloudkitty-storageinit-l59xr" Nov 25 15:20:26 crc kubenswrapper[4806]: I1125 15:20:26.402278 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pk84n\" (UniqueName: \"kubernetes.io/projected/fa2b367f-df6a-4648-9ed2-e3d1d4a72493-kube-api-access-pk84n\") pod \"cloudkitty-storageinit-l59xr\" (UID: \"fa2b367f-df6a-4648-9ed2-e3d1d4a72493\") " pod="openstack/cloudkitty-storageinit-l59xr" Nov 25 15:20:26 crc kubenswrapper[4806]: I1125 15:20:26.531436 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-l59xr" Nov 25 15:20:26 crc kubenswrapper[4806]: I1125 15:20:26.754485 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-47fxr"] Nov 25 15:20:26 crc kubenswrapper[4806]: I1125 15:20:26.758994 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-47fxr" Nov 25 15:20:26 crc kubenswrapper[4806]: I1125 15:20:26.775089 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-47fxr"] Nov 25 15:20:26 crc kubenswrapper[4806]: I1125 15:20:26.895470 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f6180f7-ceb3-4de3-a203-23f6d36cf75d-utilities\") pod \"community-operators-47fxr\" (UID: \"9f6180f7-ceb3-4de3-a203-23f6d36cf75d\") " pod="openshift-marketplace/community-operators-47fxr" Nov 25 15:20:26 crc kubenswrapper[4806]: I1125 15:20:26.895681 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f6180f7-ceb3-4de3-a203-23f6d36cf75d-catalog-content\") pod \"community-operators-47fxr\" (UID: \"9f6180f7-ceb3-4de3-a203-23f6d36cf75d\") " pod="openshift-marketplace/community-operators-47fxr" Nov 25 15:20:26 crc kubenswrapper[4806]: I1125 15:20:26.895913 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4swtk\" (UniqueName: \"kubernetes.io/projected/9f6180f7-ceb3-4de3-a203-23f6d36cf75d-kube-api-access-4swtk\") pod \"community-operators-47fxr\" (UID: \"9f6180f7-ceb3-4de3-a203-23f6d36cf75d\") " pod="openshift-marketplace/community-operators-47fxr" Nov 25 15:20:26 crc kubenswrapper[4806]: I1125 15:20:26.998869 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4swtk\" (UniqueName: \"kubernetes.io/projected/9f6180f7-ceb3-4de3-a203-23f6d36cf75d-kube-api-access-4swtk\") pod \"community-operators-47fxr\" (UID: \"9f6180f7-ceb3-4de3-a203-23f6d36cf75d\") " pod="openshift-marketplace/community-operators-47fxr" Nov 25 15:20:26 crc kubenswrapper[4806]: I1125 15:20:26.998966 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f6180f7-ceb3-4de3-a203-23f6d36cf75d-utilities\") pod \"community-operators-47fxr\" (UID: \"9f6180f7-ceb3-4de3-a203-23f6d36cf75d\") " pod="openshift-marketplace/community-operators-47fxr" Nov 25 15:20:26 crc kubenswrapper[4806]: I1125 15:20:26.999063 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f6180f7-ceb3-4de3-a203-23f6d36cf75d-catalog-content\") pod \"community-operators-47fxr\" (UID: \"9f6180f7-ceb3-4de3-a203-23f6d36cf75d\") " pod="openshift-marketplace/community-operators-47fxr" Nov 25 15:20:26 crc kubenswrapper[4806]: I1125 15:20:26.999593 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f6180f7-ceb3-4de3-a203-23f6d36cf75d-utilities\") pod \"community-operators-47fxr\" (UID: \"9f6180f7-ceb3-4de3-a203-23f6d36cf75d\") " pod="openshift-marketplace/community-operators-47fxr" Nov 25 15:20:26 crc kubenswrapper[4806]: I1125 15:20:26.999653 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f6180f7-ceb3-4de3-a203-23f6d36cf75d-catalog-content\") pod \"community-operators-47fxr\" (UID: \"9f6180f7-ceb3-4de3-a203-23f6d36cf75d\") " pod="openshift-marketplace/community-operators-47fxr" Nov 25 15:20:27 crc kubenswrapper[4806]: I1125 15:20:27.023420 4806 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4swtk\" (UniqueName: \"kubernetes.io/projected/9f6180f7-ceb3-4de3-a203-23f6d36cf75d-kube-api-access-4swtk\") pod \"community-operators-47fxr\" (UID: \"9f6180f7-ceb3-4de3-a203-23f6d36cf75d\") " pod="openshift-marketplace/community-operators-47fxr" Nov 25 15:20:27 crc kubenswrapper[4806]: I1125 15:20:27.036905 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-l59xr"] Nov 25 15:20:27 crc kubenswrapper[4806]: I1125 15:20:27.086885 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-47fxr" Nov 25 15:20:27 crc kubenswrapper[4806]: I1125 15:20:27.711171 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-l59xr" event={"ID":"fa2b367f-df6a-4648-9ed2-e3d1d4a72493","Type":"ContainerStarted","Data":"90ddb67e72a52d632d3f29a549cc1cf6282f72c02eb024fc3b25ca819a101978"} Nov 25 15:20:27 crc kubenswrapper[4806]: I1125 15:20:27.711732 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-l59xr" event={"ID":"fa2b367f-df6a-4648-9ed2-e3d1d4a72493","Type":"ContainerStarted","Data":"f1db79d898e8b425d423e4c5d450169bdcf8d3e75583a20640a5c3d5ff91c45f"} Nov 25 15:20:27 crc kubenswrapper[4806]: I1125 15:20:27.724853 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47fxr" event={"ID":"9f6180f7-ceb3-4de3-a203-23f6d36cf75d","Type":"ContainerStarted","Data":"3e3867f1c2f648a756ff47ca928d5a51b8e0d7603303151c5a6450993c4f8533"} Nov 25 15:20:27 crc kubenswrapper[4806]: I1125 15:20:27.731999 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-47fxr"] Nov 25 15:20:27 crc kubenswrapper[4806]: I1125 15:20:27.742274 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-storageinit-l59xr" podStartSLOduration=1.742232938 podStartE2EDuration="1.742232938s" podCreationTimestamp="2025-11-25 15:20:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:20:27.727619019 +0000 UTC m=+1660.379761440" watchObservedRunningTime="2025-11-25 15:20:27.742232938 +0000 UTC m=+1660.394375349" Nov 25 15:20:28 crc kubenswrapper[4806]: I1125 15:20:28.102824 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7aaf07d8-e5c5-4119-9d4a-df8d6c296541" path="/var/lib/kubelet/pods/7aaf07d8-e5c5-4119-9d4a-df8d6c296541/volumes" Nov 25 15:20:28 crc kubenswrapper[4806]: I1125 15:20:28.730047 4806 generic.go:334] "Generic (PLEG): container finished" podID="9f6180f7-ceb3-4de3-a203-23f6d36cf75d" containerID="a145f771e312bdaf6f03bd6813f870469f624805ed2704e8802dde7cb6c091e0" exitCode=0 Nov 25 15:20:28 crc kubenswrapper[4806]: I1125 15:20:28.730179 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47fxr" event={"ID":"9f6180f7-ceb3-4de3-a203-23f6d36cf75d","Type":"ContainerDied","Data":"a145f771e312bdaf6f03bd6813f870469f624805ed2704e8802dde7cb6c091e0"} Nov 25 15:20:28 crc kubenswrapper[4806]: I1125 15:20:28.887449 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85f64749dc-msc97" Nov 25 15:20:28 crc kubenswrapper[4806]: I1125 15:20:28.957152 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7vzjq"] Nov 25 15:20:28 
crc kubenswrapper[4806]: I1125 15:20:28.960123 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7vzjq" Nov 25 15:20:29 crc kubenswrapper[4806]: I1125 15:20:29.010568 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7vzjq"] Nov 25 15:20:29 crc kubenswrapper[4806]: I1125 15:20:29.026568 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-7qt96"] Nov 25 15:20:29 crc kubenswrapper[4806]: I1125 15:20:29.026875 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-dbb88bf8c-7qt96" podUID="ec61b792-1b30-485d-a10a-01f7de0074b0" containerName="dnsmasq-dns" containerID="cri-o://25e9279d6c5deec0fac35ce7696d94237465d9a13eacabe119e2eaa8cdd9efb8" gracePeriod=10 Nov 25 15:20:29 crc kubenswrapper[4806]: I1125 15:20:29.051545 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe294208-726c-429d-a144-74fd096f1a63-catalog-content\") pod \"certified-operators-7vzjq\" (UID: \"fe294208-726c-429d-a144-74fd096f1a63\") " pod="openshift-marketplace/certified-operators-7vzjq" Nov 25 15:20:29 crc kubenswrapper[4806]: I1125 15:20:29.051637 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe294208-726c-429d-a144-74fd096f1a63-utilities\") pod \"certified-operators-7vzjq\" (UID: \"fe294208-726c-429d-a144-74fd096f1a63\") " pod="openshift-marketplace/certified-operators-7vzjq" Nov 25 15:20:29 crc kubenswrapper[4806]: I1125 15:20:29.051763 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdw8g\" (UniqueName: \"kubernetes.io/projected/fe294208-726c-429d-a144-74fd096f1a63-kube-api-access-jdw8g\") pod \"certified-operators-7vzjq\" (UID: \"fe294208-726c-429d-a144-74fd096f1a63\") " pod="openshift-marketplace/certified-operators-7vzjq" Nov 25 15:20:29 crc kubenswrapper[4806]: I1125 15:20:29.155500 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdw8g\" (UniqueName: \"kubernetes.io/projected/fe294208-726c-429d-a144-74fd096f1a63-kube-api-access-jdw8g\") pod \"certified-operators-7vzjq\" (UID: \"fe294208-726c-429d-a144-74fd096f1a63\") " pod="openshift-marketplace/certified-operators-7vzjq" Nov 25 15:20:29 crc kubenswrapper[4806]: I1125 15:20:29.156013 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe294208-726c-429d-a144-74fd096f1a63-catalog-content\") pod \"certified-operators-7vzjq\" (UID: \"fe294208-726c-429d-a144-74fd096f1a63\") " pod="openshift-marketplace/certified-operators-7vzjq" Nov 25 15:20:29 crc kubenswrapper[4806]: I1125 15:20:29.156602 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe294208-726c-429d-a144-74fd096f1a63-utilities\") pod \"certified-operators-7vzjq\" (UID: \"fe294208-726c-429d-a144-74fd096f1a63\") " pod="openshift-marketplace/certified-operators-7vzjq" Nov 25 15:20:29 crc kubenswrapper[4806]: I1125 15:20:29.156657 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/fe294208-726c-429d-a144-74fd096f1a63-catalog-content\") pod \"certified-operators-7vzjq\" (UID: \"fe294208-726c-429d-a144-74fd096f1a63\") " pod="openshift-marketplace/certified-operators-7vzjq" Nov 25 15:20:29 crc kubenswrapper[4806]: I1125 15:20:29.157978 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe294208-726c-429d-a144-74fd096f1a63-utilities\") pod \"certified-operators-7vzjq\" (UID: \"fe294208-726c-429d-a144-74fd096f1a63\") " pod="openshift-marketplace/certified-operators-7vzjq" Nov 25 15:20:29 crc kubenswrapper[4806]: I1125 15:20:29.177451 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdw8g\" (UniqueName: \"kubernetes.io/projected/fe294208-726c-429d-a144-74fd096f1a63-kube-api-access-jdw8g\") pod \"certified-operators-7vzjq\" (UID: \"fe294208-726c-429d-a144-74fd096f1a63\") " pod="openshift-marketplace/certified-operators-7vzjq" Nov 25 15:20:29 crc kubenswrapper[4806]: I1125 15:20:29.293082 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7vzjq" Nov 25 15:20:29 crc kubenswrapper[4806]: W1125 15:20:29.777868 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe294208_726c_429d_a144_74fd096f1a63.slice/crio-b955fc3f90914e8896aa401932eceb5215d4034c70023b8df27b6430524b436e WatchSource:0}: Error finding container b955fc3f90914e8896aa401932eceb5215d4034c70023b8df27b6430524b436e: Status 404 returned error can't find the container with id b955fc3f90914e8896aa401932eceb5215d4034c70023b8df27b6430524b436e Nov 25 15:20:29 crc kubenswrapper[4806]: I1125 15:20:29.779049 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7vzjq"] Nov 25 15:20:30 crc kubenswrapper[4806]: I1125 15:20:30.750813 4806 generic.go:334] "Generic (PLEG): container finished" podID="fa2b367f-df6a-4648-9ed2-e3d1d4a72493" containerID="90ddb67e72a52d632d3f29a549cc1cf6282f72c02eb024fc3b25ca819a101978" exitCode=0 Nov 25 15:20:30 crc kubenswrapper[4806]: I1125 15:20:30.750892 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-l59xr" event={"ID":"fa2b367f-df6a-4648-9ed2-e3d1d4a72493","Type":"ContainerDied","Data":"90ddb67e72a52d632d3f29a549cc1cf6282f72c02eb024fc3b25ca819a101978"} Nov 25 15:20:30 crc kubenswrapper[4806]: I1125 15:20:30.753856 4806 generic.go:334] "Generic (PLEG): container finished" podID="ec61b792-1b30-485d-a10a-01f7de0074b0" containerID="25e9279d6c5deec0fac35ce7696d94237465d9a13eacabe119e2eaa8cdd9efb8" exitCode=0 Nov 25 15:20:30 crc kubenswrapper[4806]: I1125 15:20:30.753976 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dbb88bf8c-7qt96" event={"ID":"ec61b792-1b30-485d-a10a-01f7de0074b0","Type":"ContainerDied","Data":"25e9279d6c5deec0fac35ce7696d94237465d9a13eacabe119e2eaa8cdd9efb8"} Nov 25 15:20:30 crc kubenswrapper[4806]: I1125 15:20:30.756095 4806 generic.go:334] "Generic (PLEG): container finished" podID="fe294208-726c-429d-a144-74fd096f1a63" containerID="aebcd6405f04a3dd4553a0f3cf0bfef49cde024b4c1e6f459d255a20df5369d4" exitCode=0 Nov 25 15:20:30 crc kubenswrapper[4806]: I1125 15:20:30.756130 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7vzjq" 
event={"ID":"fe294208-726c-429d-a144-74fd096f1a63","Type":"ContainerDied","Data":"aebcd6405f04a3dd4553a0f3cf0bfef49cde024b4c1e6f459d255a20df5369d4"} Nov 25 15:20:30 crc kubenswrapper[4806]: I1125 15:20:30.756150 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7vzjq" event={"ID":"fe294208-726c-429d-a144-74fd096f1a63","Type":"ContainerStarted","Data":"b955fc3f90914e8896aa401932eceb5215d4034c70023b8df27b6430524b436e"} Nov 25 15:20:31 crc kubenswrapper[4806]: I1125 15:20:31.136522 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dbb88bf8c-7qt96" Nov 25 15:20:31 crc kubenswrapper[4806]: I1125 15:20:31.206306 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec61b792-1b30-485d-a10a-01f7de0074b0-dns-swift-storage-0\") pod \"ec61b792-1b30-485d-a10a-01f7de0074b0\" (UID: \"ec61b792-1b30-485d-a10a-01f7de0074b0\") " Nov 25 15:20:31 crc kubenswrapper[4806]: I1125 15:20:31.206419 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec61b792-1b30-485d-a10a-01f7de0074b0-ovsdbserver-sb\") pod \"ec61b792-1b30-485d-a10a-01f7de0074b0\" (UID: \"ec61b792-1b30-485d-a10a-01f7de0074b0\") " Nov 25 15:20:31 crc kubenswrapper[4806]: I1125 15:20:31.206496 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec61b792-1b30-485d-a10a-01f7de0074b0-ovsdbserver-nb\") pod \"ec61b792-1b30-485d-a10a-01f7de0074b0\" (UID: \"ec61b792-1b30-485d-a10a-01f7de0074b0\") " Nov 25 15:20:31 crc kubenswrapper[4806]: I1125 15:20:31.206580 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec61b792-1b30-485d-a10a-01f7de0074b0-config\") pod \"ec61b792-1b30-485d-a10a-01f7de0074b0\" (UID: \"ec61b792-1b30-485d-a10a-01f7de0074b0\") " Nov 25 15:20:31 crc kubenswrapper[4806]: I1125 15:20:31.206633 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec61b792-1b30-485d-a10a-01f7de0074b0-dns-svc\") pod \"ec61b792-1b30-485d-a10a-01f7de0074b0\" (UID: \"ec61b792-1b30-485d-a10a-01f7de0074b0\") " Nov 25 15:20:31 crc kubenswrapper[4806]: I1125 15:20:31.206700 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sn9rp\" (UniqueName: \"kubernetes.io/projected/ec61b792-1b30-485d-a10a-01f7de0074b0-kube-api-access-sn9rp\") pod \"ec61b792-1b30-485d-a10a-01f7de0074b0\" (UID: \"ec61b792-1b30-485d-a10a-01f7de0074b0\") " Nov 25 15:20:31 crc kubenswrapper[4806]: I1125 15:20:31.206834 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ec61b792-1b30-485d-a10a-01f7de0074b0-openstack-edpm-ipam\") pod \"ec61b792-1b30-485d-a10a-01f7de0074b0\" (UID: \"ec61b792-1b30-485d-a10a-01f7de0074b0\") " Nov 25 15:20:31 crc kubenswrapper[4806]: I1125 15:20:31.216373 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec61b792-1b30-485d-a10a-01f7de0074b0-kube-api-access-sn9rp" (OuterVolumeSpecName: "kube-api-access-sn9rp") pod "ec61b792-1b30-485d-a10a-01f7de0074b0" (UID: "ec61b792-1b30-485d-a10a-01f7de0074b0"). InnerVolumeSpecName "kube-api-access-sn9rp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:20:31 crc kubenswrapper[4806]: I1125 15:20:31.262104 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec61b792-1b30-485d-a10a-01f7de0074b0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ec61b792-1b30-485d-a10a-01f7de0074b0" (UID: "ec61b792-1b30-485d-a10a-01f7de0074b0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:20:31 crc kubenswrapper[4806]: I1125 15:20:31.263707 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec61b792-1b30-485d-a10a-01f7de0074b0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ec61b792-1b30-485d-a10a-01f7de0074b0" (UID: "ec61b792-1b30-485d-a10a-01f7de0074b0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:20:31 crc kubenswrapper[4806]: I1125 15:20:31.265703 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec61b792-1b30-485d-a10a-01f7de0074b0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ec61b792-1b30-485d-a10a-01f7de0074b0" (UID: "ec61b792-1b30-485d-a10a-01f7de0074b0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:20:31 crc kubenswrapper[4806]: I1125 15:20:31.269349 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec61b792-1b30-485d-a10a-01f7de0074b0-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "ec61b792-1b30-485d-a10a-01f7de0074b0" (UID: "ec61b792-1b30-485d-a10a-01f7de0074b0"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:20:31 crc kubenswrapper[4806]: I1125 15:20:31.274449 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec61b792-1b30-485d-a10a-01f7de0074b0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ec61b792-1b30-485d-a10a-01f7de0074b0" (UID: "ec61b792-1b30-485d-a10a-01f7de0074b0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:20:31 crc kubenswrapper[4806]: I1125 15:20:31.294028 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec61b792-1b30-485d-a10a-01f7de0074b0-config" (OuterVolumeSpecName: "config") pod "ec61b792-1b30-485d-a10a-01f7de0074b0" (UID: "ec61b792-1b30-485d-a10a-01f7de0074b0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:20:31 crc kubenswrapper[4806]: I1125 15:20:31.309694 4806 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/ec61b792-1b30-485d-a10a-01f7de0074b0-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 25 15:20:31 crc kubenswrapper[4806]: I1125 15:20:31.309743 4806 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec61b792-1b30-485d-a10a-01f7de0074b0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:20:31 crc kubenswrapper[4806]: I1125 15:20:31.309757 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec61b792-1b30-485d-a10a-01f7de0074b0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 15:20:31 crc kubenswrapper[4806]: I1125 15:20:31.309770 4806 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec61b792-1b30-485d-a10a-01f7de0074b0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 15:20:31 crc kubenswrapper[4806]: I1125 15:20:31.309979 4806 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec61b792-1b30-485d-a10a-01f7de0074b0-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:20:31 crc kubenswrapper[4806]: I1125 15:20:31.309991 4806 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec61b792-1b30-485d-a10a-01f7de0074b0-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 15:20:31 crc kubenswrapper[4806]: I1125 15:20:31.310000 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sn9rp\" (UniqueName: \"kubernetes.io/projected/ec61b792-1b30-485d-a10a-01f7de0074b0-kube-api-access-sn9rp\") on node \"crc\" DevicePath \"\"" Nov 25 15:20:31 crc kubenswrapper[4806]: I1125 15:20:31.768716 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dbb88bf8c-7qt96" event={"ID":"ec61b792-1b30-485d-a10a-01f7de0074b0","Type":"ContainerDied","Data":"1deb2b375203ac1ad4145a9859f6109d49026b06a2576c7defd91ac1224fbd0f"} Nov 25 15:20:31 crc kubenswrapper[4806]: I1125 15:20:31.768770 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dbb88bf8c-7qt96" Nov 25 15:20:31 crc kubenswrapper[4806]: I1125 15:20:31.769120 4806 scope.go:117] "RemoveContainer" containerID="25e9279d6c5deec0fac35ce7696d94237465d9a13eacabe119e2eaa8cdd9efb8" Nov 25 15:20:31 crc kubenswrapper[4806]: I1125 15:20:31.810535 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-7qt96"] Nov 25 15:20:31 crc kubenswrapper[4806]: I1125 15:20:31.821008 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-7qt96"] Nov 25 15:20:32 crc kubenswrapper[4806]: I1125 15:20:32.101593 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec61b792-1b30-485d-a10a-01f7de0074b0" path="/var/lib/kubelet/pods/ec61b792-1b30-485d-a10a-01f7de0074b0/volumes" Nov 25 15:20:32 crc kubenswrapper[4806]: I1125 15:20:32.502588 4806 scope.go:117] "RemoveContainer" containerID="86315f5ab0fd0c17a4d48055753a5ae418cd958b7b93fba6e071243c253c0347" Nov 25 15:20:32 crc kubenswrapper[4806]: I1125 15:20:32.697533 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-storageinit-l59xr" Nov 25 15:20:32 crc kubenswrapper[4806]: I1125 15:20:32.741800 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pk84n\" (UniqueName: \"kubernetes.io/projected/fa2b367f-df6a-4648-9ed2-e3d1d4a72493-kube-api-access-pk84n\") pod \"fa2b367f-df6a-4648-9ed2-e3d1d4a72493\" (UID: \"fa2b367f-df6a-4648-9ed2-e3d1d4a72493\") " Nov 25 15:20:32 crc kubenswrapper[4806]: I1125 15:20:32.742100 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa2b367f-df6a-4648-9ed2-e3d1d4a72493-scripts\") pod \"fa2b367f-df6a-4648-9ed2-e3d1d4a72493\" (UID: \"fa2b367f-df6a-4648-9ed2-e3d1d4a72493\") " Nov 25 15:20:32 crc kubenswrapper[4806]: I1125 15:20:32.742154 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/fa2b367f-df6a-4648-9ed2-e3d1d4a72493-certs\") pod \"fa2b367f-df6a-4648-9ed2-e3d1d4a72493\" (UID: \"fa2b367f-df6a-4648-9ed2-e3d1d4a72493\") " Nov 25 15:20:32 crc kubenswrapper[4806]: I1125 15:20:32.742172 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa2b367f-df6a-4648-9ed2-e3d1d4a72493-combined-ca-bundle\") pod \"fa2b367f-df6a-4648-9ed2-e3d1d4a72493\" (UID: \"fa2b367f-df6a-4648-9ed2-e3d1d4a72493\") " Nov 25 15:20:32 crc kubenswrapper[4806]: I1125 15:20:32.742193 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa2b367f-df6a-4648-9ed2-e3d1d4a72493-config-data\") pod \"fa2b367f-df6a-4648-9ed2-e3d1d4a72493\" (UID: \"fa2b367f-df6a-4648-9ed2-e3d1d4a72493\") " Nov 25 15:20:32 crc kubenswrapper[4806]: I1125 15:20:32.747667 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa2b367f-df6a-4648-9ed2-e3d1d4a72493-certs" (OuterVolumeSpecName: "certs") pod "fa2b367f-df6a-4648-9ed2-e3d1d4a72493" (UID: "fa2b367f-df6a-4648-9ed2-e3d1d4a72493"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:20:32 crc kubenswrapper[4806]: I1125 15:20:32.753562 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa2b367f-df6a-4648-9ed2-e3d1d4a72493-scripts" (OuterVolumeSpecName: "scripts") pod "fa2b367f-df6a-4648-9ed2-e3d1d4a72493" (UID: "fa2b367f-df6a-4648-9ed2-e3d1d4a72493"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:20:32 crc kubenswrapper[4806]: I1125 15:20:32.763640 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa2b367f-df6a-4648-9ed2-e3d1d4a72493-kube-api-access-pk84n" (OuterVolumeSpecName: "kube-api-access-pk84n") pod "fa2b367f-df6a-4648-9ed2-e3d1d4a72493" (UID: "fa2b367f-df6a-4648-9ed2-e3d1d4a72493"). InnerVolumeSpecName "kube-api-access-pk84n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:20:32 crc kubenswrapper[4806]: I1125 15:20:32.780006 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa2b367f-df6a-4648-9ed2-e3d1d4a72493-config-data" (OuterVolumeSpecName: "config-data") pod "fa2b367f-df6a-4648-9ed2-e3d1d4a72493" (UID: "fa2b367f-df6a-4648-9ed2-e3d1d4a72493"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:20:32 crc kubenswrapper[4806]: I1125 15:20:32.785340 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-l59xr" event={"ID":"fa2b367f-df6a-4648-9ed2-e3d1d4a72493","Type":"ContainerDied","Data":"f1db79d898e8b425d423e4c5d450169bdcf8d3e75583a20640a5c3d5ff91c45f"} Nov 25 15:20:32 crc kubenswrapper[4806]: I1125 15:20:32.785372 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-l59xr" Nov 25 15:20:32 crc kubenswrapper[4806]: I1125 15:20:32.785398 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1db79d898e8b425d423e4c5d450169bdcf8d3e75583a20640a5c3d5ff91c45f" Nov 25 15:20:32 crc kubenswrapper[4806]: I1125 15:20:32.793184 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa2b367f-df6a-4648-9ed2-e3d1d4a72493-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa2b367f-df6a-4648-9ed2-e3d1d4a72493" (UID: "fa2b367f-df6a-4648-9ed2-e3d1d4a72493"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:20:32 crc kubenswrapper[4806]: I1125 15:20:32.844192 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa2b367f-df6a-4648-9ed2-e3d1d4a72493-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:20:32 crc kubenswrapper[4806]: I1125 15:20:32.844518 4806 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/fa2b367f-df6a-4648-9ed2-e3d1d4a72493-certs\") on node \"crc\" DevicePath \"\"" Nov 25 15:20:32 crc kubenswrapper[4806]: I1125 15:20:32.844528 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa2b367f-df6a-4648-9ed2-e3d1d4a72493-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:20:32 crc kubenswrapper[4806]: I1125 15:20:32.844537 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa2b367f-df6a-4648-9ed2-e3d1d4a72493-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:20:32 crc kubenswrapper[4806]: I1125 15:20:32.844548 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pk84n\" (UniqueName: \"kubernetes.io/projected/fa2b367f-df6a-4648-9ed2-e3d1d4a72493-kube-api-access-pk84n\") on node \"crc\" DevicePath \"\"" Nov 25 15:20:32 crc kubenswrapper[4806]: I1125 15:20:32.897260 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"] Nov 25 15:20:32 crc kubenswrapper[4806]: I1125 15:20:32.897577 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-proc-0" podUID="7f3d1e2e-c63c-4c46-828b-189248646880" containerName="cloudkitty-proc" containerID="cri-o://0cf98467b5a1106cd4c7ee203f7c43333d3037059f39aab1efc736b31aadce30" gracePeriod=30 Nov 25 15:20:32 crc kubenswrapper[4806]: I1125 15:20:32.911757 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"] Nov 25 15:20:32 crc kubenswrapper[4806]: I1125 15:20:32.912015 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="9b9283d4-b401-4efa-b2f0-d14c8b44cf21" containerName="cloudkitty-api-log" containerID="cri-o://061432f969d196d6d3241f1e507b6c98530bec97bc8b0f2adbac7d7c3f6c3b2c" gracePeriod=30 Nov 25 
15:20:32 crc kubenswrapper[4806]: I1125 15:20:32.912104 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="9b9283d4-b401-4efa-b2f0-d14c8b44cf21" containerName="cloudkitty-api" containerID="cri-o://72bcd53b1541868263c430d835347a7719546fd3980d138a847d6545e0b454b2" gracePeriod=30 Nov 25 15:20:33 crc kubenswrapper[4806]: I1125 15:20:33.109975 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 25 15:20:33 crc kubenswrapper[4806]: I1125 15:20:33.807967 4806 generic.go:334] "Generic (PLEG): container finished" podID="9f6180f7-ceb3-4de3-a203-23f6d36cf75d" containerID="61b028df0e12751f727cd2f0a3d9532d59b7c4e21152e9192612c8428b641816" exitCode=0 Nov 25 15:20:33 crc kubenswrapper[4806]: I1125 15:20:33.808490 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47fxr" event={"ID":"9f6180f7-ceb3-4de3-a203-23f6d36cf75d","Type":"ContainerDied","Data":"61b028df0e12751f727cd2f0a3d9532d59b7c4e21152e9192612c8428b641816"} Nov 25 15:20:33 crc kubenswrapper[4806]: I1125 15:20:33.810832 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7vzjq" event={"ID":"fe294208-726c-429d-a144-74fd096f1a63","Type":"ContainerStarted","Data":"4b5dbda6d2cf3976511d3debe457f550fdffbe4767da155506011143b702c9c8"} Nov 25 15:20:33 crc kubenswrapper[4806]: I1125 15:20:33.829903 4806 generic.go:334] "Generic (PLEG): container finished" podID="9b9283d4-b401-4efa-b2f0-d14c8b44cf21" containerID="061432f969d196d6d3241f1e507b6c98530bec97bc8b0f2adbac7d7c3f6c3b2c" exitCode=143 Nov 25 15:20:33 crc kubenswrapper[4806]: I1125 15:20:33.829986 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"9b9283d4-b401-4efa-b2f0-d14c8b44cf21","Type":"ContainerDied","Data":"061432f969d196d6d3241f1e507b6c98530bec97bc8b0f2adbac7d7c3f6c3b2c"} Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.198109 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-api-0" podUID="9b9283d4-b401-4efa-b2f0-d14c8b44cf21" containerName="cloudkitty-api" probeResult="failure" output="Get \"https://10.217.0.193:8889/healthcheck\": dial tcp 10.217.0.193:8889: connect: connection refused" Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.644373 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-proc-0" Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.685901 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f3d1e2e-c63c-4c46-828b-189248646880-combined-ca-bundle\") pod \"7f3d1e2e-c63c-4c46-828b-189248646880\" (UID: \"7f3d1e2e-c63c-4c46-828b-189248646880\") " Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.685993 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/7f3d1e2e-c63c-4c46-828b-189248646880-certs\") pod \"7f3d1e2e-c63c-4c46-828b-189248646880\" (UID: \"7f3d1e2e-c63c-4c46-828b-189248646880\") " Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.686086 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f3d1e2e-c63c-4c46-828b-189248646880-config-data\") pod \"7f3d1e2e-c63c-4c46-828b-189248646880\" (UID: \"7f3d1e2e-c63c-4c46-828b-189248646880\") " Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.686246 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dwnr\" (UniqueName: \"kubernetes.io/projected/7f3d1e2e-c63c-4c46-828b-189248646880-kube-api-access-4dwnr\") pod \"7f3d1e2e-c63c-4c46-828b-189248646880\" (UID: \"7f3d1e2e-c63c-4c46-828b-189248646880\") " Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.686268 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7f3d1e2e-c63c-4c46-828b-189248646880-config-data-custom\") pod \"7f3d1e2e-c63c-4c46-828b-189248646880\" (UID: \"7f3d1e2e-c63c-4c46-828b-189248646880\") " Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.686301 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f3d1e2e-c63c-4c46-828b-189248646880-scripts\") pod \"7f3d1e2e-c63c-4c46-828b-189248646880\" (UID: \"7f3d1e2e-c63c-4c46-828b-189248646880\") " Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.692263 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f3d1e2e-c63c-4c46-828b-189248646880-certs" (OuterVolumeSpecName: "certs") pod "7f3d1e2e-c63c-4c46-828b-189248646880" (UID: "7f3d1e2e-c63c-4c46-828b-189248646880"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.694835 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f3d1e2e-c63c-4c46-828b-189248646880-scripts" (OuterVolumeSpecName: "scripts") pod "7f3d1e2e-c63c-4c46-828b-189248646880" (UID: "7f3d1e2e-c63c-4c46-828b-189248646880"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.695190 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f3d1e2e-c63c-4c46-828b-189248646880-kube-api-access-4dwnr" (OuterVolumeSpecName: "kube-api-access-4dwnr") pod "7f3d1e2e-c63c-4c46-828b-189248646880" (UID: "7f3d1e2e-c63c-4c46-828b-189248646880"). InnerVolumeSpecName "kube-api-access-4dwnr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.703654 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f3d1e2e-c63c-4c46-828b-189248646880-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7f3d1e2e-c63c-4c46-828b-189248646880" (UID: "7f3d1e2e-c63c-4c46-828b-189248646880"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.723108 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f3d1e2e-c63c-4c46-828b-189248646880-config-data" (OuterVolumeSpecName: "config-data") pod "7f3d1e2e-c63c-4c46-828b-189248646880" (UID: "7f3d1e2e-c63c-4c46-828b-189248646880"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.728750 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f3d1e2e-c63c-4c46-828b-189248646880-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f3d1e2e-c63c-4c46-828b-189248646880" (UID: "7f3d1e2e-c63c-4c46-828b-189248646880"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.792453 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dwnr\" (UniqueName: \"kubernetes.io/projected/7f3d1e2e-c63c-4c46-828b-189248646880-kube-api-access-4dwnr\") on node \"crc\" DevicePath \"\"" Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.792512 4806 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7f3d1e2e-c63c-4c46-828b-189248646880-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.792526 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f3d1e2e-c63c-4c46-828b-189248646880-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.792537 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f3d1e2e-c63c-4c46-828b-189248646880-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.792550 4806 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/7f3d1e2e-c63c-4c46-828b-189248646880-certs\") on node \"crc\" DevicePath \"\"" Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.792562 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f3d1e2e-c63c-4c46-828b-189248646880-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.816647 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-api-0" Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.843432 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b","Type":"ContainerStarted","Data":"bfef7ef26a4e30899870c0ce7d6cd290bd0095c01191a7d3477696cadf1c6e3f"} Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.846174 4806 generic.go:334] "Generic (PLEG): container finished" podID="fe294208-726c-429d-a144-74fd096f1a63" containerID="4b5dbda6d2cf3976511d3debe457f550fdffbe4767da155506011143b702c9c8" exitCode=0 Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.846237 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7vzjq" event={"ID":"fe294208-726c-429d-a144-74fd096f1a63","Type":"ContainerDied","Data":"4b5dbda6d2cf3976511d3debe457f550fdffbe4767da155506011143b702c9c8"} Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.851182 4806 generic.go:334] "Generic (PLEG): container finished" podID="9b9283d4-b401-4efa-b2f0-d14c8b44cf21" containerID="72bcd53b1541868263c430d835347a7719546fd3980d138a847d6545e0b454b2" exitCode=0 Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.851232 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"9b9283d4-b401-4efa-b2f0-d14c8b44cf21","Type":"ContainerDied","Data":"72bcd53b1541868263c430d835347a7719546fd3980d138a847d6545e0b454b2"} Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.851265 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.851280 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"9b9283d4-b401-4efa-b2f0-d14c8b44cf21","Type":"ContainerDied","Data":"38da8d2f66db7400a4866ae9d4134ebc75fcfc3338975c36fef21af4d6b2ebbe"} Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.851302 4806 scope.go:117] "RemoveContainer" containerID="72bcd53b1541868263c430d835347a7719546fd3980d138a847d6545e0b454b2" Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.854415 4806 generic.go:334] "Generic (PLEG): container finished" podID="7f3d1e2e-c63c-4c46-828b-189248646880" containerID="0cf98467b5a1106cd4c7ee203f7c43333d3037059f39aab1efc736b31aadce30" exitCode=0 Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.854959 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"7f3d1e2e-c63c-4c46-828b-189248646880","Type":"ContainerDied","Data":"0cf98467b5a1106cd4c7ee203f7c43333d3037059f39aab1efc736b31aadce30"} Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.855013 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"7f3d1e2e-c63c-4c46-828b-189248646880","Type":"ContainerDied","Data":"955a95f955b77f47e437680d1fed8efd98528d13f3c2d84ed08334017b2c8620"} Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.855074 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-proc-0" Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.872376 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47fxr" event={"ID":"9f6180f7-ceb3-4de3-a203-23f6d36cf75d","Type":"ContainerStarted","Data":"588f38cedadf18ad860e4f37eb203f872fe48423f61f8af030c3826a3778b127"} Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.893231 4806 scope.go:117] "RemoveContainer" containerID="061432f969d196d6d3241f1e507b6c98530bec97bc8b0f2adbac7d7c3f6c3b2c" Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.894301 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-certs\") pod \"9b9283d4-b401-4efa-b2f0-d14c8b44cf21\" (UID: \"9b9283d4-b401-4efa-b2f0-d14c8b44cf21\") " Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.894412 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qj8gn\" (UniqueName: \"kubernetes.io/projected/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-kube-api-access-qj8gn\") pod \"9b9283d4-b401-4efa-b2f0-d14c8b44cf21\" (UID: \"9b9283d4-b401-4efa-b2f0-d14c8b44cf21\") " Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.894657 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-public-tls-certs\") pod \"9b9283d4-b401-4efa-b2f0-d14c8b44cf21\" (UID: \"9b9283d4-b401-4efa-b2f0-d14c8b44cf21\") " Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.894696 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-config-data\") pod \"9b9283d4-b401-4efa-b2f0-d14c8b44cf21\" (UID: \"9b9283d4-b401-4efa-b2f0-d14c8b44cf21\") " Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.894724 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-internal-tls-certs\") pod \"9b9283d4-b401-4efa-b2f0-d14c8b44cf21\" (UID: \"9b9283d4-b401-4efa-b2f0-d14c8b44cf21\") " Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.894774 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-scripts\") pod \"9b9283d4-b401-4efa-b2f0-d14c8b44cf21\" (UID: \"9b9283d4-b401-4efa-b2f0-d14c8b44cf21\") " Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.894799 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-logs\") pod \"9b9283d4-b401-4efa-b2f0-d14c8b44cf21\" (UID: \"9b9283d4-b401-4efa-b2f0-d14c8b44cf21\") " Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.894840 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-combined-ca-bundle\") pod \"9b9283d4-b401-4efa-b2f0-d14c8b44cf21\" (UID: \"9b9283d4-b401-4efa-b2f0-d14c8b44cf21\") " Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.894867 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-config-data-custom\") pod \"9b9283d4-b401-4efa-b2f0-d14c8b44cf21\" (UID: \"9b9283d4-b401-4efa-b2f0-d14c8b44cf21\") " Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.903046 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-certs" (OuterVolumeSpecName: "certs") pod "9b9283d4-b401-4efa-b2f0-d14c8b44cf21" (UID: "9b9283d4-b401-4efa-b2f0-d14c8b44cf21"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.907286 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9b9283d4-b401-4efa-b2f0-d14c8b44cf21" (UID: "9b9283d4-b401-4efa-b2f0-d14c8b44cf21"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.907565 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.636275033 podStartE2EDuration="49.907543518s" podCreationTimestamp="2025-11-25 15:19:45 +0000 UTC" firstStartedPulling="2025-11-25 15:19:46.95570545 +0000 UTC m=+1619.607847861" lastFinishedPulling="2025-11-25 15:20:34.226973935 +0000 UTC m=+1666.879116346" observedRunningTime="2025-11-25 15:20:34.882422577 +0000 UTC m=+1667.534564988" watchObservedRunningTime="2025-11-25 15:20:34.907543518 +0000 UTC m=+1667.559685949" Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.909965 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-logs" (OuterVolumeSpecName: "logs") pod "9b9283d4-b401-4efa-b2f0-d14c8b44cf21" (UID: "9b9283d4-b401-4efa-b2f0-d14c8b44cf21"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.918580 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-scripts" (OuterVolumeSpecName: "scripts") pod "9b9283d4-b401-4efa-b2f0-d14c8b44cf21" (UID: "9b9283d4-b401-4efa-b2f0-d14c8b44cf21"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.933733 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-kube-api-access-qj8gn" (OuterVolumeSpecName: "kube-api-access-qj8gn") pod "9b9283d4-b401-4efa-b2f0-d14c8b44cf21" (UID: "9b9283d4-b401-4efa-b2f0-d14c8b44cf21"). InnerVolumeSpecName "kube-api-access-qj8gn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.965914 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-47fxr" podStartSLOduration=3.331569469 podStartE2EDuration="8.965887491s" podCreationTimestamp="2025-11-25 15:20:26 +0000 UTC" firstStartedPulling="2025-11-25 15:20:28.731928098 +0000 UTC m=+1661.384070509" lastFinishedPulling="2025-11-25 15:20:34.36624611 +0000 UTC m=+1667.018388531" observedRunningTime="2025-11-25 15:20:34.915016442 +0000 UTC m=+1667.567158853" watchObservedRunningTime="2025-11-25 15:20:34.965887491 +0000 UTC m=+1667.618029902" Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.983694 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b9283d4-b401-4efa-b2f0-d14c8b44cf21" (UID: "9b9283d4-b401-4efa-b2f0-d14c8b44cf21"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.995301 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"] Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.998604 4806 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-certs\") on node \"crc\" DevicePath \"\"" Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.998665 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qj8gn\" (UniqueName: \"kubernetes.io/projected/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-kube-api-access-qj8gn\") on node \"crc\" DevicePath \"\"" Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.998681 4806 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.998692 4806 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-logs\") on node \"crc\" DevicePath \"\"" Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.998707 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:20:34 crc kubenswrapper[4806]: I1125 15:20:34.998719 4806 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.005942 4806 scope.go:117] "RemoveContainer" containerID="72bcd53b1541868263c430d835347a7719546fd3980d138a847d6545e0b454b2" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.006931 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-proc-0"] Nov 25 15:20:35 crc kubenswrapper[4806]: E1125 15:20:35.007145 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72bcd53b1541868263c430d835347a7719546fd3980d138a847d6545e0b454b2\": container with ID starting with 
72bcd53b1541868263c430d835347a7719546fd3980d138a847d6545e0b454b2 not found: ID does not exist" containerID="72bcd53b1541868263c430d835347a7719546fd3980d138a847d6545e0b454b2" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.007184 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72bcd53b1541868263c430d835347a7719546fd3980d138a847d6545e0b454b2"} err="failed to get container status \"72bcd53b1541868263c430d835347a7719546fd3980d138a847d6545e0b454b2\": rpc error: code = NotFound desc = could not find container \"72bcd53b1541868263c430d835347a7719546fd3980d138a847d6545e0b454b2\": container with ID starting with 72bcd53b1541868263c430d835347a7719546fd3980d138a847d6545e0b454b2 not found: ID does not exist" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.007207 4806 scope.go:117] "RemoveContainer" containerID="061432f969d196d6d3241f1e507b6c98530bec97bc8b0f2adbac7d7c3f6c3b2c" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.007142 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-config-data" (OuterVolumeSpecName: "config-data") pod "9b9283d4-b401-4efa-b2f0-d14c8b44cf21" (UID: "9b9283d4-b401-4efa-b2f0-d14c8b44cf21"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:20:35 crc kubenswrapper[4806]: E1125 15:20:35.007813 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"061432f969d196d6d3241f1e507b6c98530bec97bc8b0f2adbac7d7c3f6c3b2c\": container with ID starting with 061432f969d196d6d3241f1e507b6c98530bec97bc8b0f2adbac7d7c3f6c3b2c not found: ID does not exist" containerID="061432f969d196d6d3241f1e507b6c98530bec97bc8b0f2adbac7d7c3f6c3b2c" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.007838 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"061432f969d196d6d3241f1e507b6c98530bec97bc8b0f2adbac7d7c3f6c3b2c"} err="failed to get container status \"061432f969d196d6d3241f1e507b6c98530bec97bc8b0f2adbac7d7c3f6c3b2c\": rpc error: code = NotFound desc = could not find container \"061432f969d196d6d3241f1e507b6c98530bec97bc8b0f2adbac7d7c3f6c3b2c\": container with ID starting with 061432f969d196d6d3241f1e507b6c98530bec97bc8b0f2adbac7d7c3f6c3b2c not found: ID does not exist" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.007852 4806 scope.go:117] "RemoveContainer" containerID="0cf98467b5a1106cd4c7ee203f7c43333d3037059f39aab1efc736b31aadce30" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.031025 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-proc-0"] Nov 25 15:20:35 crc kubenswrapper[4806]: E1125 15:20:35.033021 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f3d1e2e-c63c-4c46-828b-189248646880" containerName="cloudkitty-proc" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.033048 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f3d1e2e-c63c-4c46-828b-189248646880" containerName="cloudkitty-proc" Nov 25 15:20:35 crc kubenswrapper[4806]: E1125 15:20:35.033064 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec61b792-1b30-485d-a10a-01f7de0074b0" containerName="init" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.033072 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec61b792-1b30-485d-a10a-01f7de0074b0" containerName="init" Nov 25 15:20:35 crc 
kubenswrapper[4806]: E1125 15:20:35.033085 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b9283d4-b401-4efa-b2f0-d14c8b44cf21" containerName="cloudkitty-api-log" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.033093 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b9283d4-b401-4efa-b2f0-d14c8b44cf21" containerName="cloudkitty-api-log" Nov 25 15:20:35 crc kubenswrapper[4806]: E1125 15:20:35.033111 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa2b367f-df6a-4648-9ed2-e3d1d4a72493" containerName="cloudkitty-storageinit" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.033117 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa2b367f-df6a-4648-9ed2-e3d1d4a72493" containerName="cloudkitty-storageinit" Nov 25 15:20:35 crc kubenswrapper[4806]: E1125 15:20:35.033131 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b9283d4-b401-4efa-b2f0-d14c8b44cf21" containerName="cloudkitty-api" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.033138 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b9283d4-b401-4efa-b2f0-d14c8b44cf21" containerName="cloudkitty-api" Nov 25 15:20:35 crc kubenswrapper[4806]: E1125 15:20:35.033166 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec61b792-1b30-485d-a10a-01f7de0074b0" containerName="dnsmasq-dns" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.033174 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec61b792-1b30-485d-a10a-01f7de0074b0" containerName="dnsmasq-dns" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.033474 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b9283d4-b401-4efa-b2f0-d14c8b44cf21" containerName="cloudkitty-api-log" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.033496 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f3d1e2e-c63c-4c46-828b-189248646880" containerName="cloudkitty-proc" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.033510 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa2b367f-df6a-4648-9ed2-e3d1d4a72493" containerName="cloudkitty-storageinit" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.033530 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b9283d4-b401-4efa-b2f0-d14c8b44cf21" containerName="cloudkitty-api" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.033545 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec61b792-1b30-485d-a10a-01f7de0074b0" containerName="dnsmasq-dns" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.037612 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9b9283d4-b401-4efa-b2f0-d14c8b44cf21" (UID: "9b9283d4-b401-4efa-b2f0-d14c8b44cf21"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.046568 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.046698 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-proc-0" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.050877 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-proc-config-data" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.072693 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9b9283d4-b401-4efa-b2f0-d14c8b44cf21" (UID: "9b9283d4-b401-4efa-b2f0-d14c8b44cf21"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.081708 4806 scope.go:117] "RemoveContainer" containerID="0cf98467b5a1106cd4c7ee203f7c43333d3037059f39aab1efc736b31aadce30" Nov 25 15:20:35 crc kubenswrapper[4806]: E1125 15:20:35.082738 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cf98467b5a1106cd4c7ee203f7c43333d3037059f39aab1efc736b31aadce30\": container with ID starting with 0cf98467b5a1106cd4c7ee203f7c43333d3037059f39aab1efc736b31aadce30 not found: ID does not exist" containerID="0cf98467b5a1106cd4c7ee203f7c43333d3037059f39aab1efc736b31aadce30" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.082790 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cf98467b5a1106cd4c7ee203f7c43333d3037059f39aab1efc736b31aadce30"} err="failed to get container status \"0cf98467b5a1106cd4c7ee203f7c43333d3037059f39aab1efc736b31aadce30\": rpc error: code = NotFound desc = could not find container \"0cf98467b5a1106cd4c7ee203f7c43333d3037059f39aab1efc736b31aadce30\": container with ID starting with 0cf98467b5a1106cd4c7ee203f7c43333d3037059f39aab1efc736b31aadce30 not found: ID does not exist" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.100722 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/69ec7b50-f06b-4a12-8c24-8781116d0604-certs\") pod \"cloudkitty-proc-0\" (UID: \"69ec7b50-f06b-4a12-8c24-8781116d0604\") " pod="openstack/cloudkitty-proc-0" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.100766 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69ec7b50-f06b-4a12-8c24-8781116d0604-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"69ec7b50-f06b-4a12-8c24-8781116d0604\") " pod="openstack/cloudkitty-proc-0" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.100834 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69ec7b50-f06b-4a12-8c24-8781116d0604-scripts\") pod \"cloudkitty-proc-0\" (UID: \"69ec7b50-f06b-4a12-8c24-8781116d0604\") " pod="openstack/cloudkitty-proc-0" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.100893 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfnqr\" (UniqueName: \"kubernetes.io/projected/69ec7b50-f06b-4a12-8c24-8781116d0604-kube-api-access-kfnqr\") pod \"cloudkitty-proc-0\" (UID: \"69ec7b50-f06b-4a12-8c24-8781116d0604\") " pod="openstack/cloudkitty-proc-0" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.101133 4806 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69ec7b50-f06b-4a12-8c24-8781116d0604-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"69ec7b50-f06b-4a12-8c24-8781116d0604\") " pod="openstack/cloudkitty-proc-0" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.101406 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69ec7b50-f06b-4a12-8c24-8781116d0604-config-data\") pod \"cloudkitty-proc-0\" (UID: \"69ec7b50-f06b-4a12-8c24-8781116d0604\") " pod="openstack/cloudkitty-proc-0" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.101553 4806 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.101564 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.101573 4806 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b9283d4-b401-4efa-b2f0-d14c8b44cf21-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.195888 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"] Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.203334 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69ec7b50-f06b-4a12-8c24-8781116d0604-config-data\") pod \"cloudkitty-proc-0\" (UID: \"69ec7b50-f06b-4a12-8c24-8781116d0604\") " pod="openstack/cloudkitty-proc-0" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.203436 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/69ec7b50-f06b-4a12-8c24-8781116d0604-certs\") pod \"cloudkitty-proc-0\" (UID: \"69ec7b50-f06b-4a12-8c24-8781116d0604\") " pod="openstack/cloudkitty-proc-0" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.203460 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69ec7b50-f06b-4a12-8c24-8781116d0604-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"69ec7b50-f06b-4a12-8c24-8781116d0604\") " pod="openstack/cloudkitty-proc-0" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.203518 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69ec7b50-f06b-4a12-8c24-8781116d0604-scripts\") pod \"cloudkitty-proc-0\" (UID: \"69ec7b50-f06b-4a12-8c24-8781116d0604\") " pod="openstack/cloudkitty-proc-0" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.203586 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfnqr\" (UniqueName: \"kubernetes.io/projected/69ec7b50-f06b-4a12-8c24-8781116d0604-kube-api-access-kfnqr\") pod \"cloudkitty-proc-0\" (UID: \"69ec7b50-f06b-4a12-8c24-8781116d0604\") " pod="openstack/cloudkitty-proc-0" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.204048 4806 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69ec7b50-f06b-4a12-8c24-8781116d0604-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"69ec7b50-f06b-4a12-8c24-8781116d0604\") " pod="openstack/cloudkitty-proc-0" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.208302 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69ec7b50-f06b-4a12-8c24-8781116d0604-config-data\") pod \"cloudkitty-proc-0\" (UID: \"69ec7b50-f06b-4a12-8c24-8781116d0604\") " pod="openstack/cloudkitty-proc-0" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.209762 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/69ec7b50-f06b-4a12-8c24-8781116d0604-certs\") pod \"cloudkitty-proc-0\" (UID: \"69ec7b50-f06b-4a12-8c24-8781116d0604\") " pod="openstack/cloudkitty-proc-0" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.211873 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/69ec7b50-f06b-4a12-8c24-8781116d0604-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"69ec7b50-f06b-4a12-8c24-8781116d0604\") " pod="openstack/cloudkitty-proc-0" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.211983 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69ec7b50-f06b-4a12-8c24-8781116d0604-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"69ec7b50-f06b-4a12-8c24-8781116d0604\") " pod="openstack/cloudkitty-proc-0" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.222593 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-api-0"] Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.234058 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69ec7b50-f06b-4a12-8c24-8781116d0604-scripts\") pod \"cloudkitty-proc-0\" (UID: \"69ec7b50-f06b-4a12-8c24-8781116d0604\") " pod="openstack/cloudkitty-proc-0" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.235823 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfnqr\" (UniqueName: \"kubernetes.io/projected/69ec7b50-f06b-4a12-8c24-8781116d0604-kube-api-access-kfnqr\") pod \"cloudkitty-proc-0\" (UID: \"69ec7b50-f06b-4a12-8c24-8781116d0604\") " pod="openstack/cloudkitty-proc-0" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.236996 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-api-0"] Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.239232 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-api-0" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.245280 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-api-config-data" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.246008 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-internal-svc" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.246750 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-public-svc" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.248957 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.305692 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e447777b-718e-4152-a9ac-9f6d8885345f-config-data\") pod \"cloudkitty-api-0\" (UID: \"e447777b-718e-4152-a9ac-9f6d8885345f\") " pod="openstack/cloudkitty-api-0" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.305736 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkb7q\" (UniqueName: \"kubernetes.io/projected/e447777b-718e-4152-a9ac-9f6d8885345f-kube-api-access-mkb7q\") pod \"cloudkitty-api-0\" (UID: \"e447777b-718e-4152-a9ac-9f6d8885345f\") " pod="openstack/cloudkitty-api-0" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.306560 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/e447777b-718e-4152-a9ac-9f6d8885345f-certs\") pod \"cloudkitty-api-0\" (UID: \"e447777b-718e-4152-a9ac-9f6d8885345f\") " pod="openstack/cloudkitty-api-0" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.306764 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e447777b-718e-4152-a9ac-9f6d8885345f-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"e447777b-718e-4152-a9ac-9f6d8885345f\") " pod="openstack/cloudkitty-api-0" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.306834 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e447777b-718e-4152-a9ac-9f6d8885345f-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"e447777b-718e-4152-a9ac-9f6d8885345f\") " pod="openstack/cloudkitty-api-0" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.306867 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e447777b-718e-4152-a9ac-9f6d8885345f-scripts\") pod \"cloudkitty-api-0\" (UID: \"e447777b-718e-4152-a9ac-9f6d8885345f\") " pod="openstack/cloudkitty-api-0" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.306976 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e447777b-718e-4152-a9ac-9f6d8885345f-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"e447777b-718e-4152-a9ac-9f6d8885345f\") " pod="openstack/cloudkitty-api-0" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.307016 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e447777b-718e-4152-a9ac-9f6d8885345f-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"e447777b-718e-4152-a9ac-9f6d8885345f\") " pod="openstack/cloudkitty-api-0" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.307132 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e447777b-718e-4152-a9ac-9f6d8885345f-logs\") pod \"cloudkitty-api-0\" (UID: \"e447777b-718e-4152-a9ac-9f6d8885345f\") " pod="openstack/cloudkitty-api-0" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.365742 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.409064 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e447777b-718e-4152-a9ac-9f6d8885345f-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"e447777b-718e-4152-a9ac-9f6d8885345f\") " pod="openstack/cloudkitty-api-0" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.409439 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e447777b-718e-4152-a9ac-9f6d8885345f-logs\") pod \"cloudkitty-api-0\" (UID: \"e447777b-718e-4152-a9ac-9f6d8885345f\") " pod="openstack/cloudkitty-api-0" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.409512 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e447777b-718e-4152-a9ac-9f6d8885345f-config-data\") pod \"cloudkitty-api-0\" (UID: \"e447777b-718e-4152-a9ac-9f6d8885345f\") " pod="openstack/cloudkitty-api-0" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.409532 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkb7q\" (UniqueName: \"kubernetes.io/projected/e447777b-718e-4152-a9ac-9f6d8885345f-kube-api-access-mkb7q\") pod \"cloudkitty-api-0\" (UID: \"e447777b-718e-4152-a9ac-9f6d8885345f\") " pod="openstack/cloudkitty-api-0" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.409562 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/e447777b-718e-4152-a9ac-9f6d8885345f-certs\") pod \"cloudkitty-api-0\" (UID: \"e447777b-718e-4152-a9ac-9f6d8885345f\") " pod="openstack/cloudkitty-api-0" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.409606 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e447777b-718e-4152-a9ac-9f6d8885345f-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"e447777b-718e-4152-a9ac-9f6d8885345f\") " pod="openstack/cloudkitty-api-0" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.409637 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e447777b-718e-4152-a9ac-9f6d8885345f-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"e447777b-718e-4152-a9ac-9f6d8885345f\") " pod="openstack/cloudkitty-api-0" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.409655 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e447777b-718e-4152-a9ac-9f6d8885345f-scripts\") pod 
\"cloudkitty-api-0\" (UID: \"e447777b-718e-4152-a9ac-9f6d8885345f\") " pod="openstack/cloudkitty-api-0" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.409774 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e447777b-718e-4152-a9ac-9f6d8885345f-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"e447777b-718e-4152-a9ac-9f6d8885345f\") " pod="openstack/cloudkitty-api-0" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.411388 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e447777b-718e-4152-a9ac-9f6d8885345f-logs\") pod \"cloudkitty-api-0\" (UID: \"e447777b-718e-4152-a9ac-9f6d8885345f\") " pod="openstack/cloudkitty-api-0" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.414175 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e447777b-718e-4152-a9ac-9f6d8885345f-config-data\") pod \"cloudkitty-api-0\" (UID: \"e447777b-718e-4152-a9ac-9f6d8885345f\") " pod="openstack/cloudkitty-api-0" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.415108 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e447777b-718e-4152-a9ac-9f6d8885345f-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"e447777b-718e-4152-a9ac-9f6d8885345f\") " pod="openstack/cloudkitty-api-0" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.415465 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e447777b-718e-4152-a9ac-9f6d8885345f-scripts\") pod \"cloudkitty-api-0\" (UID: \"e447777b-718e-4152-a9ac-9f6d8885345f\") " pod="openstack/cloudkitty-api-0" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.416145 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/e447777b-718e-4152-a9ac-9f6d8885345f-certs\") pod \"cloudkitty-api-0\" (UID: \"e447777b-718e-4152-a9ac-9f6d8885345f\") " pod="openstack/cloudkitty-api-0" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.421401 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e447777b-718e-4152-a9ac-9f6d8885345f-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"e447777b-718e-4152-a9ac-9f6d8885345f\") " pod="openstack/cloudkitty-api-0" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.422843 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e447777b-718e-4152-a9ac-9f6d8885345f-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"e447777b-718e-4152-a9ac-9f6d8885345f\") " pod="openstack/cloudkitty-api-0" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.428248 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e447777b-718e-4152-a9ac-9f6d8885345f-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"e447777b-718e-4152-a9ac-9f6d8885345f\") " pod="openstack/cloudkitty-api-0" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.441108 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkb7q\" (UniqueName: \"kubernetes.io/projected/e447777b-718e-4152-a9ac-9f6d8885345f-kube-api-access-mkb7q\") pod 
\"cloudkitty-api-0\" (UID: \"e447777b-718e-4152-a9ac-9f6d8885345f\") " pod="openstack/cloudkitty-api-0" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.573825 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Nov 25 15:20:35 crc kubenswrapper[4806]: I1125 15:20:35.875529 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Nov 25 15:20:35 crc kubenswrapper[4806]: W1125 15:20:35.887004 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69ec7b50_f06b_4a12_8c24_8781116d0604.slice/crio-40ced71ede71de0b31c735451c7ca26f3573eff90e31d764a409687ca727bc1e WatchSource:0}: Error finding container 40ced71ede71de0b31c735451c7ca26f3573eff90e31d764a409687ca727bc1e: Status 404 returned error can't find the container with id 40ced71ede71de0b31c735451c7ca26f3573eff90e31d764a409687ca727bc1e Nov 25 15:20:36 crc kubenswrapper[4806]: I1125 15:20:36.118070 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f3d1e2e-c63c-4c46-828b-189248646880" path="/var/lib/kubelet/pods/7f3d1e2e-c63c-4c46-828b-189248646880/volumes" Nov 25 15:20:36 crc kubenswrapper[4806]: I1125 15:20:36.118924 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b9283d4-b401-4efa-b2f0-d14c8b44cf21" path="/var/lib/kubelet/pods/9b9283d4-b401-4efa-b2f0-d14c8b44cf21/volumes" Nov 25 15:20:36 crc kubenswrapper[4806]: I1125 15:20:36.194019 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Nov 25 15:20:36 crc kubenswrapper[4806]: I1125 15:20:36.914071 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"e447777b-718e-4152-a9ac-9f6d8885345f","Type":"ContainerStarted","Data":"a4d4bcbe684de097e7b7ec64db8fac938bf894f49482b3857e6833967ff9c7d9"} Nov 25 15:20:36 crc kubenswrapper[4806]: I1125 15:20:36.916399 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"e447777b-718e-4152-a9ac-9f6d8885345f","Type":"ContainerStarted","Data":"cf9445ce58c7418d4be250abdfec038d9be12b143f7af4af821b1e146edca3ff"} Nov 25 15:20:36 crc kubenswrapper[4806]: I1125 15:20:36.917688 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"69ec7b50-f06b-4a12-8c24-8781116d0604","Type":"ContainerStarted","Data":"40ced71ede71de0b31c735451c7ca26f3573eff90e31d764a409687ca727bc1e"} Nov 25 15:20:37 crc kubenswrapper[4806]: I1125 15:20:37.087375 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-47fxr" Nov 25 15:20:37 crc kubenswrapper[4806]: I1125 15:20:37.087764 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-47fxr" Nov 25 15:20:37 crc kubenswrapper[4806]: I1125 15:20:37.137525 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-47fxr" Nov 25 15:20:37 crc kubenswrapper[4806]: I1125 15:20:37.929675 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"e447777b-718e-4152-a9ac-9f6d8885345f","Type":"ContainerStarted","Data":"881fd51ca68eaf24c940e7907cf0371aa4334f11af5aa5d444cb821464420cc6"} Nov 25 15:20:37 crc kubenswrapper[4806]: I1125 15:20:37.930118 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/cloudkitty-api-0" Nov 25 15:20:37 crc kubenswrapper[4806]: I1125 15:20:37.932244 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7vzjq" event={"ID":"fe294208-726c-429d-a144-74fd096f1a63","Type":"ContainerStarted","Data":"da8db0c3afe85e4320e3d5cad4d8333df7360c34e5a30a3ac246c0467d476761"} Nov 25 15:20:37 crc kubenswrapper[4806]: I1125 15:20:37.955182 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-api-0" podStartSLOduration=2.9551661510000002 podStartE2EDuration="2.955166151s" podCreationTimestamp="2025-11-25 15:20:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:20:37.947222693 +0000 UTC m=+1670.599365104" watchObservedRunningTime="2025-11-25 15:20:37.955166151 +0000 UTC m=+1670.607308562" Nov 25 15:20:37 crc kubenswrapper[4806]: I1125 15:20:37.976878 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7vzjq" podStartSLOduration=3.180226038 podStartE2EDuration="9.976857413s" podCreationTimestamp="2025-11-25 15:20:28 +0000 UTC" firstStartedPulling="2025-11-25 15:20:30.757925154 +0000 UTC m=+1663.410067565" lastFinishedPulling="2025-11-25 15:20:37.554556509 +0000 UTC m=+1670.206698940" observedRunningTime="2025-11-25 15:20:37.972782476 +0000 UTC m=+1670.624924917" watchObservedRunningTime="2025-11-25 15:20:37.976857413 +0000 UTC m=+1670.628999814" Nov 25 15:20:39 crc kubenswrapper[4806]: I1125 15:20:39.293483 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7vzjq" Nov 25 15:20:39 crc kubenswrapper[4806]: I1125 15:20:39.293784 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7vzjq" Nov 25 15:20:39 crc kubenswrapper[4806]: I1125 15:20:39.353691 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7vzjq" Nov 25 15:20:40 crc kubenswrapper[4806]: I1125 15:20:40.968165 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"69ec7b50-f06b-4a12-8c24-8781116d0604","Type":"ContainerStarted","Data":"2c3f818cb89fbd4876334f74925bcc88d278ae84ea568144344bf1647d9a6b84"} Nov 25 15:20:42 crc kubenswrapper[4806]: I1125 15:20:42.024127 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jm5z5"] Nov 25 15:20:42 crc kubenswrapper[4806]: I1125 15:20:42.025812 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jm5z5" Nov 25 15:20:42 crc kubenswrapper[4806]: I1125 15:20:42.028422 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 15:20:42 crc kubenswrapper[4806]: I1125 15:20:42.028432 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 15:20:42 crc kubenswrapper[4806]: I1125 15:20:42.028661 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r8q8k" Nov 25 15:20:42 crc kubenswrapper[4806]: I1125 15:20:42.030230 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 15:20:42 crc kubenswrapper[4806]: I1125 15:20:42.033997 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-proc-0" podStartSLOduration=4.343411229 podStartE2EDuration="8.033968694s" podCreationTimestamp="2025-11-25 15:20:34 +0000 UTC" firstStartedPulling="2025-11-25 15:20:35.891274667 +0000 UTC m=+1668.543417078" lastFinishedPulling="2025-11-25 15:20:39.581832132 +0000 UTC m=+1672.233974543" observedRunningTime="2025-11-25 15:20:42.010610904 +0000 UTC m=+1674.662753315" watchObservedRunningTime="2025-11-25 15:20:42.033968694 +0000 UTC m=+1674.686111105" Nov 25 15:20:42 crc kubenswrapper[4806]: I1125 15:20:42.061093 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jm5z5"] Nov 25 15:20:42 crc kubenswrapper[4806]: I1125 15:20:42.079949 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cd3c61a-f9b2-4746-ba1d-226aea23d908-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-jm5z5\" (UID: \"2cd3c61a-f9b2-4746-ba1d-226aea23d908\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jm5z5" Nov 25 15:20:42 crc kubenswrapper[4806]: I1125 15:20:42.080069 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cd3c61a-f9b2-4746-ba1d-226aea23d908-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-jm5z5\" (UID: \"2cd3c61a-f9b2-4746-ba1d-226aea23d908\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jm5z5" Nov 25 15:20:42 crc kubenswrapper[4806]: I1125 15:20:42.080159 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7pms\" (UniqueName: \"kubernetes.io/projected/2cd3c61a-f9b2-4746-ba1d-226aea23d908-kube-api-access-x7pms\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-jm5z5\" (UID: \"2cd3c61a-f9b2-4746-ba1d-226aea23d908\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jm5z5" Nov 25 15:20:42 crc kubenswrapper[4806]: I1125 15:20:42.080216 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2cd3c61a-f9b2-4746-ba1d-226aea23d908-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-jm5z5\" (UID: \"2cd3c61a-f9b2-4746-ba1d-226aea23d908\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jm5z5" Nov 25 15:20:42 crc kubenswrapper[4806]: I1125 15:20:42.182246 4806 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cd3c61a-f9b2-4746-ba1d-226aea23d908-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-jm5z5\" (UID: \"2cd3c61a-f9b2-4746-ba1d-226aea23d908\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jm5z5" Nov 25 15:20:42 crc kubenswrapper[4806]: I1125 15:20:42.182371 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7pms\" (UniqueName: \"kubernetes.io/projected/2cd3c61a-f9b2-4746-ba1d-226aea23d908-kube-api-access-x7pms\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-jm5z5\" (UID: \"2cd3c61a-f9b2-4746-ba1d-226aea23d908\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jm5z5" Nov 25 15:20:42 crc kubenswrapper[4806]: I1125 15:20:42.182422 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2cd3c61a-f9b2-4746-ba1d-226aea23d908-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-jm5z5\" (UID: \"2cd3c61a-f9b2-4746-ba1d-226aea23d908\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jm5z5" Nov 25 15:20:42 crc kubenswrapper[4806]: I1125 15:20:42.182652 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cd3c61a-f9b2-4746-ba1d-226aea23d908-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-jm5z5\" (UID: \"2cd3c61a-f9b2-4746-ba1d-226aea23d908\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jm5z5" Nov 25 15:20:42 crc kubenswrapper[4806]: I1125 15:20:42.188032 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2cd3c61a-f9b2-4746-ba1d-226aea23d908-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-jm5z5\" (UID: \"2cd3c61a-f9b2-4746-ba1d-226aea23d908\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jm5z5" Nov 25 15:20:42 crc kubenswrapper[4806]: I1125 15:20:42.188820 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cd3c61a-f9b2-4746-ba1d-226aea23d908-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-jm5z5\" (UID: \"2cd3c61a-f9b2-4746-ba1d-226aea23d908\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jm5z5" Nov 25 15:20:42 crc kubenswrapper[4806]: I1125 15:20:42.189509 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cd3c61a-f9b2-4746-ba1d-226aea23d908-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-jm5z5\" (UID: \"2cd3c61a-f9b2-4746-ba1d-226aea23d908\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jm5z5" Nov 25 15:20:42 crc kubenswrapper[4806]: I1125 15:20:42.201131 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7pms\" (UniqueName: \"kubernetes.io/projected/2cd3c61a-f9b2-4746-ba1d-226aea23d908-kube-api-access-x7pms\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-jm5z5\" (UID: \"2cd3c61a-f9b2-4746-ba1d-226aea23d908\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jm5z5" Nov 25 15:20:42 crc kubenswrapper[4806]: I1125 15:20:42.347951 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jm5z5" Nov 25 15:20:46 crc kubenswrapper[4806]: I1125 15:20:46.052258 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"94eec7e9-06e0-4096-8b0e-89a012fb3495","Type":"ContainerDied","Data":"58196a05be8837d4e8fe399279e73c003b28ae5bc75bad136a4e6e715a29e046"} Nov 25 15:20:46 crc kubenswrapper[4806]: I1125 15:20:46.052284 4806 generic.go:334] "Generic (PLEG): container finished" podID="94eec7e9-06e0-4096-8b0e-89a012fb3495" containerID="58196a05be8837d4e8fe399279e73c003b28ae5bc75bad136a4e6e715a29e046" exitCode=0 Nov 25 15:20:47 crc kubenswrapper[4806]: I1125 15:20:47.080084 4806 generic.go:334] "Generic (PLEG): container finished" podID="f89c7d3f-93e9-464e-bf10-a2df33402031" containerID="1f8119d4d549ce3fcdc8eb3603b7920f27ba62106dd5c2bbb05a7f36a495969f" exitCode=0 Nov 25 15:20:47 crc kubenswrapper[4806]: I1125 15:20:47.080190 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f89c7d3f-93e9-464e-bf10-a2df33402031","Type":"ContainerDied","Data":"1f8119d4d549ce3fcdc8eb3603b7920f27ba62106dd5c2bbb05a7f36a495969f"} Nov 25 15:20:47 crc kubenswrapper[4806]: I1125 15:20:47.167331 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-47fxr" Nov 25 15:20:47 crc kubenswrapper[4806]: I1125 15:20:47.234367 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-47fxr"] Nov 25 15:20:48 crc kubenswrapper[4806]: I1125 15:20:48.104787 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-47fxr" podUID="9f6180f7-ceb3-4de3-a203-23f6d36cf75d" containerName="registry-server" containerID="cri-o://588f38cedadf18ad860e4f37eb203f872fe48423f61f8af030c3826a3778b127" gracePeriod=2 Nov 25 15:20:48 crc kubenswrapper[4806]: I1125 15:20:48.105026 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"94eec7e9-06e0-4096-8b0e-89a012fb3495","Type":"ContainerStarted","Data":"4f9106fccd99c73cdb215dcfce1d670c48e105b9e0ba439abef957bb345698bd"} Nov 25 15:20:48 crc kubenswrapper[4806]: I1125 15:20:48.105248 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f89c7d3f-93e9-464e-bf10-a2df33402031","Type":"ContainerStarted","Data":"da9dcda945da269241e1fa27ad9f9ab425dc6b25b3dbb80f0768825e2a6acfd1"} Nov 25 15:20:48 crc kubenswrapper[4806]: I1125 15:20:48.105458 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 25 15:20:48 crc kubenswrapper[4806]: I1125 15:20:48.133304 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=45.133280087 podStartE2EDuration="45.133280087s" podCreationTimestamp="2025-11-25 15:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:20:48.130841927 +0000 UTC m=+1680.782984348" watchObservedRunningTime="2025-11-25 15:20:48.133280087 +0000 UTC m=+1680.785422498" Nov 25 15:20:48 crc kubenswrapper[4806]: I1125 15:20:48.860422 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jm5z5"] Nov 25 15:20:48 crc kubenswrapper[4806]: W1125 15:20:48.860967 4806 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2cd3c61a_f9b2_4746_ba1d_226aea23d908.slice/crio-2d88ad799518b3c3415036ca4ddb6315ddcc8826019c85299492c4b64758679a WatchSource:0}: Error finding container 2d88ad799518b3c3415036ca4ddb6315ddcc8826019c85299492c4b64758679a: Status 404 returned error can't find the container with id 2d88ad799518b3c3415036ca4ddb6315ddcc8826019c85299492c4b64758679a Nov 25 15:20:48 crc kubenswrapper[4806]: I1125 15:20:48.864455 4806 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 15:20:48 crc kubenswrapper[4806]: I1125 15:20:48.934550 4806 patch_prober.go:28] interesting pod/machine-config-daemon-kclf8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:20:48 crc kubenswrapper[4806]: I1125 15:20:48.934620 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:20:49 crc kubenswrapper[4806]: I1125 15:20:49.109503 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jm5z5" event={"ID":"2cd3c61a-f9b2-4746-ba1d-226aea23d908","Type":"ContainerStarted","Data":"2d88ad799518b3c3415036ca4ddb6315ddcc8826019c85299492c4b64758679a"} Nov 25 15:20:49 crc kubenswrapper[4806]: I1125 15:20:49.109860 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:20:49 crc kubenswrapper[4806]: I1125 15:20:49.344117 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7vzjq" Nov 25 15:20:49 crc kubenswrapper[4806]: I1125 15:20:49.364993 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=42.364957402 podStartE2EDuration="42.364957402s" podCreationTimestamp="2025-11-25 15:20:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:20:49.138794371 +0000 UTC m=+1681.790936782" watchObservedRunningTime="2025-11-25 15:20:49.364957402 +0000 UTC m=+1682.017099833" Nov 25 15:20:49 crc kubenswrapper[4806]: I1125 15:20:49.394334 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7vzjq"] Nov 25 15:20:50 crc kubenswrapper[4806]: I1125 15:20:50.123560 4806 generic.go:334] "Generic (PLEG): container finished" podID="9f6180f7-ceb3-4de3-a203-23f6d36cf75d" containerID="588f38cedadf18ad860e4f37eb203f872fe48423f61f8af030c3826a3778b127" exitCode=0 Nov 25 15:20:50 crc kubenswrapper[4806]: I1125 15:20:50.124283 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7vzjq" podUID="fe294208-726c-429d-a144-74fd096f1a63" containerName="registry-server" containerID="cri-o://da8db0c3afe85e4320e3d5cad4d8333df7360c34e5a30a3ac246c0467d476761" gracePeriod=2 Nov 25 15:20:50 crc kubenswrapper[4806]: I1125 15:20:50.124864 4806 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-47fxr" event={"ID":"9f6180f7-ceb3-4de3-a203-23f6d36cf75d","Type":"ContainerDied","Data":"588f38cedadf18ad860e4f37eb203f872fe48423f61f8af030c3826a3778b127"} Nov 25 15:20:50 crc kubenswrapper[4806]: I1125 15:20:50.320915 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-47fxr" Nov 25 15:20:50 crc kubenswrapper[4806]: I1125 15:20:50.400981 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f6180f7-ceb3-4de3-a203-23f6d36cf75d-catalog-content\") pod \"9f6180f7-ceb3-4de3-a203-23f6d36cf75d\" (UID: \"9f6180f7-ceb3-4de3-a203-23f6d36cf75d\") " Nov 25 15:20:50 crc kubenswrapper[4806]: I1125 15:20:50.401213 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f6180f7-ceb3-4de3-a203-23f6d36cf75d-utilities\") pod \"9f6180f7-ceb3-4de3-a203-23f6d36cf75d\" (UID: \"9f6180f7-ceb3-4de3-a203-23f6d36cf75d\") " Nov 25 15:20:50 crc kubenswrapper[4806]: I1125 15:20:50.401392 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4swtk\" (UniqueName: \"kubernetes.io/projected/9f6180f7-ceb3-4de3-a203-23f6d36cf75d-kube-api-access-4swtk\") pod \"9f6180f7-ceb3-4de3-a203-23f6d36cf75d\" (UID: \"9f6180f7-ceb3-4de3-a203-23f6d36cf75d\") " Nov 25 15:20:50 crc kubenswrapper[4806]: I1125 15:20:50.403535 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f6180f7-ceb3-4de3-a203-23f6d36cf75d-utilities" (OuterVolumeSpecName: "utilities") pod "9f6180f7-ceb3-4de3-a203-23f6d36cf75d" (UID: "9f6180f7-ceb3-4de3-a203-23f6d36cf75d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:20:50 crc kubenswrapper[4806]: I1125 15:20:50.422222 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f6180f7-ceb3-4de3-a203-23f6d36cf75d-kube-api-access-4swtk" (OuterVolumeSpecName: "kube-api-access-4swtk") pod "9f6180f7-ceb3-4de3-a203-23f6d36cf75d" (UID: "9f6180f7-ceb3-4de3-a203-23f6d36cf75d"). InnerVolumeSpecName "kube-api-access-4swtk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:20:50 crc kubenswrapper[4806]: I1125 15:20:50.505190 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f6180f7-ceb3-4de3-a203-23f6d36cf75d-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 15:20:50 crc kubenswrapper[4806]: I1125 15:20:50.505226 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4swtk\" (UniqueName: \"kubernetes.io/projected/9f6180f7-ceb3-4de3-a203-23f6d36cf75d-kube-api-access-4swtk\") on node \"crc\" DevicePath \"\"" Nov 25 15:20:51 crc kubenswrapper[4806]: I1125 15:20:51.137131 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47fxr" event={"ID":"9f6180f7-ceb3-4de3-a203-23f6d36cf75d","Type":"ContainerDied","Data":"3e3867f1c2f648a756ff47ca928d5a51b8e0d7603303151c5a6450993c4f8533"} Nov 25 15:20:51 crc kubenswrapper[4806]: I1125 15:20:51.137221 4806 scope.go:117] "RemoveContainer" containerID="588f38cedadf18ad860e4f37eb203f872fe48423f61f8af030c3826a3778b127" Nov 25 15:20:51 crc kubenswrapper[4806]: I1125 15:20:51.137147 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-47fxr" Nov 25 15:20:51 crc kubenswrapper[4806]: I1125 15:20:51.143494 4806 generic.go:334] "Generic (PLEG): container finished" podID="fe294208-726c-429d-a144-74fd096f1a63" containerID="da8db0c3afe85e4320e3d5cad4d8333df7360c34e5a30a3ac246c0467d476761" exitCode=0 Nov 25 15:20:51 crc kubenswrapper[4806]: I1125 15:20:51.143536 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7vzjq" event={"ID":"fe294208-726c-429d-a144-74fd096f1a63","Type":"ContainerDied","Data":"da8db0c3afe85e4320e3d5cad4d8333df7360c34e5a30a3ac246c0467d476761"} Nov 25 15:20:51 crc kubenswrapper[4806]: I1125 15:20:51.163553 4806 scope.go:117] "RemoveContainer" containerID="61b028df0e12751f727cd2f0a3d9532d59b7c4e21152e9192612c8428b641816" Nov 25 15:20:51 crc kubenswrapper[4806]: I1125 15:20:51.199991 4806 scope.go:117] "RemoveContainer" containerID="a145f771e312bdaf6f03bd6813f870469f624805ed2704e8802dde7cb6c091e0" Nov 25 15:20:52 crc kubenswrapper[4806]: I1125 15:20:52.220678 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f6180f7-ceb3-4de3-a203-23f6d36cf75d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9f6180f7-ceb3-4de3-a203-23f6d36cf75d" (UID: "9f6180f7-ceb3-4de3-a203-23f6d36cf75d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:20:52 crc kubenswrapper[4806]: I1125 15:20:52.246690 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f6180f7-ceb3-4de3-a203-23f6d36cf75d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 15:20:52 crc kubenswrapper[4806]: I1125 15:20:52.467542 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7vzjq" Nov 25 15:20:52 crc kubenswrapper[4806]: I1125 15:20:52.474710 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-47fxr"] Nov 25 15:20:52 crc kubenswrapper[4806]: I1125 15:20:52.488552 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-47fxr"] Nov 25 15:20:52 crc kubenswrapper[4806]: I1125 15:20:52.568416 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe294208-726c-429d-a144-74fd096f1a63-utilities\") pod \"fe294208-726c-429d-a144-74fd096f1a63\" (UID: \"fe294208-726c-429d-a144-74fd096f1a63\") " Nov 25 15:20:52 crc kubenswrapper[4806]: I1125 15:20:52.568491 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdw8g\" (UniqueName: \"kubernetes.io/projected/fe294208-726c-429d-a144-74fd096f1a63-kube-api-access-jdw8g\") pod \"fe294208-726c-429d-a144-74fd096f1a63\" (UID: \"fe294208-726c-429d-a144-74fd096f1a63\") " Nov 25 15:20:52 crc kubenswrapper[4806]: I1125 15:20:52.568839 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe294208-726c-429d-a144-74fd096f1a63-catalog-content\") pod \"fe294208-726c-429d-a144-74fd096f1a63\" (UID: \"fe294208-726c-429d-a144-74fd096f1a63\") " Nov 25 15:20:52 crc kubenswrapper[4806]: I1125 15:20:52.569579 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe294208-726c-429d-a144-74fd096f1a63-utilities" (OuterVolumeSpecName: "utilities") pod "fe294208-726c-429d-a144-74fd096f1a63" (UID: "fe294208-726c-429d-a144-74fd096f1a63"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:20:52 crc kubenswrapper[4806]: I1125 15:20:52.575718 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe294208-726c-429d-a144-74fd096f1a63-kube-api-access-jdw8g" (OuterVolumeSpecName: "kube-api-access-jdw8g") pod "fe294208-726c-429d-a144-74fd096f1a63" (UID: "fe294208-726c-429d-a144-74fd096f1a63"). InnerVolumeSpecName "kube-api-access-jdw8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:20:52 crc kubenswrapper[4806]: I1125 15:20:52.671513 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe294208-726c-429d-a144-74fd096f1a63-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 15:20:52 crc kubenswrapper[4806]: I1125 15:20:52.671763 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdw8g\" (UniqueName: \"kubernetes.io/projected/fe294208-726c-429d-a144-74fd096f1a63-kube-api-access-jdw8g\") on node \"crc\" DevicePath \"\"" Nov 25 15:20:53 crc kubenswrapper[4806]: I1125 15:20:53.185230 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7vzjq" event={"ID":"fe294208-726c-429d-a144-74fd096f1a63","Type":"ContainerDied","Data":"b955fc3f90914e8896aa401932eceb5215d4034c70023b8df27b6430524b436e"} Nov 25 15:20:53 crc kubenswrapper[4806]: I1125 15:20:53.185406 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7vzjq" Nov 25 15:20:53 crc kubenswrapper[4806]: I1125 15:20:53.185412 4806 scope.go:117] "RemoveContainer" containerID="da8db0c3afe85e4320e3d5cad4d8333df7360c34e5a30a3ac246c0467d476761" Nov 25 15:20:53 crc kubenswrapper[4806]: I1125 15:20:53.213894 4806 scope.go:117] "RemoveContainer" containerID="4b5dbda6d2cf3976511d3debe457f550fdffbe4767da155506011143b702c9c8" Nov 25 15:20:53 crc kubenswrapper[4806]: I1125 15:20:53.241185 4806 scope.go:117] "RemoveContainer" containerID="aebcd6405f04a3dd4553a0f3cf0bfef49cde024b4c1e6f459d255a20df5369d4" Nov 25 15:20:54 crc kubenswrapper[4806]: I1125 15:20:54.103187 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f6180f7-ceb3-4de3-a203-23f6d36cf75d" path="/var/lib/kubelet/pods/9f6180f7-ceb3-4de3-a203-23f6d36cf75d/volumes" Nov 25 15:20:54 crc kubenswrapper[4806]: I1125 15:20:54.349175 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe294208-726c-429d-a144-74fd096f1a63-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fe294208-726c-429d-a144-74fd096f1a63" (UID: "fe294208-726c-429d-a144-74fd096f1a63"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:20:54 crc kubenswrapper[4806]: I1125 15:20:54.411767 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe294208-726c-429d-a144-74fd096f1a63-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 15:20:54 crc kubenswrapper[4806]: I1125 15:20:54.423605 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7vzjq"] Nov 25 15:20:54 crc kubenswrapper[4806]: I1125 15:20:54.432814 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7vzjq"] Nov 25 15:20:56 crc kubenswrapper[4806]: I1125 15:20:56.101877 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe294208-726c-429d-a144-74fd096f1a63" path="/var/lib/kubelet/pods/fe294208-726c-429d-a144-74fd096f1a63/volumes" Nov 25 15:20:57 crc kubenswrapper[4806]: I1125 15:20:57.933536 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="f89c7d3f-93e9-464e-bf10-a2df33402031" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.233:5671: connect: connection refused" Nov 25 15:21:04 crc kubenswrapper[4806]: I1125 15:21:04.122239 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="94eec7e9-06e0-4096-8b0e-89a012fb3495" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.232:5671: connect: connection refused" Nov 25 15:21:07 crc kubenswrapper[4806]: E1125 15:21:07.386516 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest" Nov 25 15:21:07 crc kubenswrapper[4806]: E1125 15:21:07.387080 4806 kuberuntime_manager.go:1274] "Unhandled Error" err=< Nov 25 15:21:07 crc kubenswrapper[4806]: container &Container{Name:repo-setup-edpm-deployment-openstack-edpm-ipam,Image:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,Command:[],Args:[ansible-runner run /runner -p playbook.yaml -i 
repo-setup-edpm-deployment-openstack-edpm-ipam],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ANSIBLE_VERBOSITY,Value:2,ValueFrom:nil,},EnvVar{Name:RUNNER_PLAYBOOK,Value: Nov 25 15:21:07 crc kubenswrapper[4806]: - hosts: all Nov 25 15:21:07 crc kubenswrapper[4806]: strategy: linear Nov 25 15:21:07 crc kubenswrapper[4806]: tasks: Nov 25 15:21:07 crc kubenswrapper[4806]: - name: Enable podified-repos Nov 25 15:21:07 crc kubenswrapper[4806]: become: true Nov 25 15:21:07 crc kubenswrapper[4806]: ansible.builtin.shell: | Nov 25 15:21:07 crc kubenswrapper[4806]: set -euxo pipefail Nov 25 15:21:07 crc kubenswrapper[4806]: pushd /var/tmp Nov 25 15:21:07 crc kubenswrapper[4806]: curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz Nov 25 15:21:07 crc kubenswrapper[4806]: pushd repo-setup-main Nov 25 15:21:07 crc kubenswrapper[4806]: python3 -m venv ./venv Nov 25 15:21:07 crc kubenswrapper[4806]: PBR_VERSION=0.0.0 ./venv/bin/pip install ./ Nov 25 15:21:07 crc kubenswrapper[4806]: ./venv/bin/repo-setup current-podified -b antelope Nov 25 15:21:07 crc kubenswrapper[4806]: popd Nov 25 15:21:07 crc kubenswrapper[4806]: rm -rf repo-setup-main Nov 25 15:21:07 crc kubenswrapper[4806]: Nov 25 15:21:07 crc kubenswrapper[4806]: Nov 25 15:21:07 crc kubenswrapper[4806]: ,ValueFrom:nil,},EnvVar{Name:RUNNER_EXTRA_VARS,Value: Nov 25 15:21:07 crc kubenswrapper[4806]: edpm_override_hosts: openstack-edpm-ipam Nov 25 15:21:07 crc kubenswrapper[4806]: edpm_service_type: repo-setup Nov 25 15:21:07 crc kubenswrapper[4806]: Nov 25 15:21:07 crc kubenswrapper[4806]: Nov 25 15:21:07 crc kubenswrapper[4806]: ,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:repo-setup-combined-ca-bundle,ReadOnly:false,MountPath:/var/lib/openstack/cacerts/repo-setup,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/runner/env/ssh_key,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:inventory,ReadOnly:false,MountPath:/runner/inventory/hosts,SubPath:inventory,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x7pms,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:openstack-aee-default-env,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod repo-setup-edpm-deployment-openstack-edpm-ipam-jm5z5_openstack(2cd3c61a-f9b2-4746-ba1d-226aea23d908): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Nov 25 15:21:07 crc kubenswrapper[4806]: > logger="UnhandledError" Nov 25 15:21:07 crc 
Nov 25 15:21:07 crc kubenswrapper[4806]: E1125 15:21:07.388287 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jm5z5" podUID="2cd3c61a-f9b2-4746-ba1d-226aea23d908"
Nov 25 15:21:07 crc kubenswrapper[4806]: I1125 15:21:07.933020 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="f89c7d3f-93e9-464e-bf10-a2df33402031" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.233:5671: connect: connection refused"
Nov 25 15:21:08 crc kubenswrapper[4806]: E1125 15:21:08.366301 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest\\\"\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jm5z5" podUID="2cd3c61a-f9b2-4746-ba1d-226aea23d908"
Nov 25 15:21:12 crc kubenswrapper[4806]: I1125 15:21:12.535742 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-api-0"
Nov 25 15:21:14 crc kubenswrapper[4806]: I1125 15:21:14.120363 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="94eec7e9-06e0-4096-8b0e-89a012fb3495" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.232:5671: connect: connection refused"
Nov 25 15:21:15 crc kubenswrapper[4806]: I1125 15:21:15.526353 4806 scope.go:117] "RemoveContainer" containerID="d2e8d957dc50def02fcf69ce74a661d19b9438bf2106f3a93657490e54d7ca52"
Nov 25 15:21:17 crc kubenswrapper[4806]: I1125 15:21:17.932164 4806 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="f89c7d3f-93e9-464e-bf10-a2df33402031" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.233:5671: connect: connection refused"
Nov 25 15:21:18 crc kubenswrapper[4806]: I1125 15:21:18.937807 4806 patch_prober.go:28] interesting pod/machine-config-daemon-kclf8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 15:21:18 crc kubenswrapper[4806]: I1125 15:21:18.938412 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 15:21:18 crc kubenswrapper[4806]: I1125 15:21:18.938463 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kclf8"
Nov 25 15:21:18 crc kubenswrapper[4806]: I1125 15:21:18.939199 4806 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ecc3d828107059f876e2f284e3f9b578d143aeaad7a17d069f81cf6860e7fd12"} pod="openshift-machine-config-operator/machine-config-daemon-kclf8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 25 15:21:18 crc kubenswrapper[4806]: I1125 15:21:18.939253 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" containerID="cri-o://ecc3d828107059f876e2f284e3f9b578d143aeaad7a17d069f81cf6860e7fd12" gracePeriod=600
Nov 25 15:21:20 crc kubenswrapper[4806]: I1125 15:21:20.505850 4806 generic.go:334] "Generic (PLEG): container finished" podID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerID="ecc3d828107059f876e2f284e3f9b578d143aeaad7a17d069f81cf6860e7fd12" exitCode=0
Nov 25 15:21:20 crc kubenswrapper[4806]: I1125 15:21:20.505922 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" event={"ID":"39baff20-1e9a-48b1-8872-155c5ad5931d","Type":"ContainerDied","Data":"ecc3d828107059f876e2f284e3f9b578d143aeaad7a17d069f81cf6860e7fd12"}
Nov 25 15:21:20 crc kubenswrapper[4806]: I1125 15:21:20.506461 4806 scope.go:117] "RemoveContainer" containerID="e869f8a9a3bee9d5f6a66c81937d296e815282493a93356c044af918f3b7bdf1"
Nov 25 15:21:20 crc kubenswrapper[4806]: E1125 15:21:20.695473 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d"
Nov 25 15:21:21 crc kubenswrapper[4806]: I1125 15:21:21.769822 4806 scope.go:117] "RemoveContainer" containerID="ecc3d828107059f876e2f284e3f9b578d143aeaad7a17d069f81cf6860e7fd12"
Nov 25 15:21:21 crc kubenswrapper[4806]: E1125 15:21:21.770236 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d"
Nov 25 15:21:22 crc kubenswrapper[4806]: I1125 15:21:22.323220 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 25 15:21:22 crc kubenswrapper[4806]: I1125 15:21:22.782052 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jm5z5" event={"ID":"2cd3c61a-f9b2-4746-ba1d-226aea23d908","Type":"ContainerStarted","Data":"692689fa3ca529532d73215be5076274aa3306a6882db740122e9e35fadbc4bd"}
Nov 25 15:21:22 crc kubenswrapper[4806]: I1125 15:21:22.818750 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jm5z5" podStartSLOduration=8.361907222 podStartE2EDuration="41.818731368s" podCreationTimestamp="2025-11-25 15:20:41 +0000 UTC" firstStartedPulling="2025-11-25 15:20:48.86407221 +0000 UTC m=+1681.516214621" lastFinishedPulling="2025-11-25 15:21:22.320896356 +0000 UTC m=+1714.973038767" observedRunningTime="2025-11-25 15:21:22.804935687 +0000 UTC m=+1715.457078108" watchObservedRunningTime="2025-11-25 15:21:22.818731368 +0000 UTC m=+1715.470873779"
Nov 25 15:21:24 crc kubenswrapper[4806]: I1125 15:21:24.122141 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Nov 25 15:21:27 crc kubenswrapper[4806]: I1125 15:21:27.935576 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:21:34 crc kubenswrapper[4806]: I1125 15:21:34.909874 4806 generic.go:334] "Generic (PLEG): container finished" podID="2cd3c61a-f9b2-4746-ba1d-226aea23d908" containerID="692689fa3ca529532d73215be5076274aa3306a6882db740122e9e35fadbc4bd" exitCode=0
Nov 25 15:21:34 crc kubenswrapper[4806]: I1125 15:21:34.909966 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jm5z5" event={"ID":"2cd3c61a-f9b2-4746-ba1d-226aea23d908","Type":"ContainerDied","Data":"692689fa3ca529532d73215be5076274aa3306a6882db740122e9e35fadbc4bd"}
Nov 25 15:21:35 crc kubenswrapper[4806]: I1125 15:21:35.089023 4806 scope.go:117] "RemoveContainer" containerID="ecc3d828107059f876e2f284e3f9b578d143aeaad7a17d069f81cf6860e7fd12"
Nov 25 15:21:35 crc kubenswrapper[4806]: E1125 15:21:35.089283 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d"
Nov 25 15:21:36 crc kubenswrapper[4806]: I1125 15:21:36.725165 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jm5z5"
Nov 25 15:21:36 crc kubenswrapper[4806]: I1125 15:21:36.911409 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cd3c61a-f9b2-4746-ba1d-226aea23d908-inventory\") pod \"2cd3c61a-f9b2-4746-ba1d-226aea23d908\" (UID: \"2cd3c61a-f9b2-4746-ba1d-226aea23d908\") "
Nov 25 15:21:36 crc kubenswrapper[4806]: I1125 15:21:36.911486 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7pms\" (UniqueName: \"kubernetes.io/projected/2cd3c61a-f9b2-4746-ba1d-226aea23d908-kube-api-access-x7pms\") pod \"2cd3c61a-f9b2-4746-ba1d-226aea23d908\" (UID: \"2cd3c61a-f9b2-4746-ba1d-226aea23d908\") "
Nov 25 15:21:36 crc kubenswrapper[4806]: I1125 15:21:36.911638 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cd3c61a-f9b2-4746-ba1d-226aea23d908-repo-setup-combined-ca-bundle\") pod \"2cd3c61a-f9b2-4746-ba1d-226aea23d908\" (UID: \"2cd3c61a-f9b2-4746-ba1d-226aea23d908\") "
Nov 25 15:21:36 crc kubenswrapper[4806]: I1125 15:21:36.911757 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2cd3c61a-f9b2-4746-ba1d-226aea23d908-ssh-key\") pod \"2cd3c61a-f9b2-4746-ba1d-226aea23d908\" (UID: \"2cd3c61a-f9b2-4746-ba1d-226aea23d908\") "
Nov 25 15:21:36 crc kubenswrapper[4806]: I1125 15:21:36.917728 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cd3c61a-f9b2-4746-ba1d-226aea23d908-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "2cd3c61a-f9b2-4746-ba1d-226aea23d908" (UID: "2cd3c61a-f9b2-4746-ba1d-226aea23d908"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:21:36 crc kubenswrapper[4806]: I1125 15:21:36.918386 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cd3c61a-f9b2-4746-ba1d-226aea23d908-kube-api-access-x7pms" (OuterVolumeSpecName: "kube-api-access-x7pms") pod "2cd3c61a-f9b2-4746-ba1d-226aea23d908" (UID: "2cd3c61a-f9b2-4746-ba1d-226aea23d908"). InnerVolumeSpecName "kube-api-access-x7pms". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:21:36 crc kubenswrapper[4806]: I1125 15:21:36.931510 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jm5z5" event={"ID":"2cd3c61a-f9b2-4746-ba1d-226aea23d908","Type":"ContainerDied","Data":"2d88ad799518b3c3415036ca4ddb6315ddcc8826019c85299492c4b64758679a"}
Nov 25 15:21:36 crc kubenswrapper[4806]: I1125 15:21:36.931553 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d88ad799518b3c3415036ca4ddb6315ddcc8826019c85299492c4b64758679a"
Nov 25 15:21:36 crc kubenswrapper[4806]: I1125 15:21:36.931605 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-jm5z5"
Nov 25 15:21:36 crc kubenswrapper[4806]: I1125 15:21:36.969607 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cd3c61a-f9b2-4746-ba1d-226aea23d908-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2cd3c61a-f9b2-4746-ba1d-226aea23d908" (UID: "2cd3c61a-f9b2-4746-ba1d-226aea23d908"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:21:36 crc kubenswrapper[4806]: I1125 15:21:36.972044 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cd3c61a-f9b2-4746-ba1d-226aea23d908-inventory" (OuterVolumeSpecName: "inventory") pod "2cd3c61a-f9b2-4746-ba1d-226aea23d908" (UID: "2cd3c61a-f9b2-4746-ba1d-226aea23d908"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:21:37 crc kubenswrapper[4806]: I1125 15:21:37.014106 4806 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2cd3c61a-f9b2-4746-ba1d-226aea23d908-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 15:21:37 crc kubenswrapper[4806]: I1125 15:21:37.014183 4806 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cd3c61a-f9b2-4746-ba1d-226aea23d908-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 15:21:37 crc kubenswrapper[4806]: I1125 15:21:37.014197 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7pms\" (UniqueName: \"kubernetes.io/projected/2cd3c61a-f9b2-4746-ba1d-226aea23d908-kube-api-access-x7pms\") on node \"crc\" DevicePath \"\"" Nov 25 15:21:37 crc kubenswrapper[4806]: I1125 15:21:37.014219 4806 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cd3c61a-f9b2-4746-ba1d-226aea23d908-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:21:37 crc kubenswrapper[4806]: I1125 15:21:37.016771 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-5hk27"] Nov 25 15:21:37 crc kubenswrapper[4806]: E1125 15:21:37.017226 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe294208-726c-429d-a144-74fd096f1a63" containerName="registry-server" Nov 25 15:21:37 crc kubenswrapper[4806]: I1125 15:21:37.017248 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe294208-726c-429d-a144-74fd096f1a63" containerName="registry-server" Nov 25 15:21:37 crc kubenswrapper[4806]: E1125 15:21:37.017261 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f6180f7-ceb3-4de3-a203-23f6d36cf75d" containerName="extract-utilities" Nov 25 15:21:37 crc kubenswrapper[4806]: I1125 15:21:37.017267 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f6180f7-ceb3-4de3-a203-23f6d36cf75d" containerName="extract-utilities" Nov 25 15:21:37 crc kubenswrapper[4806]: E1125 15:21:37.017284 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f6180f7-ceb3-4de3-a203-23f6d36cf75d" containerName="registry-server" Nov 25 15:21:37 crc kubenswrapper[4806]: I1125 15:21:37.017291 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f6180f7-ceb3-4de3-a203-23f6d36cf75d" containerName="registry-server" Nov 25 15:21:37 crc kubenswrapper[4806]: E1125 15:21:37.017301 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe294208-726c-429d-a144-74fd096f1a63" containerName="extract-content" Nov 25 15:21:37 crc kubenswrapper[4806]: I1125 15:21:37.017307 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe294208-726c-429d-a144-74fd096f1a63" containerName="extract-content" Nov 25 15:21:37 crc kubenswrapper[4806]: E1125 15:21:37.017338 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe294208-726c-429d-a144-74fd096f1a63" containerName="extract-utilities" Nov 25 15:21:37 crc kubenswrapper[4806]: I1125 15:21:37.017347 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe294208-726c-429d-a144-74fd096f1a63" containerName="extract-utilities" Nov 25 15:21:37 crc kubenswrapper[4806]: E1125 15:21:37.017362 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f6180f7-ceb3-4de3-a203-23f6d36cf75d" containerName="extract-content" Nov 25 15:21:37 crc 
kubenswrapper[4806]: I1125 15:21:37.017368 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f6180f7-ceb3-4de3-a203-23f6d36cf75d" containerName="extract-content" Nov 25 15:21:37 crc kubenswrapper[4806]: E1125 15:21:37.017382 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cd3c61a-f9b2-4746-ba1d-226aea23d908" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 25 15:21:37 crc kubenswrapper[4806]: I1125 15:21:37.017389 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cd3c61a-f9b2-4746-ba1d-226aea23d908" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 25 15:21:37 crc kubenswrapper[4806]: I1125 15:21:37.017581 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cd3c61a-f9b2-4746-ba1d-226aea23d908" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 25 15:21:37 crc kubenswrapper[4806]: I1125 15:21:37.017606 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f6180f7-ceb3-4de3-a203-23f6d36cf75d" containerName="registry-server" Nov 25 15:21:37 crc kubenswrapper[4806]: I1125 15:21:37.017618 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe294208-726c-429d-a144-74fd096f1a63" containerName="registry-server" Nov 25 15:21:37 crc kubenswrapper[4806]: I1125 15:21:37.018421 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5hk27" Nov 25 15:21:37 crc kubenswrapper[4806]: I1125 15:21:37.027941 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-5hk27"] Nov 25 15:21:37 crc kubenswrapper[4806]: I1125 15:21:37.218067 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpbcf\" (UniqueName: \"kubernetes.io/projected/4a338892-2bb8-41bf-aae0-d726d31e76b3-kube-api-access-jpbcf\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5hk27\" (UID: \"4a338892-2bb8-41bf-aae0-d726d31e76b3\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5hk27" Nov 25 15:21:37 crc kubenswrapper[4806]: I1125 15:21:37.218706 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a338892-2bb8-41bf-aae0-d726d31e76b3-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5hk27\" (UID: \"4a338892-2bb8-41bf-aae0-d726d31e76b3\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5hk27" Nov 25 15:21:37 crc kubenswrapper[4806]: I1125 15:21:37.218813 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4a338892-2bb8-41bf-aae0-d726d31e76b3-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5hk27\" (UID: \"4a338892-2bb8-41bf-aae0-d726d31e76b3\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5hk27" Nov 25 15:21:37 crc kubenswrapper[4806]: I1125 15:21:37.320914 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a338892-2bb8-41bf-aae0-d726d31e76b3-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5hk27\" (UID: \"4a338892-2bb8-41bf-aae0-d726d31e76b3\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5hk27" Nov 25 15:21:37 crc kubenswrapper[4806]: I1125 15:21:37.321012 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key\" (UniqueName: \"kubernetes.io/secret/4a338892-2bb8-41bf-aae0-d726d31e76b3-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5hk27\" (UID: \"4a338892-2bb8-41bf-aae0-d726d31e76b3\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5hk27" Nov 25 15:21:37 crc kubenswrapper[4806]: I1125 15:21:37.321106 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpbcf\" (UniqueName: \"kubernetes.io/projected/4a338892-2bb8-41bf-aae0-d726d31e76b3-kube-api-access-jpbcf\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5hk27\" (UID: \"4a338892-2bb8-41bf-aae0-d726d31e76b3\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5hk27" Nov 25 15:21:37 crc kubenswrapper[4806]: I1125 15:21:37.326057 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4a338892-2bb8-41bf-aae0-d726d31e76b3-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5hk27\" (UID: \"4a338892-2bb8-41bf-aae0-d726d31e76b3\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5hk27" Nov 25 15:21:37 crc kubenswrapper[4806]: I1125 15:21:37.326128 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a338892-2bb8-41bf-aae0-d726d31e76b3-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5hk27\" (UID: \"4a338892-2bb8-41bf-aae0-d726d31e76b3\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5hk27" Nov 25 15:21:37 crc kubenswrapper[4806]: I1125 15:21:37.338891 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpbcf\" (UniqueName: \"kubernetes.io/projected/4a338892-2bb8-41bf-aae0-d726d31e76b3-kube-api-access-jpbcf\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-5hk27\" (UID: \"4a338892-2bb8-41bf-aae0-d726d31e76b3\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5hk27" Nov 25 15:21:37 crc kubenswrapper[4806]: I1125 15:21:37.371795 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5hk27" Nov 25 15:21:37 crc kubenswrapper[4806]: I1125 15:21:37.957624 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-5hk27"] Nov 25 15:21:38 crc kubenswrapper[4806]: I1125 15:21:38.953863 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5hk27" event={"ID":"4a338892-2bb8-41bf-aae0-d726d31e76b3","Type":"ContainerStarted","Data":"e08bd2d103c962c78644b9ee6cfda8a68b1f4d0a65798c2142704f0265e68baf"} Nov 25 15:21:39 crc kubenswrapper[4806]: I1125 15:21:39.966175 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5hk27" event={"ID":"4a338892-2bb8-41bf-aae0-d726d31e76b3","Type":"ContainerStarted","Data":"01b32f038125341713133fde15679eb7608171cb9eef20a8fc319b51f4d1cfe7"} Nov 25 15:21:39 crc kubenswrapper[4806]: I1125 15:21:39.988991 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5hk27" podStartSLOduration=2.976334352 podStartE2EDuration="3.988972425s" podCreationTimestamp="2025-11-25 15:21:36 +0000 UTC" firstStartedPulling="2025-11-25 15:21:37.98346204 +0000 UTC m=+1730.635604451" lastFinishedPulling="2025-11-25 15:21:38.996100113 +0000 UTC m=+1731.648242524" observedRunningTime="2025-11-25 15:21:39.978091691 +0000 UTC m=+1732.630234102" watchObservedRunningTime="2025-11-25 15:21:39.988972425 +0000 UTC m=+1732.641114836" Nov 25 15:21:42 crc kubenswrapper[4806]: I1125 15:21:42.997494 4806 generic.go:334] "Generic (PLEG): container finished" podID="4a338892-2bb8-41bf-aae0-d726d31e76b3" containerID="01b32f038125341713133fde15679eb7608171cb9eef20a8fc319b51f4d1cfe7" exitCode=0 Nov 25 15:21:42 crc kubenswrapper[4806]: I1125 15:21:42.997756 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5hk27" event={"ID":"4a338892-2bb8-41bf-aae0-d726d31e76b3","Type":"ContainerDied","Data":"01b32f038125341713133fde15679eb7608171cb9eef20a8fc319b51f4d1cfe7"} Nov 25 15:21:44 crc kubenswrapper[4806]: I1125 15:21:44.556235 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5hk27" Nov 25 15:21:44 crc kubenswrapper[4806]: I1125 15:21:44.675106 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4a338892-2bb8-41bf-aae0-d726d31e76b3-ssh-key\") pod \"4a338892-2bb8-41bf-aae0-d726d31e76b3\" (UID: \"4a338892-2bb8-41bf-aae0-d726d31e76b3\") " Nov 25 15:21:44 crc kubenswrapper[4806]: I1125 15:21:44.675463 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpbcf\" (UniqueName: \"kubernetes.io/projected/4a338892-2bb8-41bf-aae0-d726d31e76b3-kube-api-access-jpbcf\") pod \"4a338892-2bb8-41bf-aae0-d726d31e76b3\" (UID: \"4a338892-2bb8-41bf-aae0-d726d31e76b3\") " Nov 25 15:21:44 crc kubenswrapper[4806]: I1125 15:21:44.675581 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a338892-2bb8-41bf-aae0-d726d31e76b3-inventory\") pod \"4a338892-2bb8-41bf-aae0-d726d31e76b3\" (UID: \"4a338892-2bb8-41bf-aae0-d726d31e76b3\") " Nov 25 15:21:44 crc kubenswrapper[4806]: I1125 15:21:44.681910 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a338892-2bb8-41bf-aae0-d726d31e76b3-kube-api-access-jpbcf" (OuterVolumeSpecName: "kube-api-access-jpbcf") pod "4a338892-2bb8-41bf-aae0-d726d31e76b3" (UID: "4a338892-2bb8-41bf-aae0-d726d31e76b3"). InnerVolumeSpecName "kube-api-access-jpbcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:21:44 crc kubenswrapper[4806]: I1125 15:21:44.713442 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a338892-2bb8-41bf-aae0-d726d31e76b3-inventory" (OuterVolumeSpecName: "inventory") pod "4a338892-2bb8-41bf-aae0-d726d31e76b3" (UID: "4a338892-2bb8-41bf-aae0-d726d31e76b3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:21:44 crc kubenswrapper[4806]: I1125 15:21:44.720161 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a338892-2bb8-41bf-aae0-d726d31e76b3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4a338892-2bb8-41bf-aae0-d726d31e76b3" (UID: "4a338892-2bb8-41bf-aae0-d726d31e76b3"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:21:44 crc kubenswrapper[4806]: I1125 15:21:44.778657 4806 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4a338892-2bb8-41bf-aae0-d726d31e76b3-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 15:21:44 crc kubenswrapper[4806]: I1125 15:21:44.778898 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpbcf\" (UniqueName: \"kubernetes.io/projected/4a338892-2bb8-41bf-aae0-d726d31e76b3-kube-api-access-jpbcf\") on node \"crc\" DevicePath \"\"" Nov 25 15:21:44 crc kubenswrapper[4806]: I1125 15:21:44.778908 4806 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4a338892-2bb8-41bf-aae0-d726d31e76b3-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 15:21:45 crc kubenswrapper[4806]: I1125 15:21:45.020678 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5hk27" event={"ID":"4a338892-2bb8-41bf-aae0-d726d31e76b3","Type":"ContainerDied","Data":"e08bd2d103c962c78644b9ee6cfda8a68b1f4d0a65798c2142704f0265e68baf"} Nov 25 15:21:45 crc kubenswrapper[4806]: I1125 15:21:45.020736 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e08bd2d103c962c78644b9ee6cfda8a68b1f4d0a65798c2142704f0265e68baf" Nov 25 15:21:45 crc kubenswrapper[4806]: I1125 15:21:45.020752 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-5hk27" Nov 25 15:21:45 crc kubenswrapper[4806]: I1125 15:21:45.089748 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kn8kt"] Nov 25 15:21:45 crc kubenswrapper[4806]: E1125 15:21:45.090299 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a338892-2bb8-41bf-aae0-d726d31e76b3" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 25 15:21:45 crc kubenswrapper[4806]: I1125 15:21:45.090339 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a338892-2bb8-41bf-aae0-d726d31e76b3" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 25 15:21:45 crc kubenswrapper[4806]: I1125 15:21:45.090621 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a338892-2bb8-41bf-aae0-d726d31e76b3" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 25 15:21:45 crc kubenswrapper[4806]: I1125 15:21:45.092778 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kn8kt" Nov 25 15:21:45 crc kubenswrapper[4806]: I1125 15:21:45.095133 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 15:21:45 crc kubenswrapper[4806]: I1125 15:21:45.095481 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r8q8k" Nov 25 15:21:45 crc kubenswrapper[4806]: I1125 15:21:45.095679 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 15:21:45 crc kubenswrapper[4806]: I1125 15:21:45.099375 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 15:21:45 crc kubenswrapper[4806]: I1125 15:21:45.103512 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kn8kt"] Nov 25 15:21:45 crc kubenswrapper[4806]: I1125 15:21:45.187443 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e02aa69-d4ed-4a30-8c3f-2fe2021298d1-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kn8kt\" (UID: \"1e02aa69-d4ed-4a30-8c3f-2fe2021298d1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kn8kt" Nov 25 15:21:45 crc kubenswrapper[4806]: I1125 15:21:45.187693 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e02aa69-d4ed-4a30-8c3f-2fe2021298d1-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kn8kt\" (UID: \"1e02aa69-d4ed-4a30-8c3f-2fe2021298d1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kn8kt" Nov 25 15:21:45 crc kubenswrapper[4806]: I1125 15:21:45.187763 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl4hm\" (UniqueName: \"kubernetes.io/projected/1e02aa69-d4ed-4a30-8c3f-2fe2021298d1-kube-api-access-rl4hm\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kn8kt\" (UID: \"1e02aa69-d4ed-4a30-8c3f-2fe2021298d1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kn8kt" Nov 25 15:21:45 crc kubenswrapper[4806]: I1125 15:21:45.188132 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1e02aa69-d4ed-4a30-8c3f-2fe2021298d1-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kn8kt\" (UID: \"1e02aa69-d4ed-4a30-8c3f-2fe2021298d1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kn8kt" Nov 25 15:21:45 crc kubenswrapper[4806]: I1125 15:21:45.290260 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e02aa69-d4ed-4a30-8c3f-2fe2021298d1-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kn8kt\" (UID: \"1e02aa69-d4ed-4a30-8c3f-2fe2021298d1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kn8kt" Nov 25 15:21:45 crc kubenswrapper[4806]: I1125 15:21:45.290697 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e02aa69-d4ed-4a30-8c3f-2fe2021298d1-bootstrap-combined-ca-bundle\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-kn8kt\" (UID: \"1e02aa69-d4ed-4a30-8c3f-2fe2021298d1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kn8kt" Nov 25 15:21:45 crc kubenswrapper[4806]: I1125 15:21:45.290814 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl4hm\" (UniqueName: \"kubernetes.io/projected/1e02aa69-d4ed-4a30-8c3f-2fe2021298d1-kube-api-access-rl4hm\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kn8kt\" (UID: \"1e02aa69-d4ed-4a30-8c3f-2fe2021298d1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kn8kt" Nov 25 15:21:45 crc kubenswrapper[4806]: I1125 15:21:45.290974 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1e02aa69-d4ed-4a30-8c3f-2fe2021298d1-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kn8kt\" (UID: \"1e02aa69-d4ed-4a30-8c3f-2fe2021298d1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kn8kt" Nov 25 15:21:45 crc kubenswrapper[4806]: I1125 15:21:45.294910 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1e02aa69-d4ed-4a30-8c3f-2fe2021298d1-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kn8kt\" (UID: \"1e02aa69-d4ed-4a30-8c3f-2fe2021298d1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kn8kt" Nov 25 15:21:45 crc kubenswrapper[4806]: I1125 15:21:45.295717 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e02aa69-d4ed-4a30-8c3f-2fe2021298d1-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kn8kt\" (UID: \"1e02aa69-d4ed-4a30-8c3f-2fe2021298d1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kn8kt" Nov 25 15:21:45 crc kubenswrapper[4806]: I1125 15:21:45.302833 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e02aa69-d4ed-4a30-8c3f-2fe2021298d1-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kn8kt\" (UID: \"1e02aa69-d4ed-4a30-8c3f-2fe2021298d1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kn8kt" Nov 25 15:21:45 crc kubenswrapper[4806]: I1125 15:21:45.312010 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl4hm\" (UniqueName: \"kubernetes.io/projected/1e02aa69-d4ed-4a30-8c3f-2fe2021298d1-kube-api-access-rl4hm\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-kn8kt\" (UID: \"1e02aa69-d4ed-4a30-8c3f-2fe2021298d1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kn8kt" Nov 25 15:21:45 crc kubenswrapper[4806]: I1125 15:21:45.416352 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kn8kt" Nov 25 15:21:46 crc kubenswrapper[4806]: I1125 15:21:46.005837 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kn8kt"] Nov 25 15:21:46 crc kubenswrapper[4806]: I1125 15:21:46.034177 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kn8kt" event={"ID":"1e02aa69-d4ed-4a30-8c3f-2fe2021298d1","Type":"ContainerStarted","Data":"3ba8ba5f046612c940b04cddef59044bf5fa5474fe439a8cb3bd23ec82a9cb6b"} Nov 25 15:21:48 crc kubenswrapper[4806]: I1125 15:21:48.081290 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kn8kt" event={"ID":"1e02aa69-d4ed-4a30-8c3f-2fe2021298d1","Type":"ContainerStarted","Data":"559e2405756b27282dc8648955b2b0153265104ea4b6207f2ed63ec7ad888c6a"} Nov 25 15:21:48 crc kubenswrapper[4806]: I1125 15:21:48.121644 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kn8kt" podStartSLOduration=2.067295246 podStartE2EDuration="3.121623333s" podCreationTimestamp="2025-11-25 15:21:45 +0000 UTC" firstStartedPulling="2025-11-25 15:21:46.016084514 +0000 UTC m=+1738.668226925" lastFinishedPulling="2025-11-25 15:21:47.070412601 +0000 UTC m=+1739.722555012" observedRunningTime="2025-11-25 15:21:48.10880634 +0000 UTC m=+1740.760948751" watchObservedRunningTime="2025-11-25 15:21:48.121623333 +0000 UTC m=+1740.773765744" Nov 25 15:21:50 crc kubenswrapper[4806]: I1125 15:21:50.091082 4806 scope.go:117] "RemoveContainer" containerID="ecc3d828107059f876e2f284e3f9b578d143aeaad7a17d069f81cf6860e7fd12" Nov 25 15:21:50 crc kubenswrapper[4806]: E1125 15:21:50.091999 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:22:05 crc kubenswrapper[4806]: I1125 15:22:05.089626 4806 scope.go:117] "RemoveContainer" containerID="ecc3d828107059f876e2f284e3f9b578d143aeaad7a17d069f81cf6860e7fd12" Nov 25 15:22:05 crc kubenswrapper[4806]: E1125 15:22:05.090378 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:22:15 crc kubenswrapper[4806]: I1125 15:22:15.800936 4806 scope.go:117] "RemoveContainer" containerID="5398fc780dd3f6e0342d1fa9cf2d3a259707ea0309bf1888b0e68c8e77508657" Nov 25 15:22:20 crc kubenswrapper[4806]: I1125 15:22:20.089494 4806 scope.go:117] "RemoveContainer" containerID="ecc3d828107059f876e2f284e3f9b578d143aeaad7a17d069f81cf6860e7fd12" Nov 25 15:22:20 crc kubenswrapper[4806]: E1125 15:22:20.090297 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:22:33 crc kubenswrapper[4806]: I1125 15:22:33.089817 4806 scope.go:117] "RemoveContainer" containerID="ecc3d828107059f876e2f284e3f9b578d143aeaad7a17d069f81cf6860e7fd12" Nov 25 15:22:33 crc kubenswrapper[4806]: E1125 15:22:33.090534 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:22:45 crc kubenswrapper[4806]: I1125 15:22:45.089888 4806 scope.go:117] "RemoveContainer" containerID="ecc3d828107059f876e2f284e3f9b578d143aeaad7a17d069f81cf6860e7fd12" Nov 25 15:22:45 crc kubenswrapper[4806]: E1125 15:22:45.090652 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:22:56 crc kubenswrapper[4806]: I1125 15:22:56.089354 4806 scope.go:117] "RemoveContainer" containerID="ecc3d828107059f876e2f284e3f9b578d143aeaad7a17d069f81cf6860e7fd12" Nov 25 15:22:56 crc kubenswrapper[4806]: E1125 15:22:56.090263 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:23:09 crc kubenswrapper[4806]: I1125 15:23:09.090126 4806 scope.go:117] "RemoveContainer" containerID="ecc3d828107059f876e2f284e3f9b578d143aeaad7a17d069f81cf6860e7fd12" Nov 25 15:23:09 crc kubenswrapper[4806]: E1125 15:23:09.092136 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:23:15 crc kubenswrapper[4806]: I1125 15:23:15.878942 4806 scope.go:117] "RemoveContainer" containerID="63213603e00965e9462d2d20b54f42e994509ed0cdfaf078ae93783aa6203c46" Nov 25 15:23:15 crc kubenswrapper[4806]: I1125 15:23:15.923786 4806 scope.go:117] "RemoveContainer" containerID="c1e4c54e37651b2c1357e47818fd8913f1f44f7dcb8d652d14ffb66ea69f813f" Nov 25 15:23:15 crc kubenswrapper[4806]: I1125 15:23:15.992158 4806 scope.go:117] "RemoveContainer" containerID="c08ba412b5d4d33ac6ee7c89d112c6de84041ad33172d269b029b4c8fd2bd177" Nov 25 15:23:24 crc kubenswrapper[4806]: I1125 15:23:24.090786 4806 
scope.go:117] "RemoveContainer" containerID="ecc3d828107059f876e2f284e3f9b578d143aeaad7a17d069f81cf6860e7fd12" Nov 25 15:23:24 crc kubenswrapper[4806]: E1125 15:23:24.094165 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:23:35 crc kubenswrapper[4806]: I1125 15:23:35.089826 4806 scope.go:117] "RemoveContainer" containerID="ecc3d828107059f876e2f284e3f9b578d143aeaad7a17d069f81cf6860e7fd12" Nov 25 15:23:35 crc kubenswrapper[4806]: E1125 15:23:35.090616 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:23:49 crc kubenswrapper[4806]: I1125 15:23:49.074542 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-qgsv9"] Nov 25 15:23:49 crc kubenswrapper[4806]: I1125 15:23:49.092701 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-qgsv9"] Nov 25 15:23:50 crc kubenswrapper[4806]: I1125 15:23:50.089678 4806 scope.go:117] "RemoveContainer" containerID="ecc3d828107059f876e2f284e3f9b578d143aeaad7a17d069f81cf6860e7fd12" Nov 25 15:23:50 crc kubenswrapper[4806]: E1125 15:23:50.089983 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:23:50 crc kubenswrapper[4806]: I1125 15:23:50.104176 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59f31c89-0010-494d-a1d5-2db4958b10d6" path="/var/lib/kubelet/pods/59f31c89-0010-494d-a1d5-2db4958b10d6/volumes" Nov 25 15:23:51 crc kubenswrapper[4806]: I1125 15:23:51.028208 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-kqrd2"] Nov 25 15:23:51 crc kubenswrapper[4806]: I1125 15:23:51.039021 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-18c6-account-create-cchzq"] Nov 25 15:23:51 crc kubenswrapper[4806]: I1125 15:23:51.049295 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-kqrd2"] Nov 25 15:23:51 crc kubenswrapper[4806]: I1125 15:23:51.059520 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-18c6-account-create-cchzq"] Nov 25 15:23:52 crc kubenswrapper[4806]: I1125 15:23:52.124618 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5df1cd59-5e8a-49c9-af33-4547720713f0" path="/var/lib/kubelet/pods/5df1cd59-5e8a-49c9-af33-4547720713f0/volumes" Nov 25 15:23:52 crc kubenswrapper[4806]: I1125 15:23:52.129820 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="ce1e02da-f4bb-4165-b4fc-cf65955994ae" path="/var/lib/kubelet/pods/ce1e02da-f4bb-4165-b4fc-cf65955994ae/volumes" Nov 25 15:23:53 crc kubenswrapper[4806]: I1125 15:23:53.035141 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-0f2c-account-create-8xlqc"] Nov 25 15:23:53 crc kubenswrapper[4806]: I1125 15:23:53.048032 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-0f2c-account-create-8xlqc"] Nov 25 15:23:53 crc kubenswrapper[4806]: I1125 15:23:53.062749 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-d2f7-account-create-6rgcw"] Nov 25 15:23:53 crc kubenswrapper[4806]: I1125 15:23:53.078704 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-wcr7b"] Nov 25 15:23:53 crc kubenswrapper[4806]: I1125 15:23:53.097640 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-wcr7b"] Nov 25 15:23:53 crc kubenswrapper[4806]: I1125 15:23:53.115191 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-d2f7-account-create-6rgcw"] Nov 25 15:23:54 crc kubenswrapper[4806]: I1125 15:23:54.100806 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a1a10de-31c3-4413-b032-d10713c953dc" path="/var/lib/kubelet/pods/7a1a10de-31c3-4413-b032-d10713c953dc/volumes" Nov 25 15:23:54 crc kubenswrapper[4806]: I1125 15:23:54.103156 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94b13266-e80b-4462-b7fa-04b5043e53e1" path="/var/lib/kubelet/pods/94b13266-e80b-4462-b7fa-04b5043e53e1/volumes" Nov 25 15:23:54 crc kubenswrapper[4806]: I1125 15:23:54.106165 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf5bc050-6822-4de5-923b-3e02b79d8429" path="/var/lib/kubelet/pods/bf5bc050-6822-4de5-923b-3e02b79d8429/volumes" Nov 25 15:24:03 crc kubenswrapper[4806]: I1125 15:24:03.089550 4806 scope.go:117] "RemoveContainer" containerID="ecc3d828107059f876e2f284e3f9b578d143aeaad7a17d069f81cf6860e7fd12" Nov 25 15:24:03 crc kubenswrapper[4806]: E1125 15:24:03.090401 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:24:16 crc kubenswrapper[4806]: I1125 15:24:16.071407 4806 scope.go:117] "RemoveContainer" containerID="19a6fa7a843252997e2005e4df582751e52d23566f1ce16e60ea9b20b8465703" Nov 25 15:24:16 crc kubenswrapper[4806]: I1125 15:24:16.165177 4806 scope.go:117] "RemoveContainer" containerID="7dd6b5cd5f55ebd9a80ea781b536b148995d40ed2e05df5588478761a5554679" Nov 25 15:24:16 crc kubenswrapper[4806]: I1125 15:24:16.212627 4806 scope.go:117] "RemoveContainer" containerID="0a587abb354d154ccd1c7be46a4a958ef36828c6702d65f3f2275091ace9f013" Nov 25 15:24:16 crc kubenswrapper[4806]: I1125 15:24:16.274901 4806 scope.go:117] "RemoveContainer" containerID="e6406ff971d1adca3fd15dec5d6a15c57838e96fca8cd1db81f956eadce857ce" Nov 25 15:24:16 crc kubenswrapper[4806]: I1125 15:24:16.301746 4806 scope.go:117] "RemoveContainer" containerID="7af674775fbcc2a8d57d7adae882c91b14c9ef52b330d8f387ff61b1380c8913" Nov 25 15:24:16 crc kubenswrapper[4806]: I1125 15:24:16.327366 4806 scope.go:117] "RemoveContainer" 
containerID="f6cebcfc304fe6aec46892612797c6e415e5ff5ea49135e94c17e8ba009af731" Nov 25 15:24:16 crc kubenswrapper[4806]: I1125 15:24:16.359598 4806 scope.go:117] "RemoveContainer" containerID="e62c96193e09b1f729f09e6c5235cf6da512c6a9ee464384eae9e55a5fd5890a" Nov 25 15:24:16 crc kubenswrapper[4806]: I1125 15:24:16.427153 4806 scope.go:117] "RemoveContainer" containerID="fa7d6923be1a003c17b1865ed6b9c51c49958cbfad7ac5311061052305d8557b" Nov 25 15:24:16 crc kubenswrapper[4806]: I1125 15:24:16.465278 4806 scope.go:117] "RemoveContainer" containerID="947649f363aa13ff28038b734854fcca4bbe2dca64bfd8d62afca5a1df53eb31" Nov 25 15:24:16 crc kubenswrapper[4806]: I1125 15:24:16.500968 4806 scope.go:117] "RemoveContainer" containerID="1e2153b2ab05f8e43c1b85e49aaebc818222fcd6e34dce130b6c25633846fc60" Nov 25 15:24:18 crc kubenswrapper[4806]: I1125 15:24:18.096883 4806 scope.go:117] "RemoveContainer" containerID="ecc3d828107059f876e2f284e3f9b578d143aeaad7a17d069f81cf6860e7fd12" Nov 25 15:24:18 crc kubenswrapper[4806]: E1125 15:24:18.097493 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:24:29 crc kubenswrapper[4806]: I1125 15:24:29.092251 4806 scope.go:117] "RemoveContainer" containerID="ecc3d828107059f876e2f284e3f9b578d143aeaad7a17d069f81cf6860e7fd12" Nov 25 15:24:29 crc kubenswrapper[4806]: E1125 15:24:29.093155 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:24:35 crc kubenswrapper[4806]: I1125 15:24:35.045247 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-k5fg9"] Nov 25 15:24:35 crc kubenswrapper[4806]: I1125 15:24:35.059069 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-d5b6-account-create-8g9dc"] Nov 25 15:24:35 crc kubenswrapper[4806]: I1125 15:24:35.077411 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-create-4sh7f"] Nov 25 15:24:35 crc kubenswrapper[4806]: I1125 15:24:35.077467 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-k5fg9"] Nov 25 15:24:35 crc kubenswrapper[4806]: I1125 15:24:35.087092 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-create-4sh7f"] Nov 25 15:24:35 crc kubenswrapper[4806]: I1125 15:24:35.098600 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-d5b6-account-create-8g9dc"] Nov 25 15:24:36 crc kubenswrapper[4806]: I1125 15:24:36.140542 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d7a2080-b9b4-4a5d-8c23-905ee26d6afa" path="/var/lib/kubelet/pods/2d7a2080-b9b4-4a5d-8c23-905ee26d6afa/volumes" Nov 25 15:24:36 crc kubenswrapper[4806]: I1125 15:24:36.149088 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79e37330-4341-48fc-b9d5-bd0403e6237a" 
path="/var/lib/kubelet/pods/79e37330-4341-48fc-b9d5-bd0403e6237a/volumes" Nov 25 15:24:36 crc kubenswrapper[4806]: I1125 15:24:36.182055 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94278b3c-2207-463b-9700-e8ab16c72b5b" path="/var/lib/kubelet/pods/94278b3c-2207-463b-9700-e8ab16c72b5b/volumes" Nov 25 15:24:38 crc kubenswrapper[4806]: I1125 15:24:38.037731 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-5265-account-create-vr75r"] Nov 25 15:24:38 crc kubenswrapper[4806]: I1125 15:24:38.049645 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-8c9d-account-create-rx52p"] Nov 25 15:24:38 crc kubenswrapper[4806]: I1125 15:24:38.064224 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-vrlpg"] Nov 25 15:24:38 crc kubenswrapper[4806]: I1125 15:24:38.072649 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-rknkz"] Nov 25 15:24:38 crc kubenswrapper[4806]: I1125 15:24:38.081680 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-56ac-account-create-vvjww"] Nov 25 15:24:38 crc kubenswrapper[4806]: I1125 15:24:38.102941 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-vrlpg"] Nov 25 15:24:38 crc kubenswrapper[4806]: I1125 15:24:38.102982 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-5265-account-create-vr75r"] Nov 25 15:24:38 crc kubenswrapper[4806]: I1125 15:24:38.109480 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-rknkz"] Nov 25 15:24:38 crc kubenswrapper[4806]: I1125 15:24:38.118112 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-8c9d-account-create-rx52p"] Nov 25 15:24:38 crc kubenswrapper[4806]: I1125 15:24:38.125822 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-56ac-account-create-vvjww"] Nov 25 15:24:40 crc kubenswrapper[4806]: I1125 15:24:40.127578 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5eccd330-3d33-48e3-929b-2a67bb643af7" path="/var/lib/kubelet/pods/5eccd330-3d33-48e3-929b-2a67bb643af7/volumes" Nov 25 15:24:40 crc kubenswrapper[4806]: I1125 15:24:40.132289 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61467ee5-3ddb-4d7d-88d3-e48107c51338" path="/var/lib/kubelet/pods/61467ee5-3ddb-4d7d-88d3-e48107c51338/volumes" Nov 25 15:24:40 crc kubenswrapper[4806]: I1125 15:24:40.133890 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62cc8598-cf68-4bb3-b272-ab87683edf6b" path="/var/lib/kubelet/pods/62cc8598-cf68-4bb3-b272-ab87683edf6b/volumes" Nov 25 15:24:40 crc kubenswrapper[4806]: I1125 15:24:40.136562 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67e9a65f-5f3c-47fa-964a-f188158f77bc" path="/var/lib/kubelet/pods/67e9a65f-5f3c-47fa-964a-f188158f77bc/volumes" Nov 25 15:24:40 crc kubenswrapper[4806]: I1125 15:24:40.139263 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9115000-6aab-492e-925f-f44a574b5009" path="/var/lib/kubelet/pods/a9115000-6aab-492e-925f-f44a574b5009/volumes" Nov 25 15:24:42 crc kubenswrapper[4806]: I1125 15:24:42.089007 4806 scope.go:117] "RemoveContainer" containerID="ecc3d828107059f876e2f284e3f9b578d143aeaad7a17d069f81cf6860e7fd12" Nov 25 15:24:42 crc kubenswrapper[4806]: E1125 15:24:42.089600 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:24:53 crc kubenswrapper[4806]: I1125 15:24:53.046103 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-xnqxm"] Nov 25 15:24:53 crc kubenswrapper[4806]: I1125 15:24:53.061333 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-xnqxm"] Nov 25 15:24:53 crc kubenswrapper[4806]: I1125 15:24:53.090307 4806 scope.go:117] "RemoveContainer" containerID="ecc3d828107059f876e2f284e3f9b578d143aeaad7a17d069f81cf6860e7fd12" Nov 25 15:24:53 crc kubenswrapper[4806]: E1125 15:24:53.090638 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:24:54 crc kubenswrapper[4806]: I1125 15:24:54.106722 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="634468c1-6446-422a-9816-b19afdf8858d" path="/var/lib/kubelet/pods/634468c1-6446-422a-9816-b19afdf8858d/volumes" Nov 25 15:25:06 crc kubenswrapper[4806]: I1125 15:25:06.089663 4806 scope.go:117] "RemoveContainer" containerID="ecc3d828107059f876e2f284e3f9b578d143aeaad7a17d069f81cf6860e7fd12" Nov 25 15:25:06 crc kubenswrapper[4806]: E1125 15:25:06.090670 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:25:16 crc kubenswrapper[4806]: I1125 15:25:16.746218 4806 scope.go:117] "RemoveContainer" containerID="5f235c5c1d09c3398a1b4d4f6cc9714f67f41ebc8f93c551803863da553f2955" Nov 25 15:25:16 crc kubenswrapper[4806]: I1125 15:25:16.800164 4806 scope.go:117] "RemoveContainer" containerID="dcd4f748224faf941aef0075ac2b144712f1dc2665b6fe13a338d43bebd29ae7" Nov 25 15:25:16 crc kubenswrapper[4806]: I1125 15:25:16.887523 4806 scope.go:117] "RemoveContainer" containerID="fcf783791588ec718ca7cc8d58556d5da256261cab4042709ceb061a6a9bba63" Nov 25 15:25:16 crc kubenswrapper[4806]: I1125 15:25:16.916830 4806 scope.go:117] "RemoveContainer" containerID="08383c7c22f34f950c15aabb4fe56b4bf61f9ca2db81584bfe5201891f079251" Nov 25 15:25:17 crc kubenswrapper[4806]: I1125 15:25:17.001585 4806 scope.go:117] "RemoveContainer" containerID="b5d072863c76d7b6c081ccaa02c0a78121cdc2061a426894749b4048300332ee" Nov 25 15:25:17 crc kubenswrapper[4806]: I1125 15:25:17.027838 4806 scope.go:117] "RemoveContainer" containerID="d6ae052d0e9dd5d5ef41e888eccca8a25ae005c1d2bc396324b9fe50a7646f1c" Nov 25 15:25:17 crc kubenswrapper[4806]: I1125 15:25:17.089558 4806 scope.go:117] "RemoveContainer" containerID="ecc3d828107059f876e2f284e3f9b578d143aeaad7a17d069f81cf6860e7fd12" Nov 25 15:25:17 crc 
kubenswrapper[4806]: E1125 15:25:17.089882 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:25:17 crc kubenswrapper[4806]: I1125 15:25:17.098110 4806 scope.go:117] "RemoveContainer" containerID="055d68e8d3b049aa80cb3e5340ffb854d62c839a8235381b11f1b6dd5db0579c" Nov 25 15:25:17 crc kubenswrapper[4806]: I1125 15:25:17.135161 4806 scope.go:117] "RemoveContainer" containerID="dcaddd1007730a613ec5b775a5257a05c342aaec816b5cde44f715a00e08a792" Nov 25 15:25:17 crc kubenswrapper[4806]: I1125 15:25:17.178222 4806 scope.go:117] "RemoveContainer" containerID="21b1e6a3dcb8fbafe003b9fa097d1bf3a9a766d92e86a62bfe3a3708f22473dd" Nov 25 15:25:25 crc kubenswrapper[4806]: I1125 15:25:25.042465 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-n88tp"] Nov 25 15:25:25 crc kubenswrapper[4806]: I1125 15:25:25.051058 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-n88tp"] Nov 25 15:25:26 crc kubenswrapper[4806]: I1125 15:25:26.102275 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e521a6-108d-45db-ad10-42e394a9cd1a" path="/var/lib/kubelet/pods/e7e521a6-108d-45db-ad10-42e394a9cd1a/volumes" Nov 25 15:25:31 crc kubenswrapper[4806]: I1125 15:25:31.089477 4806 scope.go:117] "RemoveContainer" containerID="ecc3d828107059f876e2f284e3f9b578d143aeaad7a17d069f81cf6860e7fd12" Nov 25 15:25:31 crc kubenswrapper[4806]: E1125 15:25:31.090381 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:25:31 crc kubenswrapper[4806]: I1125 15:25:31.457967 4806 generic.go:334] "Generic (PLEG): container finished" podID="1e02aa69-d4ed-4a30-8c3f-2fe2021298d1" containerID="559e2405756b27282dc8648955b2b0153265104ea4b6207f2ed63ec7ad888c6a" exitCode=0 Nov 25 15:25:31 crc kubenswrapper[4806]: I1125 15:25:31.458073 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kn8kt" event={"ID":"1e02aa69-d4ed-4a30-8c3f-2fe2021298d1","Type":"ContainerDied","Data":"559e2405756b27282dc8648955b2b0153265104ea4b6207f2ed63ec7ad888c6a"} Nov 25 15:25:33 crc kubenswrapper[4806]: I1125 15:25:33.013784 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kn8kt" Nov 25 15:25:33 crc kubenswrapper[4806]: I1125 15:25:33.163223 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1e02aa69-d4ed-4a30-8c3f-2fe2021298d1-ssh-key\") pod \"1e02aa69-d4ed-4a30-8c3f-2fe2021298d1\" (UID: \"1e02aa69-d4ed-4a30-8c3f-2fe2021298d1\") " Nov 25 15:25:33 crc kubenswrapper[4806]: I1125 15:25:33.163274 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e02aa69-d4ed-4a30-8c3f-2fe2021298d1-bootstrap-combined-ca-bundle\") pod \"1e02aa69-d4ed-4a30-8c3f-2fe2021298d1\" (UID: \"1e02aa69-d4ed-4a30-8c3f-2fe2021298d1\") " Nov 25 15:25:33 crc kubenswrapper[4806]: I1125 15:25:33.163339 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e02aa69-d4ed-4a30-8c3f-2fe2021298d1-inventory\") pod \"1e02aa69-d4ed-4a30-8c3f-2fe2021298d1\" (UID: \"1e02aa69-d4ed-4a30-8c3f-2fe2021298d1\") " Nov 25 15:25:33 crc kubenswrapper[4806]: I1125 15:25:33.163501 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rl4hm\" (UniqueName: \"kubernetes.io/projected/1e02aa69-d4ed-4a30-8c3f-2fe2021298d1-kube-api-access-rl4hm\") pod \"1e02aa69-d4ed-4a30-8c3f-2fe2021298d1\" (UID: \"1e02aa69-d4ed-4a30-8c3f-2fe2021298d1\") " Nov 25 15:25:33 crc kubenswrapper[4806]: I1125 15:25:33.169257 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e02aa69-d4ed-4a30-8c3f-2fe2021298d1-kube-api-access-rl4hm" (OuterVolumeSpecName: "kube-api-access-rl4hm") pod "1e02aa69-d4ed-4a30-8c3f-2fe2021298d1" (UID: "1e02aa69-d4ed-4a30-8c3f-2fe2021298d1"). InnerVolumeSpecName "kube-api-access-rl4hm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:25:33 crc kubenswrapper[4806]: I1125 15:25:33.169601 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e02aa69-d4ed-4a30-8c3f-2fe2021298d1-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "1e02aa69-d4ed-4a30-8c3f-2fe2021298d1" (UID: "1e02aa69-d4ed-4a30-8c3f-2fe2021298d1"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:25:33 crc kubenswrapper[4806]: I1125 15:25:33.194668 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e02aa69-d4ed-4a30-8c3f-2fe2021298d1-inventory" (OuterVolumeSpecName: "inventory") pod "1e02aa69-d4ed-4a30-8c3f-2fe2021298d1" (UID: "1e02aa69-d4ed-4a30-8c3f-2fe2021298d1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:25:33 crc kubenswrapper[4806]: I1125 15:25:33.201981 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e02aa69-d4ed-4a30-8c3f-2fe2021298d1-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1e02aa69-d4ed-4a30-8c3f-2fe2021298d1" (UID: "1e02aa69-d4ed-4a30-8c3f-2fe2021298d1"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:25:33 crc kubenswrapper[4806]: I1125 15:25:33.266449 4806 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1e02aa69-d4ed-4a30-8c3f-2fe2021298d1-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 15:25:33 crc kubenswrapper[4806]: I1125 15:25:33.266487 4806 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e02aa69-d4ed-4a30-8c3f-2fe2021298d1-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:25:33 crc kubenswrapper[4806]: I1125 15:25:33.266504 4806 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1e02aa69-d4ed-4a30-8c3f-2fe2021298d1-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 15:25:33 crc kubenswrapper[4806]: I1125 15:25:33.266516 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rl4hm\" (UniqueName: \"kubernetes.io/projected/1e02aa69-d4ed-4a30-8c3f-2fe2021298d1-kube-api-access-rl4hm\") on node \"crc\" DevicePath \"\"" Nov 25 15:25:33 crc kubenswrapper[4806]: I1125 15:25:33.485743 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kn8kt" event={"ID":"1e02aa69-d4ed-4a30-8c3f-2fe2021298d1","Type":"ContainerDied","Data":"3ba8ba5f046612c940b04cddef59044bf5fa5474fe439a8cb3bd23ec82a9cb6b"} Nov 25 15:25:33 crc kubenswrapper[4806]: I1125 15:25:33.485794 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ba8ba5f046612c940b04cddef59044bf5fa5474fe439a8cb3bd23ec82a9cb6b" Nov 25 15:25:33 crc kubenswrapper[4806]: I1125 15:25:33.485823 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-kn8kt" Nov 25 15:25:33 crc kubenswrapper[4806]: I1125 15:25:33.565574 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kzwcj"] Nov 25 15:25:33 crc kubenswrapper[4806]: E1125 15:25:33.566211 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e02aa69-d4ed-4a30-8c3f-2fe2021298d1" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 25 15:25:33 crc kubenswrapper[4806]: I1125 15:25:33.566230 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e02aa69-d4ed-4a30-8c3f-2fe2021298d1" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 25 15:25:33 crc kubenswrapper[4806]: I1125 15:25:33.566513 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e02aa69-d4ed-4a30-8c3f-2fe2021298d1" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 25 15:25:33 crc kubenswrapper[4806]: I1125 15:25:33.567441 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kzwcj" Nov 25 15:25:33 crc kubenswrapper[4806]: I1125 15:25:33.569931 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 15:25:33 crc kubenswrapper[4806]: I1125 15:25:33.570099 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 15:25:33 crc kubenswrapper[4806]: I1125 15:25:33.571079 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r8q8k" Nov 25 15:25:33 crc kubenswrapper[4806]: I1125 15:25:33.571238 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 15:25:33 crc kubenswrapper[4806]: I1125 15:25:33.578076 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kzwcj"] Nov 25 15:25:33 crc kubenswrapper[4806]: I1125 15:25:33.674442 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gbqc\" (UniqueName: \"kubernetes.io/projected/e47040af-0961-465d-a57d-b5a86d51d814-kube-api-access-4gbqc\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kzwcj\" (UID: \"e47040af-0961-465d-a57d-b5a86d51d814\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kzwcj" Nov 25 15:25:33 crc kubenswrapper[4806]: I1125 15:25:33.674788 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e47040af-0961-465d-a57d-b5a86d51d814-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kzwcj\" (UID: \"e47040af-0961-465d-a57d-b5a86d51d814\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kzwcj" Nov 25 15:25:33 crc kubenswrapper[4806]: I1125 15:25:33.675033 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e47040af-0961-465d-a57d-b5a86d51d814-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kzwcj\" (UID: \"e47040af-0961-465d-a57d-b5a86d51d814\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kzwcj" Nov 25 15:25:33 crc kubenswrapper[4806]: I1125 15:25:33.777176 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gbqc\" (UniqueName: \"kubernetes.io/projected/e47040af-0961-465d-a57d-b5a86d51d814-kube-api-access-4gbqc\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kzwcj\" (UID: \"e47040af-0961-465d-a57d-b5a86d51d814\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kzwcj" Nov 25 15:25:33 crc kubenswrapper[4806]: I1125 15:25:33.777261 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e47040af-0961-465d-a57d-b5a86d51d814-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kzwcj\" (UID: \"e47040af-0961-465d-a57d-b5a86d51d814\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kzwcj" Nov 25 15:25:33 crc kubenswrapper[4806]: I1125 15:25:33.777445 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e47040af-0961-465d-a57d-b5a86d51d814-ssh-key\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-kzwcj\" (UID: \"e47040af-0961-465d-a57d-b5a86d51d814\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kzwcj" Nov 25 15:25:33 crc kubenswrapper[4806]: I1125 15:25:33.782138 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e47040af-0961-465d-a57d-b5a86d51d814-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kzwcj\" (UID: \"e47040af-0961-465d-a57d-b5a86d51d814\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kzwcj" Nov 25 15:25:33 crc kubenswrapper[4806]: I1125 15:25:33.785148 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e47040af-0961-465d-a57d-b5a86d51d814-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kzwcj\" (UID: \"e47040af-0961-465d-a57d-b5a86d51d814\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kzwcj" Nov 25 15:25:33 crc kubenswrapper[4806]: I1125 15:25:33.794956 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gbqc\" (UniqueName: \"kubernetes.io/projected/e47040af-0961-465d-a57d-b5a86d51d814-kube-api-access-4gbqc\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kzwcj\" (UID: \"e47040af-0961-465d-a57d-b5a86d51d814\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kzwcj" Nov 25 15:25:33 crc kubenswrapper[4806]: I1125 15:25:33.926455 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kzwcj" Nov 25 15:25:34 crc kubenswrapper[4806]: I1125 15:25:34.509602 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kzwcj"] Nov 25 15:25:35 crc kubenswrapper[4806]: I1125 15:25:35.507616 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kzwcj" event={"ID":"e47040af-0961-465d-a57d-b5a86d51d814","Type":"ContainerStarted","Data":"5c68fcd8bad0e4ec57497bc25f01ec4eaf3dca73789bfa737e3cd384c79ccd47"} Nov 25 15:25:36 crc kubenswrapper[4806]: I1125 15:25:36.519259 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kzwcj" event={"ID":"e47040af-0961-465d-a57d-b5a86d51d814","Type":"ContainerStarted","Data":"48d6587709bf80e98901b52afb7eb0fe8004dff401779bb5dc847987cbebeada"} Nov 25 15:25:37 crc kubenswrapper[4806]: I1125 15:25:37.551997 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kzwcj" podStartSLOduration=2.939474487 podStartE2EDuration="4.551979232s" podCreationTimestamp="2025-11-25 15:25:33 +0000 UTC" firstStartedPulling="2025-11-25 15:25:34.513628497 +0000 UTC m=+1967.165770908" lastFinishedPulling="2025-11-25 15:25:36.126133242 +0000 UTC m=+1968.778275653" observedRunningTime="2025-11-25 15:25:37.546568339 +0000 UTC m=+1970.198710770" watchObservedRunningTime="2025-11-25 15:25:37.551979232 +0000 UTC m=+1970.204121633" Nov 25 15:25:43 crc kubenswrapper[4806]: I1125 15:25:43.090024 4806 scope.go:117] "RemoveContainer" containerID="ecc3d828107059f876e2f284e3f9b578d143aeaad7a17d069f81cf6860e7fd12" Nov 25 15:25:43 crc kubenswrapper[4806]: E1125 15:25:43.090778 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:25:46 crc kubenswrapper[4806]: I1125 15:25:46.044700 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-fcs94"] Nov 25 15:25:46 crc kubenswrapper[4806]: I1125 15:25:46.055029 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-fcs94"] Nov 25 15:25:46 crc kubenswrapper[4806]: I1125 15:25:46.066123 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-bqhxc"] Nov 25 15:25:46 crc kubenswrapper[4806]: I1125 15:25:46.076510 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-bqhxc"] Nov 25 15:25:46 crc kubenswrapper[4806]: I1125 15:25:46.100845 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ea45747-c756-4447-b140-e6bc10188ec3" path="/var/lib/kubelet/pods/1ea45747-c756-4447-b140-e6bc10188ec3/volumes" Nov 25 15:25:46 crc kubenswrapper[4806]: I1125 15:25:46.101524 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1" path="/var/lib/kubelet/pods/a58a488e-b4cb-42cb-8bc4-4a467bbb5dd1/volumes" Nov 25 15:25:47 crc kubenswrapper[4806]: I1125 15:25:47.028349 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-n7cnj"] Nov 25 15:25:47 crc kubenswrapper[4806]: I1125 15:25:47.038620 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-n7cnj"] Nov 25 15:25:48 crc kubenswrapper[4806]: I1125 15:25:48.107061 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08c00715-2142-4aef-ae81-16ce4c5cba4d" path="/var/lib/kubelet/pods/08c00715-2142-4aef-ae81-16ce4c5cba4d/volumes" Nov 25 15:25:57 crc kubenswrapper[4806]: I1125 15:25:57.090092 4806 scope.go:117] "RemoveContainer" containerID="ecc3d828107059f876e2f284e3f9b578d143aeaad7a17d069f81cf6860e7fd12" Nov 25 15:25:57 crc kubenswrapper[4806]: E1125 15:25:57.090951 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:26:07 crc kubenswrapper[4806]: I1125 15:26:07.027997 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-7lfx4"] Nov 25 15:26:07 crc kubenswrapper[4806]: I1125 15:26:07.039450 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-7lfx4"] Nov 25 15:26:08 crc kubenswrapper[4806]: I1125 15:26:08.098382 4806 scope.go:117] "RemoveContainer" containerID="ecc3d828107059f876e2f284e3f9b578d143aeaad7a17d069f81cf6860e7fd12" Nov 25 15:26:08 crc kubenswrapper[4806]: E1125 15:26:08.099125 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:26:08 crc kubenswrapper[4806]: I1125 15:26:08.106410 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2e7e600-c1a4-4bda-910b-c11fe9411cc9" path="/var/lib/kubelet/pods/a2e7e600-c1a4-4bda-910b-c11fe9411cc9/volumes" Nov 25 15:26:17 crc kubenswrapper[4806]: I1125 15:26:17.494668 4806 scope.go:117] "RemoveContainer" containerID="2868621162c88a865d5cebb0a7e16b006a8fa6ffff07a11570251357df8e94f2" Nov 25 15:26:17 crc kubenswrapper[4806]: I1125 15:26:17.534460 4806 scope.go:117] "RemoveContainer" containerID="488d16663693ff36bf08ba56f9af112e7989574bba046f316154e3a2b8bf79b6" Nov 25 15:26:17 crc kubenswrapper[4806]: I1125 15:26:17.584872 4806 scope.go:117] "RemoveContainer" containerID="706f4aa3780c37be61f5872cab7a0bd985ca6ac579fc96ba25423056c7cce6d8" Nov 25 15:26:17 crc kubenswrapper[4806]: I1125 15:26:17.625086 4806 scope.go:117] "RemoveContainer" containerID="26856fbbbb17a66486678883159fe82fc8417d94000dd929bd71bdf008e1a237" Nov 25 15:26:17 crc kubenswrapper[4806]: I1125 15:26:17.686519 4806 scope.go:117] "RemoveContainer" containerID="bfce09d698f1f48b17a93b00e987a4e0e12f30f045ee8310782611fa29bbfac3" Nov 25 15:26:21 crc kubenswrapper[4806]: I1125 15:26:21.090372 4806 scope.go:117] "RemoveContainer" containerID="ecc3d828107059f876e2f284e3f9b578d143aeaad7a17d069f81cf6860e7fd12" Nov 25 15:26:21 crc kubenswrapper[4806]: I1125 15:26:21.957022 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" event={"ID":"39baff20-1e9a-48b1-8872-155c5ad5931d","Type":"ContainerStarted","Data":"1315d833b7ecfd3e5832ff41afdffceaf3dbae9c2727fcd8a0fb442fcbda555a"} Nov 25 15:26:25 crc kubenswrapper[4806]: I1125 15:26:25.046394 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-2nbxh"] Nov 25 15:26:25 crc kubenswrapper[4806]: I1125 15:26:25.059489 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-2nbxh"] Nov 25 15:26:26 crc kubenswrapper[4806]: I1125 15:26:26.101960 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11aeb498-3614-4aac-a381-9bf0392cf5dc" path="/var/lib/kubelet/pods/11aeb498-3614-4aac-a381-9bf0392cf5dc/volumes" Nov 25 15:26:59 crc kubenswrapper[4806]: I1125 15:26:59.066473 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-d57xj"] Nov 25 15:26:59 crc kubenswrapper[4806]: I1125 15:26:59.076211 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-d57xj"] Nov 25 15:26:59 crc kubenswrapper[4806]: I1125 15:26:59.090396 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-t9tkg"] Nov 25 15:26:59 crc kubenswrapper[4806]: I1125 15:26:59.098074 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-t9tkg"] Nov 25 15:27:00 crc kubenswrapper[4806]: I1125 15:27:00.035461 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-26a0-account-create-vlfqj"] Nov 25 15:27:00 crc kubenswrapper[4806]: I1125 15:27:00.046570 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-e9d6-account-create-f69l5"] Nov 25 15:27:00 crc kubenswrapper[4806]: I1125 15:27:00.055559 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-api-db-create-dd45f"] Nov 25 15:27:00 crc kubenswrapper[4806]: I1125 15:27:00.066599 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-26a0-account-create-vlfqj"] Nov 25 15:27:00 crc kubenswrapper[4806]: I1125 15:27:00.075125 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-e9d6-account-create-f69l5"] Nov 25 15:27:00 crc kubenswrapper[4806]: I1125 15:27:00.083278 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-dd45f"] Nov 25 15:27:00 crc kubenswrapper[4806]: I1125 15:27:00.102375 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="325b6686-f8e5-4ba8-b274-7e3508888807" path="/var/lib/kubelet/pods/325b6686-f8e5-4ba8-b274-7e3508888807/volumes" Nov 25 15:27:00 crc kubenswrapper[4806]: I1125 15:27:00.103030 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e92cdcb-b78b-47cb-ba65-9167485d9795" path="/var/lib/kubelet/pods/4e92cdcb-b78b-47cb-ba65-9167485d9795/volumes" Nov 25 15:27:00 crc kubenswrapper[4806]: I1125 15:27:00.103810 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6b52df6-253b-4082-8e20-dc729af9ce15" path="/var/lib/kubelet/pods/c6b52df6-253b-4082-8e20-dc729af9ce15/volumes" Nov 25 15:27:00 crc kubenswrapper[4806]: I1125 15:27:00.106453 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7b2aa87-f218-472e-a8e8-7fe0eaf3b7cd" path="/var/lib/kubelet/pods/c7b2aa87-f218-472e-a8e8-7fe0eaf3b7cd/volumes" Nov 25 15:27:00 crc kubenswrapper[4806]: I1125 15:27:00.108447 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd64b415-9694-483d-b17d-aceffd50763a" path="/var/lib/kubelet/pods/fd64b415-9694-483d-b17d-aceffd50763a/volumes" Nov 25 15:27:00 crc kubenswrapper[4806]: I1125 15:27:00.108998 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-a493-account-create-cnxrz"] Nov 25 15:27:00 crc kubenswrapper[4806]: I1125 15:27:00.109026 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-a493-account-create-cnxrz"] Nov 25 15:27:00 crc kubenswrapper[4806]: I1125 15:27:00.190045 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kxzst"] Nov 25 15:27:00 crc kubenswrapper[4806]: I1125 15:27:00.193003 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kxzst" Nov 25 15:27:00 crc kubenswrapper[4806]: I1125 15:27:00.199814 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kxzst"] Nov 25 15:27:00 crc kubenswrapper[4806]: I1125 15:27:00.381770 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/478d1eb7-a443-4692-a605-b7ed450bfef1-utilities\") pod \"redhat-operators-kxzst\" (UID: \"478d1eb7-a443-4692-a605-b7ed450bfef1\") " pod="openshift-marketplace/redhat-operators-kxzst" Nov 25 15:27:00 crc kubenswrapper[4806]: I1125 15:27:00.382256 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/478d1eb7-a443-4692-a605-b7ed450bfef1-catalog-content\") pod \"redhat-operators-kxzst\" (UID: \"478d1eb7-a443-4692-a605-b7ed450bfef1\") " pod="openshift-marketplace/redhat-operators-kxzst" Nov 25 15:27:00 crc kubenswrapper[4806]: I1125 15:27:00.382303 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr92w\" (UniqueName: \"kubernetes.io/projected/478d1eb7-a443-4692-a605-b7ed450bfef1-kube-api-access-fr92w\") pod \"redhat-operators-kxzst\" (UID: \"478d1eb7-a443-4692-a605-b7ed450bfef1\") " pod="openshift-marketplace/redhat-operators-kxzst" Nov 25 15:27:00 crc kubenswrapper[4806]: I1125 15:27:00.484145 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/478d1eb7-a443-4692-a605-b7ed450bfef1-catalog-content\") pod \"redhat-operators-kxzst\" (UID: \"478d1eb7-a443-4692-a605-b7ed450bfef1\") " pod="openshift-marketplace/redhat-operators-kxzst" Nov 25 15:27:00 crc kubenswrapper[4806]: I1125 15:27:00.484219 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr92w\" (UniqueName: \"kubernetes.io/projected/478d1eb7-a443-4692-a605-b7ed450bfef1-kube-api-access-fr92w\") pod \"redhat-operators-kxzst\" (UID: \"478d1eb7-a443-4692-a605-b7ed450bfef1\") " pod="openshift-marketplace/redhat-operators-kxzst" Nov 25 15:27:00 crc kubenswrapper[4806]: I1125 15:27:00.484376 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/478d1eb7-a443-4692-a605-b7ed450bfef1-utilities\") pod \"redhat-operators-kxzst\" (UID: \"478d1eb7-a443-4692-a605-b7ed450bfef1\") " pod="openshift-marketplace/redhat-operators-kxzst" Nov 25 15:27:00 crc kubenswrapper[4806]: I1125 15:27:00.484861 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/478d1eb7-a443-4692-a605-b7ed450bfef1-catalog-content\") pod \"redhat-operators-kxzst\" (UID: \"478d1eb7-a443-4692-a605-b7ed450bfef1\") " pod="openshift-marketplace/redhat-operators-kxzst" Nov 25 15:27:00 crc kubenswrapper[4806]: I1125 15:27:00.484898 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/478d1eb7-a443-4692-a605-b7ed450bfef1-utilities\") pod \"redhat-operators-kxzst\" (UID: \"478d1eb7-a443-4692-a605-b7ed450bfef1\") " pod="openshift-marketplace/redhat-operators-kxzst" Nov 25 15:27:00 crc kubenswrapper[4806]: I1125 15:27:00.504769 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-fr92w\" (UniqueName: \"kubernetes.io/projected/478d1eb7-a443-4692-a605-b7ed450bfef1-kube-api-access-fr92w\") pod \"redhat-operators-kxzst\" (UID: \"478d1eb7-a443-4692-a605-b7ed450bfef1\") " pod="openshift-marketplace/redhat-operators-kxzst" Nov 25 15:27:00 crc kubenswrapper[4806]: I1125 15:27:00.536572 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kxzst" Nov 25 15:27:01 crc kubenswrapper[4806]: I1125 15:27:01.053656 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kxzst"] Nov 25 15:27:01 crc kubenswrapper[4806]: W1125 15:27:01.054126 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod478d1eb7_a443_4692_a605_b7ed450bfef1.slice/crio-32ce040bc4af4ac3f5ebc209a2f6b3d99932331f6a9266f59d90b44493bf7dad WatchSource:0}: Error finding container 32ce040bc4af4ac3f5ebc209a2f6b3d99932331f6a9266f59d90b44493bf7dad: Status 404 returned error can't find the container with id 32ce040bc4af4ac3f5ebc209a2f6b3d99932331f6a9266f59d90b44493bf7dad Nov 25 15:27:01 crc kubenswrapper[4806]: I1125 15:27:01.357485 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kxzst" event={"ID":"478d1eb7-a443-4692-a605-b7ed450bfef1","Type":"ContainerStarted","Data":"32ce040bc4af4ac3f5ebc209a2f6b3d99932331f6a9266f59d90b44493bf7dad"} Nov 25 15:27:02 crc kubenswrapper[4806]: I1125 15:27:02.103726 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7defc7dc-b7b6-4302-82ed-15edce4862b3" path="/var/lib/kubelet/pods/7defc7dc-b7b6-4302-82ed-15edce4862b3/volumes" Nov 25 15:27:02 crc kubenswrapper[4806]: I1125 15:27:02.378601 4806 generic.go:334] "Generic (PLEG): container finished" podID="478d1eb7-a443-4692-a605-b7ed450bfef1" containerID="3af78e7bafe6b97c149ad8b200e24753ed2e63dd8567900048568071d1489fa0" exitCode=0 Nov 25 15:27:02 crc kubenswrapper[4806]: I1125 15:27:02.378683 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kxzst" event={"ID":"478d1eb7-a443-4692-a605-b7ed450bfef1","Type":"ContainerDied","Data":"3af78e7bafe6b97c149ad8b200e24753ed2e63dd8567900048568071d1489fa0"} Nov 25 15:27:02 crc kubenswrapper[4806]: I1125 15:27:02.382525 4806 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 15:27:04 crc kubenswrapper[4806]: I1125 15:27:04.404399 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kxzst" event={"ID":"478d1eb7-a443-4692-a605-b7ed450bfef1","Type":"ContainerStarted","Data":"b1b8a8c7b8bd68e429af88bc0cec1372641391991e03403e6de77ab897fa84e2"} Nov 25 15:27:14 crc kubenswrapper[4806]: I1125 15:27:14.522623 4806 generic.go:334] "Generic (PLEG): container finished" podID="478d1eb7-a443-4692-a605-b7ed450bfef1" containerID="b1b8a8c7b8bd68e429af88bc0cec1372641391991e03403e6de77ab897fa84e2" exitCode=0 Nov 25 15:27:14 crc kubenswrapper[4806]: I1125 15:27:14.522706 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kxzst" event={"ID":"478d1eb7-a443-4692-a605-b7ed450bfef1","Type":"ContainerDied","Data":"b1b8a8c7b8bd68e429af88bc0cec1372641391991e03403e6de77ab897fa84e2"} Nov 25 15:27:15 crc kubenswrapper[4806]: I1125 15:27:15.539871 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kxzst" 
event={"ID":"478d1eb7-a443-4692-a605-b7ed450bfef1","Type":"ContainerStarted","Data":"27f62c8db294347500965c7b7570524b61b5de809c10c63e24e5f798a1feb58a"} Nov 25 15:27:15 crc kubenswrapper[4806]: I1125 15:27:15.561819 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kxzst" podStartSLOduration=2.8418750189999997 podStartE2EDuration="15.561784248s" podCreationTimestamp="2025-11-25 15:27:00 +0000 UTC" firstStartedPulling="2025-11-25 15:27:02.382020057 +0000 UTC m=+2055.034162468" lastFinishedPulling="2025-11-25 15:27:15.101929286 +0000 UTC m=+2067.754071697" observedRunningTime="2025-11-25 15:27:15.557399194 +0000 UTC m=+2068.209541625" watchObservedRunningTime="2025-11-25 15:27:15.561784248 +0000 UTC m=+2068.213926729" Nov 25 15:27:17 crc kubenswrapper[4806]: I1125 15:27:17.848049 4806 scope.go:117] "RemoveContainer" containerID="75770c80babeeaf1288bbb487b06acbdab84838b6b68416b9d71444427565ed5" Nov 25 15:27:17 crc kubenswrapper[4806]: I1125 15:27:17.900744 4806 scope.go:117] "RemoveContainer" containerID="ba3f217dfe744df9233407d1e8e42525d299c0dbea011265bbc237093d9329af" Nov 25 15:27:17 crc kubenswrapper[4806]: I1125 15:27:17.949412 4806 scope.go:117] "RemoveContainer" containerID="75b85ef04466dea5f541526dd316e51ce813b304e50c00e16f985adbf61e36a6" Nov 25 15:27:18 crc kubenswrapper[4806]: I1125 15:27:18.113651 4806 scope.go:117] "RemoveContainer" containerID="4256cd15c189bc95acd3319070e18e3dfea95b540784c5b81e34178ca2c35ef5" Nov 25 15:27:18 crc kubenswrapper[4806]: I1125 15:27:18.218541 4806 scope.go:117] "RemoveContainer" containerID="400c011d07c55c1d8a814cdfa3278ffee3ae767ab54f9a17b167816e4ad0a723" Nov 25 15:27:18 crc kubenswrapper[4806]: I1125 15:27:18.363787 4806 scope.go:117] "RemoveContainer" containerID="2a8de8c8212ee429202474aa901025b9a3f54e94eff6b9698841f94008541e06" Nov 25 15:27:18 crc kubenswrapper[4806]: I1125 15:27:18.456144 4806 scope.go:117] "RemoveContainer" containerID="849013106acde87055135120c927c30c01399c6614e433c6deeeac64f4b10fbb" Nov 25 15:27:20 crc kubenswrapper[4806]: I1125 15:27:20.537088 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kxzst" Nov 25 15:27:20 crc kubenswrapper[4806]: I1125 15:27:20.537658 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kxzst" Nov 25 15:27:21 crc kubenswrapper[4806]: I1125 15:27:21.591027 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kxzst" podUID="478d1eb7-a443-4692-a605-b7ed450bfef1" containerName="registry-server" probeResult="failure" output=< Nov 25 15:27:21 crc kubenswrapper[4806]: timeout: failed to connect service ":50051" within 1s Nov 25 15:27:21 crc kubenswrapper[4806]: > Nov 25 15:27:30 crc kubenswrapper[4806]: I1125 15:27:30.586582 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kxzst" Nov 25 15:27:30 crc kubenswrapper[4806]: I1125 15:27:30.652786 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kxzst" Nov 25 15:27:31 crc kubenswrapper[4806]: I1125 15:27:31.392066 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kxzst"] Nov 25 15:27:31 crc kubenswrapper[4806]: I1125 15:27:31.713577 4806 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-kxzst" podUID="478d1eb7-a443-4692-a605-b7ed450bfef1" containerName="registry-server" containerID="cri-o://27f62c8db294347500965c7b7570524b61b5de809c10c63e24e5f798a1feb58a" gracePeriod=2 Nov 25 15:27:32 crc kubenswrapper[4806]: I1125 15:27:32.345072 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kxzst" Nov 25 15:27:32 crc kubenswrapper[4806]: I1125 15:27:32.511385 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fr92w\" (UniqueName: \"kubernetes.io/projected/478d1eb7-a443-4692-a605-b7ed450bfef1-kube-api-access-fr92w\") pod \"478d1eb7-a443-4692-a605-b7ed450bfef1\" (UID: \"478d1eb7-a443-4692-a605-b7ed450bfef1\") " Nov 25 15:27:32 crc kubenswrapper[4806]: I1125 15:27:32.511457 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/478d1eb7-a443-4692-a605-b7ed450bfef1-catalog-content\") pod \"478d1eb7-a443-4692-a605-b7ed450bfef1\" (UID: \"478d1eb7-a443-4692-a605-b7ed450bfef1\") " Nov 25 15:27:32 crc kubenswrapper[4806]: I1125 15:27:32.511631 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/478d1eb7-a443-4692-a605-b7ed450bfef1-utilities\") pod \"478d1eb7-a443-4692-a605-b7ed450bfef1\" (UID: \"478d1eb7-a443-4692-a605-b7ed450bfef1\") " Nov 25 15:27:32 crc kubenswrapper[4806]: I1125 15:27:32.512588 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/478d1eb7-a443-4692-a605-b7ed450bfef1-utilities" (OuterVolumeSpecName: "utilities") pod "478d1eb7-a443-4692-a605-b7ed450bfef1" (UID: "478d1eb7-a443-4692-a605-b7ed450bfef1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:27:32 crc kubenswrapper[4806]: I1125 15:27:32.517819 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/478d1eb7-a443-4692-a605-b7ed450bfef1-kube-api-access-fr92w" (OuterVolumeSpecName: "kube-api-access-fr92w") pod "478d1eb7-a443-4692-a605-b7ed450bfef1" (UID: "478d1eb7-a443-4692-a605-b7ed450bfef1"). InnerVolumeSpecName "kube-api-access-fr92w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:27:32 crc kubenswrapper[4806]: I1125 15:27:32.615108 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fr92w\" (UniqueName: \"kubernetes.io/projected/478d1eb7-a443-4692-a605-b7ed450bfef1-kube-api-access-fr92w\") on node \"crc\" DevicePath \"\"" Nov 25 15:27:32 crc kubenswrapper[4806]: I1125 15:27:32.615395 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/478d1eb7-a443-4692-a605-b7ed450bfef1-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 15:27:32 crc kubenswrapper[4806]: I1125 15:27:32.656066 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/478d1eb7-a443-4692-a605-b7ed450bfef1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "478d1eb7-a443-4692-a605-b7ed450bfef1" (UID: "478d1eb7-a443-4692-a605-b7ed450bfef1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:27:32 crc kubenswrapper[4806]: I1125 15:27:32.718424 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/478d1eb7-a443-4692-a605-b7ed450bfef1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 15:27:32 crc kubenswrapper[4806]: I1125 15:27:32.730169 4806 generic.go:334] "Generic (PLEG): container finished" podID="478d1eb7-a443-4692-a605-b7ed450bfef1" containerID="27f62c8db294347500965c7b7570524b61b5de809c10c63e24e5f798a1feb58a" exitCode=0 Nov 25 15:27:32 crc kubenswrapper[4806]: I1125 15:27:32.730218 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kxzst" Nov 25 15:27:32 crc kubenswrapper[4806]: I1125 15:27:32.730214 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kxzst" event={"ID":"478d1eb7-a443-4692-a605-b7ed450bfef1","Type":"ContainerDied","Data":"27f62c8db294347500965c7b7570524b61b5de809c10c63e24e5f798a1feb58a"} Nov 25 15:27:32 crc kubenswrapper[4806]: I1125 15:27:32.730350 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kxzst" event={"ID":"478d1eb7-a443-4692-a605-b7ed450bfef1","Type":"ContainerDied","Data":"32ce040bc4af4ac3f5ebc209a2f6b3d99932331f6a9266f59d90b44493bf7dad"} Nov 25 15:27:32 crc kubenswrapper[4806]: I1125 15:27:32.730372 4806 scope.go:117] "RemoveContainer" containerID="27f62c8db294347500965c7b7570524b61b5de809c10c63e24e5f798a1feb58a" Nov 25 15:27:32 crc kubenswrapper[4806]: I1125 15:27:32.749974 4806 scope.go:117] "RemoveContainer" containerID="b1b8a8c7b8bd68e429af88bc0cec1372641391991e03403e6de77ab897fa84e2" Nov 25 15:27:32 crc kubenswrapper[4806]: I1125 15:27:32.768529 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kxzst"] Nov 25 15:27:32 crc kubenswrapper[4806]: I1125 15:27:32.776945 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kxzst"] Nov 25 15:27:32 crc kubenswrapper[4806]: I1125 15:27:32.782046 4806 scope.go:117] "RemoveContainer" containerID="3af78e7bafe6b97c149ad8b200e24753ed2e63dd8567900048568071d1489fa0" Nov 25 15:27:32 crc kubenswrapper[4806]: I1125 15:27:32.852067 4806 scope.go:117] "RemoveContainer" containerID="27f62c8db294347500965c7b7570524b61b5de809c10c63e24e5f798a1feb58a" Nov 25 15:27:32 crc kubenswrapper[4806]: E1125 15:27:32.852608 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27f62c8db294347500965c7b7570524b61b5de809c10c63e24e5f798a1feb58a\": container with ID starting with 27f62c8db294347500965c7b7570524b61b5de809c10c63e24e5f798a1feb58a not found: ID does not exist" containerID="27f62c8db294347500965c7b7570524b61b5de809c10c63e24e5f798a1feb58a" Nov 25 15:27:32 crc kubenswrapper[4806]: I1125 15:27:32.852640 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27f62c8db294347500965c7b7570524b61b5de809c10c63e24e5f798a1feb58a"} err="failed to get container status \"27f62c8db294347500965c7b7570524b61b5de809c10c63e24e5f798a1feb58a\": rpc error: code = NotFound desc = could not find container \"27f62c8db294347500965c7b7570524b61b5de809c10c63e24e5f798a1feb58a\": container with ID starting with 27f62c8db294347500965c7b7570524b61b5de809c10c63e24e5f798a1feb58a not found: ID does not exist" Nov 25 15:27:32 crc 
kubenswrapper[4806]: I1125 15:27:32.852662 4806 scope.go:117] "RemoveContainer" containerID="b1b8a8c7b8bd68e429af88bc0cec1372641391991e03403e6de77ab897fa84e2" Nov 25 15:27:32 crc kubenswrapper[4806]: E1125 15:27:32.852997 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1b8a8c7b8bd68e429af88bc0cec1372641391991e03403e6de77ab897fa84e2\": container with ID starting with b1b8a8c7b8bd68e429af88bc0cec1372641391991e03403e6de77ab897fa84e2 not found: ID does not exist" containerID="b1b8a8c7b8bd68e429af88bc0cec1372641391991e03403e6de77ab897fa84e2" Nov 25 15:27:32 crc kubenswrapper[4806]: I1125 15:27:32.853039 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1b8a8c7b8bd68e429af88bc0cec1372641391991e03403e6de77ab897fa84e2"} err="failed to get container status \"b1b8a8c7b8bd68e429af88bc0cec1372641391991e03403e6de77ab897fa84e2\": rpc error: code = NotFound desc = could not find container \"b1b8a8c7b8bd68e429af88bc0cec1372641391991e03403e6de77ab897fa84e2\": container with ID starting with b1b8a8c7b8bd68e429af88bc0cec1372641391991e03403e6de77ab897fa84e2 not found: ID does not exist" Nov 25 15:27:32 crc kubenswrapper[4806]: I1125 15:27:32.853064 4806 scope.go:117] "RemoveContainer" containerID="3af78e7bafe6b97c149ad8b200e24753ed2e63dd8567900048568071d1489fa0" Nov 25 15:27:32 crc kubenswrapper[4806]: E1125 15:27:32.853419 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3af78e7bafe6b97c149ad8b200e24753ed2e63dd8567900048568071d1489fa0\": container with ID starting with 3af78e7bafe6b97c149ad8b200e24753ed2e63dd8567900048568071d1489fa0 not found: ID does not exist" containerID="3af78e7bafe6b97c149ad8b200e24753ed2e63dd8567900048568071d1489fa0" Nov 25 15:27:32 crc kubenswrapper[4806]: I1125 15:27:32.853477 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3af78e7bafe6b97c149ad8b200e24753ed2e63dd8567900048568071d1489fa0"} err="failed to get container status \"3af78e7bafe6b97c149ad8b200e24753ed2e63dd8567900048568071d1489fa0\": rpc error: code = NotFound desc = could not find container \"3af78e7bafe6b97c149ad8b200e24753ed2e63dd8567900048568071d1489fa0\": container with ID starting with 3af78e7bafe6b97c149ad8b200e24753ed2e63dd8567900048568071d1489fa0 not found: ID does not exist" Nov 25 15:27:33 crc kubenswrapper[4806]: I1125 15:27:33.742009 4806 generic.go:334] "Generic (PLEG): container finished" podID="e47040af-0961-465d-a57d-b5a86d51d814" containerID="48d6587709bf80e98901b52afb7eb0fe8004dff401779bb5dc847987cbebeada" exitCode=0 Nov 25 15:27:33 crc kubenswrapper[4806]: I1125 15:27:33.742088 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kzwcj" event={"ID":"e47040af-0961-465d-a57d-b5a86d51d814","Type":"ContainerDied","Data":"48d6587709bf80e98901b52afb7eb0fe8004dff401779bb5dc847987cbebeada"} Nov 25 15:27:34 crc kubenswrapper[4806]: I1125 15:27:34.108047 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="478d1eb7-a443-4692-a605-b7ed450bfef1" path="/var/lib/kubelet/pods/478d1eb7-a443-4692-a605-b7ed450bfef1/volumes" Nov 25 15:27:35 crc kubenswrapper[4806]: I1125 15:27:35.249031 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kzwcj" Nov 25 15:27:35 crc kubenswrapper[4806]: I1125 15:27:35.301789 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gbqc\" (UniqueName: \"kubernetes.io/projected/e47040af-0961-465d-a57d-b5a86d51d814-kube-api-access-4gbqc\") pod \"e47040af-0961-465d-a57d-b5a86d51d814\" (UID: \"e47040af-0961-465d-a57d-b5a86d51d814\") " Nov 25 15:27:35 crc kubenswrapper[4806]: I1125 15:27:35.301967 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e47040af-0961-465d-a57d-b5a86d51d814-inventory\") pod \"e47040af-0961-465d-a57d-b5a86d51d814\" (UID: \"e47040af-0961-465d-a57d-b5a86d51d814\") " Nov 25 15:27:35 crc kubenswrapper[4806]: I1125 15:27:35.302286 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e47040af-0961-465d-a57d-b5a86d51d814-ssh-key\") pod \"e47040af-0961-465d-a57d-b5a86d51d814\" (UID: \"e47040af-0961-465d-a57d-b5a86d51d814\") " Nov 25 15:27:35 crc kubenswrapper[4806]: I1125 15:27:35.311751 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e47040af-0961-465d-a57d-b5a86d51d814-kube-api-access-4gbqc" (OuterVolumeSpecName: "kube-api-access-4gbqc") pod "e47040af-0961-465d-a57d-b5a86d51d814" (UID: "e47040af-0961-465d-a57d-b5a86d51d814"). InnerVolumeSpecName "kube-api-access-4gbqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:27:35 crc kubenswrapper[4806]: I1125 15:27:35.332009 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e47040af-0961-465d-a57d-b5a86d51d814-inventory" (OuterVolumeSpecName: "inventory") pod "e47040af-0961-465d-a57d-b5a86d51d814" (UID: "e47040af-0961-465d-a57d-b5a86d51d814"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:27:35 crc kubenswrapper[4806]: I1125 15:27:35.342588 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e47040af-0961-465d-a57d-b5a86d51d814-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e47040af-0961-465d-a57d-b5a86d51d814" (UID: "e47040af-0961-465d-a57d-b5a86d51d814"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:27:35 crc kubenswrapper[4806]: I1125 15:27:35.406006 4806 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e47040af-0961-465d-a57d-b5a86d51d814-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 15:27:35 crc kubenswrapper[4806]: I1125 15:27:35.406759 4806 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e47040af-0961-465d-a57d-b5a86d51d814-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 15:27:35 crc kubenswrapper[4806]: I1125 15:27:35.406845 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gbqc\" (UniqueName: \"kubernetes.io/projected/e47040af-0961-465d-a57d-b5a86d51d814-kube-api-access-4gbqc\") on node \"crc\" DevicePath \"\"" Nov 25 15:27:35 crc kubenswrapper[4806]: I1125 15:27:35.763617 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kzwcj" event={"ID":"e47040af-0961-465d-a57d-b5a86d51d814","Type":"ContainerDied","Data":"5c68fcd8bad0e4ec57497bc25f01ec4eaf3dca73789bfa737e3cd384c79ccd47"} Nov 25 15:27:35 crc kubenswrapper[4806]: I1125 15:27:35.763658 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kzwcj" Nov 25 15:27:35 crc kubenswrapper[4806]: I1125 15:27:35.763661 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c68fcd8bad0e4ec57497bc25f01ec4eaf3dca73789bfa737e3cd384c79ccd47" Nov 25 15:27:35 crc kubenswrapper[4806]: I1125 15:27:35.844908 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwj6w"] Nov 25 15:27:35 crc kubenswrapper[4806]: E1125 15:27:35.845456 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="478d1eb7-a443-4692-a605-b7ed450bfef1" containerName="extract-content" Nov 25 15:27:35 crc kubenswrapper[4806]: I1125 15:27:35.845479 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="478d1eb7-a443-4692-a605-b7ed450bfef1" containerName="extract-content" Nov 25 15:27:35 crc kubenswrapper[4806]: E1125 15:27:35.845492 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="478d1eb7-a443-4692-a605-b7ed450bfef1" containerName="registry-server" Nov 25 15:27:35 crc kubenswrapper[4806]: I1125 15:27:35.845502 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="478d1eb7-a443-4692-a605-b7ed450bfef1" containerName="registry-server" Nov 25 15:27:35 crc kubenswrapper[4806]: E1125 15:27:35.845528 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="478d1eb7-a443-4692-a605-b7ed450bfef1" containerName="extract-utilities" Nov 25 15:27:35 crc kubenswrapper[4806]: I1125 15:27:35.845537 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="478d1eb7-a443-4692-a605-b7ed450bfef1" containerName="extract-utilities" Nov 25 15:27:35 crc kubenswrapper[4806]: E1125 15:27:35.845547 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e47040af-0961-465d-a57d-b5a86d51d814" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 25 15:27:35 crc kubenswrapper[4806]: I1125 15:27:35.845555 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="e47040af-0961-465d-a57d-b5a86d51d814" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 25 15:27:35 crc kubenswrapper[4806]: I1125 15:27:35.845795 4806 
memory_manager.go:354] "RemoveStaleState removing state" podUID="e47040af-0961-465d-a57d-b5a86d51d814" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 25 15:27:35 crc kubenswrapper[4806]: I1125 15:27:35.845825 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="478d1eb7-a443-4692-a605-b7ed450bfef1" containerName="registry-server" Nov 25 15:27:35 crc kubenswrapper[4806]: I1125 15:27:35.846772 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwj6w" Nov 25 15:27:35 crc kubenswrapper[4806]: I1125 15:27:35.850125 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 15:27:35 crc kubenswrapper[4806]: I1125 15:27:35.850518 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 15:27:35 crc kubenswrapper[4806]: I1125 15:27:35.851674 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 15:27:35 crc kubenswrapper[4806]: I1125 15:27:35.852446 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r8q8k" Nov 25 15:27:35 crc kubenswrapper[4806]: I1125 15:27:35.853542 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwj6w"] Nov 25 15:27:35 crc kubenswrapper[4806]: I1125 15:27:35.916089 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ab11811-773f-477f-bb49-59c8dacf771f-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wwj6w\" (UID: \"5ab11811-773f-477f-bb49-59c8dacf771f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwj6w" Nov 25 15:27:35 crc kubenswrapper[4806]: I1125 15:27:35.916276 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjcmb\" (UniqueName: \"kubernetes.io/projected/5ab11811-773f-477f-bb49-59c8dacf771f-kube-api-access-fjcmb\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wwj6w\" (UID: \"5ab11811-773f-477f-bb49-59c8dacf771f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwj6w" Nov 25 15:27:35 crc kubenswrapper[4806]: I1125 15:27:35.916349 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5ab11811-773f-477f-bb49-59c8dacf771f-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wwj6w\" (UID: \"5ab11811-773f-477f-bb49-59c8dacf771f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwj6w" Nov 25 15:27:36 crc kubenswrapper[4806]: I1125 15:27:36.017642 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ab11811-773f-477f-bb49-59c8dacf771f-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wwj6w\" (UID: \"5ab11811-773f-477f-bb49-59c8dacf771f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwj6w" Nov 25 15:27:36 crc kubenswrapper[4806]: I1125 15:27:36.017784 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjcmb\" (UniqueName: 
\"kubernetes.io/projected/5ab11811-773f-477f-bb49-59c8dacf771f-kube-api-access-fjcmb\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wwj6w\" (UID: \"5ab11811-773f-477f-bb49-59c8dacf771f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwj6w" Nov 25 15:27:36 crc kubenswrapper[4806]: I1125 15:27:36.017823 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5ab11811-773f-477f-bb49-59c8dacf771f-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wwj6w\" (UID: \"5ab11811-773f-477f-bb49-59c8dacf771f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwj6w" Nov 25 15:27:36 crc kubenswrapper[4806]: I1125 15:27:36.022220 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5ab11811-773f-477f-bb49-59c8dacf771f-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wwj6w\" (UID: \"5ab11811-773f-477f-bb49-59c8dacf771f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwj6w" Nov 25 15:27:36 crc kubenswrapper[4806]: I1125 15:27:36.024814 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ab11811-773f-477f-bb49-59c8dacf771f-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wwj6w\" (UID: \"5ab11811-773f-477f-bb49-59c8dacf771f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwj6w" Nov 25 15:27:36 crc kubenswrapper[4806]: I1125 15:27:36.034827 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjcmb\" (UniqueName: \"kubernetes.io/projected/5ab11811-773f-477f-bb49-59c8dacf771f-kube-api-access-fjcmb\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-wwj6w\" (UID: \"5ab11811-773f-477f-bb49-59c8dacf771f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwj6w" Nov 25 15:27:36 crc kubenswrapper[4806]: I1125 15:27:36.181204 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwj6w" Nov 25 15:27:36 crc kubenswrapper[4806]: I1125 15:27:36.798607 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwj6w"] Nov 25 15:27:36 crc kubenswrapper[4806]: W1125 15:27:36.828079 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ab11811_773f_477f_bb49_59c8dacf771f.slice/crio-3f2f31e922e46bd032e311fa5e69d3da5fe66aa243c65e93c45491e3a79ca1b5 WatchSource:0}: Error finding container 3f2f31e922e46bd032e311fa5e69d3da5fe66aa243c65e93c45491e3a79ca1b5: Status 404 returned error can't find the container with id 3f2f31e922e46bd032e311fa5e69d3da5fe66aa243c65e93c45491e3a79ca1b5 Nov 25 15:27:37 crc kubenswrapper[4806]: I1125 15:27:37.780588 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwj6w" event={"ID":"5ab11811-773f-477f-bb49-59c8dacf771f","Type":"ContainerStarted","Data":"3f2f31e922e46bd032e311fa5e69d3da5fe66aa243c65e93c45491e3a79ca1b5"} Nov 25 15:27:38 crc kubenswrapper[4806]: I1125 15:27:38.809494 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwj6w" event={"ID":"5ab11811-773f-477f-bb49-59c8dacf771f","Type":"ContainerStarted","Data":"20b08221aad76fe7c42303405ce43744564e1d37a8b9ee64fc751cad891f411e"} Nov 25 15:27:38 crc kubenswrapper[4806]: I1125 15:27:38.842245 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwj6w" podStartSLOduration=3.194154679 podStartE2EDuration="3.84222755s" podCreationTimestamp="2025-11-25 15:27:35 +0000 UTC" firstStartedPulling="2025-11-25 15:27:36.832535241 +0000 UTC m=+2089.484677652" lastFinishedPulling="2025-11-25 15:27:37.480608082 +0000 UTC m=+2090.132750523" observedRunningTime="2025-11-25 15:27:38.829832089 +0000 UTC m=+2091.481974520" watchObservedRunningTime="2025-11-25 15:27:38.84222755 +0000 UTC m=+2091.494369961" Nov 25 15:27:49 crc kubenswrapper[4806]: I1125 15:27:49.047949 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9jknk"] Nov 25 15:27:49 crc kubenswrapper[4806]: I1125 15:27:49.056827 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9jknk"] Nov 25 15:27:50 crc kubenswrapper[4806]: I1125 15:27:50.102856 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="077d373d-365d-4520-8345-d6b636d212fd" path="/var/lib/kubelet/pods/077d373d-365d-4520-8345-d6b636d212fd/volumes" Nov 25 15:28:14 crc kubenswrapper[4806]: I1125 15:28:14.070344 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-rccfb"] Nov 25 15:28:14 crc kubenswrapper[4806]: I1125 15:28:14.079983 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-rccfb"] Nov 25 15:28:14 crc kubenswrapper[4806]: I1125 15:28:14.101087 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9aabac61-808c-46a6-9cc1-e021cb244241" path="/var/lib/kubelet/pods/9aabac61-808c-46a6-9cc1-e021cb244241/volumes" Nov 25 15:28:15 crc kubenswrapper[4806]: I1125 15:28:15.026817 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9lkf4"] Nov 25 15:28:15 crc 
Nov 25 15:27:49 crc kubenswrapper[4806]: I1125 15:27:49.047949 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9jknk"]
Nov 25 15:27:49 crc kubenswrapper[4806]: I1125 15:27:49.056827 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-9jknk"]
Nov 25 15:27:50 crc kubenswrapper[4806]: I1125 15:27:50.102856 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="077d373d-365d-4520-8345-d6b636d212fd" path="/var/lib/kubelet/pods/077d373d-365d-4520-8345-d6b636d212fd/volumes"
Nov 25 15:28:14 crc kubenswrapper[4806]: I1125 15:28:14.070344 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-rccfb"]
Nov 25 15:28:14 crc kubenswrapper[4806]: I1125 15:28:14.079983 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-rccfb"]
Nov 25 15:28:14 crc kubenswrapper[4806]: I1125 15:28:14.101087 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9aabac61-808c-46a6-9cc1-e021cb244241" path="/var/lib/kubelet/pods/9aabac61-808c-46a6-9cc1-e021cb244241/volumes"
Nov 25 15:28:15 crc kubenswrapper[4806]: I1125 15:28:15.026817 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9lkf4"]
Nov 25 15:28:15 crc kubenswrapper[4806]: I1125 15:28:15.037459 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-9lkf4"]
Nov 25 15:28:16 crc kubenswrapper[4806]: I1125 15:28:16.104233 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff83e435-76c3-4d0e-8887-a3c5fc1ea65c" path="/var/lib/kubelet/pods/ff83e435-76c3-4d0e-8887-a3c5fc1ea65c/volumes"
Nov 25 15:28:18 crc kubenswrapper[4806]: I1125 15:28:18.763722 4806 scope.go:117] "RemoveContainer" containerID="8c5c302b90e501f8da855eb275d7729b3f90dedee5d5951e19c86fdc61b99866"
Nov 25 15:28:18 crc kubenswrapper[4806]: I1125 15:28:18.810491 4806 scope.go:117] "RemoveContainer" containerID="3581d6c2acccaef0de95a7122d2e608df6ae0a81a11ff2636f1f7ce978937ac9"
Nov 25 15:28:18 crc kubenswrapper[4806]: I1125 15:28:18.885615 4806 scope.go:117] "RemoveContainer" containerID="db04c7ca2ad0df7c98b812b1531ef2caeaa1884ea73fc8e07fc98d3c06e0e5d0"
Nov 25 15:28:48 crc kubenswrapper[4806]: I1125 15:28:48.935268 4806 patch_prober.go:28] interesting pod/machine-config-daemon-kclf8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 15:28:48 crc kubenswrapper[4806]: I1125 15:28:48.935857 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 15:29:01 crc kubenswrapper[4806]: I1125 15:29:01.045979 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-z7hfp"]
Nov 25 15:29:01 crc kubenswrapper[4806]: I1125 15:29:01.061409 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-z7hfp"]
Nov 25 15:29:02 crc kubenswrapper[4806]: I1125 15:29:02.109456 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8c067c8-89e6-4c27-b894-09ea261d2033" path="/var/lib/kubelet/pods/b8c067c8-89e6-4c27-b894-09ea261d2033/volumes"
Nov 25 15:29:03 crc kubenswrapper[4806]: I1125 15:29:03.680797 4806 generic.go:334] "Generic (PLEG): container finished" podID="5ab11811-773f-477f-bb49-59c8dacf771f" containerID="20b08221aad76fe7c42303405ce43744564e1d37a8b9ee64fc751cad891f411e" exitCode=0
Nov 25 15:29:03 crc kubenswrapper[4806]: I1125 15:29:03.680835 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwj6w" event={"ID":"5ab11811-773f-477f-bb49-59c8dacf771f","Type":"ContainerDied","Data":"20b08221aad76fe7c42303405ce43744564e1d37a8b9ee64fc751cad891f411e"}
Nov 25 15:29:05 crc kubenswrapper[4806]: I1125 15:29:05.178386 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwj6w"
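[Editor's note] The recurring machine-config-daemon liveness failure above is a plain HTTP GET against the container's health endpoint; "connection refused" means nothing is listening on 127.0.0.1:8798 at probe time. A minimal stand-in for what the prober does:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get("http://127.0.0.1:8798/health")
	if err != nil {
		fmt.Println("probe failure:", err) // e.g. "connect: connection refused", as logged
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe result:", resp.Status)
}
```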
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwj6w" Nov 25 15:29:05 crc kubenswrapper[4806]: I1125 15:29:05.319050 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5ab11811-773f-477f-bb49-59c8dacf771f-ssh-key\") pod \"5ab11811-773f-477f-bb49-59c8dacf771f\" (UID: \"5ab11811-773f-477f-bb49-59c8dacf771f\") " Nov 25 15:29:05 crc kubenswrapper[4806]: I1125 15:29:05.319275 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ab11811-773f-477f-bb49-59c8dacf771f-inventory\") pod \"5ab11811-773f-477f-bb49-59c8dacf771f\" (UID: \"5ab11811-773f-477f-bb49-59c8dacf771f\") " Nov 25 15:29:05 crc kubenswrapper[4806]: I1125 15:29:05.319570 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjcmb\" (UniqueName: \"kubernetes.io/projected/5ab11811-773f-477f-bb49-59c8dacf771f-kube-api-access-fjcmb\") pod \"5ab11811-773f-477f-bb49-59c8dacf771f\" (UID: \"5ab11811-773f-477f-bb49-59c8dacf771f\") " Nov 25 15:29:05 crc kubenswrapper[4806]: I1125 15:29:05.335065 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ab11811-773f-477f-bb49-59c8dacf771f-kube-api-access-fjcmb" (OuterVolumeSpecName: "kube-api-access-fjcmb") pod "5ab11811-773f-477f-bb49-59c8dacf771f" (UID: "5ab11811-773f-477f-bb49-59c8dacf771f"). InnerVolumeSpecName "kube-api-access-fjcmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:29:05 crc kubenswrapper[4806]: I1125 15:29:05.355113 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ab11811-773f-477f-bb49-59c8dacf771f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5ab11811-773f-477f-bb49-59c8dacf771f" (UID: "5ab11811-773f-477f-bb49-59c8dacf771f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:29:05 crc kubenswrapper[4806]: I1125 15:29:05.379494 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ab11811-773f-477f-bb49-59c8dacf771f-inventory" (OuterVolumeSpecName: "inventory") pod "5ab11811-773f-477f-bb49-59c8dacf771f" (UID: "5ab11811-773f-477f-bb49-59c8dacf771f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:29:05 crc kubenswrapper[4806]: I1125 15:29:05.423053 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjcmb\" (UniqueName: \"kubernetes.io/projected/5ab11811-773f-477f-bb49-59c8dacf771f-kube-api-access-fjcmb\") on node \"crc\" DevicePath \"\"" Nov 25 15:29:05 crc kubenswrapper[4806]: I1125 15:29:05.423105 4806 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5ab11811-773f-477f-bb49-59c8dacf771f-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 15:29:05 crc kubenswrapper[4806]: I1125 15:29:05.423126 4806 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5ab11811-773f-477f-bb49-59c8dacf771f-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 15:29:05 crc kubenswrapper[4806]: I1125 15:29:05.702046 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwj6w" event={"ID":"5ab11811-773f-477f-bb49-59c8dacf771f","Type":"ContainerDied","Data":"3f2f31e922e46bd032e311fa5e69d3da5fe66aa243c65e93c45491e3a79ca1b5"} Nov 25 15:29:05 crc kubenswrapper[4806]: I1125 15:29:05.702092 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f2f31e922e46bd032e311fa5e69d3da5fe66aa243c65e93c45491e3a79ca1b5" Nov 25 15:29:05 crc kubenswrapper[4806]: I1125 15:29:05.702087 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-wwj6w" Nov 25 15:29:05 crc kubenswrapper[4806]: I1125 15:29:05.814385 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-59ddg"] Nov 25 15:29:05 crc kubenswrapper[4806]: E1125 15:29:05.814781 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ab11811-773f-477f-bb49-59c8dacf771f" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 25 15:29:05 crc kubenswrapper[4806]: I1125 15:29:05.814800 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ab11811-773f-477f-bb49-59c8dacf771f" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 25 15:29:05 crc kubenswrapper[4806]: I1125 15:29:05.815014 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ab11811-773f-477f-bb49-59c8dacf771f" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 25 15:29:05 crc kubenswrapper[4806]: I1125 15:29:05.815725 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-59ddg" Nov 25 15:29:05 crc kubenswrapper[4806]: I1125 15:29:05.818231 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r8q8k" Nov 25 15:29:05 crc kubenswrapper[4806]: I1125 15:29:05.818543 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 15:29:05 crc kubenswrapper[4806]: I1125 15:29:05.818766 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 15:29:05 crc kubenswrapper[4806]: I1125 15:29:05.820593 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 15:29:05 crc kubenswrapper[4806]: I1125 15:29:05.830898 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-59ddg"] Nov 25 15:29:05 crc kubenswrapper[4806]: I1125 15:29:05.935118 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dc9534cb-ed46-40c5-918b-d20679144d6f-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-59ddg\" (UID: \"dc9534cb-ed46-40c5-918b-d20679144d6f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-59ddg" Nov 25 15:29:05 crc kubenswrapper[4806]: I1125 15:29:05.935201 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xccj6\" (UniqueName: \"kubernetes.io/projected/dc9534cb-ed46-40c5-918b-d20679144d6f-kube-api-access-xccj6\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-59ddg\" (UID: \"dc9534cb-ed46-40c5-918b-d20679144d6f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-59ddg" Nov 25 15:29:05 crc kubenswrapper[4806]: I1125 15:29:05.935273 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc9534cb-ed46-40c5-918b-d20679144d6f-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-59ddg\" (UID: \"dc9534cb-ed46-40c5-918b-d20679144d6f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-59ddg" Nov 25 15:29:06 crc kubenswrapper[4806]: I1125 15:29:06.037415 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xccj6\" (UniqueName: \"kubernetes.io/projected/dc9534cb-ed46-40c5-918b-d20679144d6f-kube-api-access-xccj6\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-59ddg\" (UID: \"dc9534cb-ed46-40c5-918b-d20679144d6f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-59ddg" Nov 25 15:29:06 crc kubenswrapper[4806]: I1125 15:29:06.037547 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc9534cb-ed46-40c5-918b-d20679144d6f-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-59ddg\" (UID: \"dc9534cb-ed46-40c5-918b-d20679144d6f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-59ddg" Nov 25 15:29:06 crc kubenswrapper[4806]: I1125 15:29:06.037684 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dc9534cb-ed46-40c5-918b-d20679144d6f-ssh-key\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-59ddg\" (UID: \"dc9534cb-ed46-40c5-918b-d20679144d6f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-59ddg" Nov 25 15:29:06 crc kubenswrapper[4806]: I1125 15:29:06.041923 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc9534cb-ed46-40c5-918b-d20679144d6f-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-59ddg\" (UID: \"dc9534cb-ed46-40c5-918b-d20679144d6f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-59ddg" Nov 25 15:29:06 crc kubenswrapper[4806]: I1125 15:29:06.042561 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dc9534cb-ed46-40c5-918b-d20679144d6f-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-59ddg\" (UID: \"dc9534cb-ed46-40c5-918b-d20679144d6f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-59ddg" Nov 25 15:29:06 crc kubenswrapper[4806]: I1125 15:29:06.055893 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xccj6\" (UniqueName: \"kubernetes.io/projected/dc9534cb-ed46-40c5-918b-d20679144d6f-kube-api-access-xccj6\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-59ddg\" (UID: \"dc9534cb-ed46-40c5-918b-d20679144d6f\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-59ddg" Nov 25 15:29:06 crc kubenswrapper[4806]: I1125 15:29:06.134675 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-59ddg" Nov 25 15:29:06 crc kubenswrapper[4806]: W1125 15:29:06.691796 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc9534cb_ed46_40c5_918b_d20679144d6f.slice/crio-408e872896007c6488454721e5cfb4adfb48c8230668966ba9b33ba4a5b67bfb WatchSource:0}: Error finding container 408e872896007c6488454721e5cfb4adfb48c8230668966ba9b33ba4a5b67bfb: Status 404 returned error can't find the container with id 408e872896007c6488454721e5cfb4adfb48c8230668966ba9b33ba4a5b67bfb Nov 25 15:29:06 crc kubenswrapper[4806]: I1125 15:29:06.692136 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-59ddg"] Nov 25 15:29:06 crc kubenswrapper[4806]: I1125 15:29:06.712741 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-59ddg" event={"ID":"dc9534cb-ed46-40c5-918b-d20679144d6f","Type":"ContainerStarted","Data":"408e872896007c6488454721e5cfb4adfb48c8230668966ba9b33ba4a5b67bfb"} Nov 25 15:29:07 crc kubenswrapper[4806]: I1125 15:29:07.724355 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-59ddg" event={"ID":"dc9534cb-ed46-40c5-918b-d20679144d6f","Type":"ContainerStarted","Data":"0a7243f0028d64d86e5d0f5f262fd626faaf776946f3ec5aaa9cdb34a9288b0b"} Nov 25 15:29:07 crc kubenswrapper[4806]: I1125 15:29:07.747235 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-59ddg" podStartSLOduration=2.207403681 podStartE2EDuration="2.747214512s" podCreationTimestamp="2025-11-25 15:29:05 +0000 UTC" firstStartedPulling="2025-11-25 15:29:06.694122683 +0000 UTC m=+2179.346265094" 
lastFinishedPulling="2025-11-25 15:29:07.233933514 +0000 UTC m=+2179.886075925" observedRunningTime="2025-11-25 15:29:07.738854935 +0000 UTC m=+2180.390997346" watchObservedRunningTime="2025-11-25 15:29:07.747214512 +0000 UTC m=+2180.399356923" Nov 25 15:29:12 crc kubenswrapper[4806]: I1125 15:29:12.775000 4806 generic.go:334] "Generic (PLEG): container finished" podID="dc9534cb-ed46-40c5-918b-d20679144d6f" containerID="0a7243f0028d64d86e5d0f5f262fd626faaf776946f3ec5aaa9cdb34a9288b0b" exitCode=0 Nov 25 15:29:12 crc kubenswrapper[4806]: I1125 15:29:12.775101 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-59ddg" event={"ID":"dc9534cb-ed46-40c5-918b-d20679144d6f","Type":"ContainerDied","Data":"0a7243f0028d64d86e5d0f5f262fd626faaf776946f3ec5aaa9cdb34a9288b0b"} Nov 25 15:29:14 crc kubenswrapper[4806]: I1125 15:29:14.328701 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-59ddg" Nov 25 15:29:14 crc kubenswrapper[4806]: I1125 15:29:14.417849 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dc9534cb-ed46-40c5-918b-d20679144d6f-ssh-key\") pod \"dc9534cb-ed46-40c5-918b-d20679144d6f\" (UID: \"dc9534cb-ed46-40c5-918b-d20679144d6f\") " Nov 25 15:29:14 crc kubenswrapper[4806]: I1125 15:29:14.418096 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc9534cb-ed46-40c5-918b-d20679144d6f-inventory\") pod \"dc9534cb-ed46-40c5-918b-d20679144d6f\" (UID: \"dc9534cb-ed46-40c5-918b-d20679144d6f\") " Nov 25 15:29:14 crc kubenswrapper[4806]: I1125 15:29:14.418195 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xccj6\" (UniqueName: \"kubernetes.io/projected/dc9534cb-ed46-40c5-918b-d20679144d6f-kube-api-access-xccj6\") pod \"dc9534cb-ed46-40c5-918b-d20679144d6f\" (UID: \"dc9534cb-ed46-40c5-918b-d20679144d6f\") " Nov 25 15:29:14 crc kubenswrapper[4806]: I1125 15:29:14.428453 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc9534cb-ed46-40c5-918b-d20679144d6f-kube-api-access-xccj6" (OuterVolumeSpecName: "kube-api-access-xccj6") pod "dc9534cb-ed46-40c5-918b-d20679144d6f" (UID: "dc9534cb-ed46-40c5-918b-d20679144d6f"). InnerVolumeSpecName "kube-api-access-xccj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:29:14 crc kubenswrapper[4806]: I1125 15:29:14.453194 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc9534cb-ed46-40c5-918b-d20679144d6f-inventory" (OuterVolumeSpecName: "inventory") pod "dc9534cb-ed46-40c5-918b-d20679144d6f" (UID: "dc9534cb-ed46-40c5-918b-d20679144d6f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:29:14 crc kubenswrapper[4806]: I1125 15:29:14.457854 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc9534cb-ed46-40c5-918b-d20679144d6f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "dc9534cb-ed46-40c5-918b-d20679144d6f" (UID: "dc9534cb-ed46-40c5-918b-d20679144d6f"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:29:14 crc kubenswrapper[4806]: I1125 15:29:14.521636 4806 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc9534cb-ed46-40c5-918b-d20679144d6f-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 15:29:14 crc kubenswrapper[4806]: I1125 15:29:14.522031 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xccj6\" (UniqueName: \"kubernetes.io/projected/dc9534cb-ed46-40c5-918b-d20679144d6f-kube-api-access-xccj6\") on node \"crc\" DevicePath \"\"" Nov 25 15:29:14 crc kubenswrapper[4806]: I1125 15:29:14.522142 4806 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dc9534cb-ed46-40c5-918b-d20679144d6f-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 15:29:14 crc kubenswrapper[4806]: I1125 15:29:14.797449 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-59ddg" event={"ID":"dc9534cb-ed46-40c5-918b-d20679144d6f","Type":"ContainerDied","Data":"408e872896007c6488454721e5cfb4adfb48c8230668966ba9b33ba4a5b67bfb"} Nov 25 15:29:14 crc kubenswrapper[4806]: I1125 15:29:14.797499 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="408e872896007c6488454721e5cfb4adfb48c8230668966ba9b33ba4a5b67bfb" Nov 25 15:29:14 crc kubenswrapper[4806]: I1125 15:29:14.797546 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-59ddg" Nov 25 15:29:14 crc kubenswrapper[4806]: I1125 15:29:14.865995 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-dt7mk"] Nov 25 15:29:14 crc kubenswrapper[4806]: E1125 15:29:14.866527 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc9534cb-ed46-40c5-918b-d20679144d6f" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 25 15:29:14 crc kubenswrapper[4806]: I1125 15:29:14.866553 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc9534cb-ed46-40c5-918b-d20679144d6f" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 25 15:29:14 crc kubenswrapper[4806]: I1125 15:29:14.866851 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc9534cb-ed46-40c5-918b-d20679144d6f" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 25 15:29:14 crc kubenswrapper[4806]: I1125 15:29:14.867779 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dt7mk" Nov 25 15:29:14 crc kubenswrapper[4806]: I1125 15:29:14.871187 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 15:29:14 crc kubenswrapper[4806]: I1125 15:29:14.871412 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r8q8k" Nov 25 15:29:14 crc kubenswrapper[4806]: I1125 15:29:14.871979 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 15:29:14 crc kubenswrapper[4806]: I1125 15:29:14.872301 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 15:29:14 crc kubenswrapper[4806]: I1125 15:29:14.893885 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-dt7mk"] Nov 25 15:29:14 crc kubenswrapper[4806]: I1125 15:29:14.930587 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whdnw\" (UniqueName: \"kubernetes.io/projected/5874b1c9-f997-4c96-b5a4-b012416932ba-kube-api-access-whdnw\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dt7mk\" (UID: \"5874b1c9-f997-4c96-b5a4-b012416932ba\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dt7mk" Nov 25 15:29:14 crc kubenswrapper[4806]: I1125 15:29:14.930925 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5874b1c9-f997-4c96-b5a4-b012416932ba-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dt7mk\" (UID: \"5874b1c9-f997-4c96-b5a4-b012416932ba\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dt7mk" Nov 25 15:29:14 crc kubenswrapper[4806]: I1125 15:29:14.931031 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5874b1c9-f997-4c96-b5a4-b012416932ba-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dt7mk\" (UID: \"5874b1c9-f997-4c96-b5a4-b012416932ba\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dt7mk" Nov 25 15:29:15 crc kubenswrapper[4806]: I1125 15:29:15.032790 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5874b1c9-f997-4c96-b5a4-b012416932ba-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dt7mk\" (UID: \"5874b1c9-f997-4c96-b5a4-b012416932ba\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dt7mk" Nov 25 15:29:15 crc kubenswrapper[4806]: I1125 15:29:15.032894 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5874b1c9-f997-4c96-b5a4-b012416932ba-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dt7mk\" (UID: \"5874b1c9-f997-4c96-b5a4-b012416932ba\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dt7mk" Nov 25 15:29:15 crc kubenswrapper[4806]: I1125 15:29:15.033029 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whdnw\" (UniqueName: \"kubernetes.io/projected/5874b1c9-f997-4c96-b5a4-b012416932ba-kube-api-access-whdnw\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dt7mk\" (UID: 
\"5874b1c9-f997-4c96-b5a4-b012416932ba\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dt7mk" Nov 25 15:29:15 crc kubenswrapper[4806]: I1125 15:29:15.037005 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5874b1c9-f997-4c96-b5a4-b012416932ba-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dt7mk\" (UID: \"5874b1c9-f997-4c96-b5a4-b012416932ba\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dt7mk" Nov 25 15:29:15 crc kubenswrapper[4806]: I1125 15:29:15.040760 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5874b1c9-f997-4c96-b5a4-b012416932ba-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dt7mk\" (UID: \"5874b1c9-f997-4c96-b5a4-b012416932ba\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dt7mk" Nov 25 15:29:15 crc kubenswrapper[4806]: I1125 15:29:15.051973 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whdnw\" (UniqueName: \"kubernetes.io/projected/5874b1c9-f997-4c96-b5a4-b012416932ba-kube-api-access-whdnw\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-dt7mk\" (UID: \"5874b1c9-f997-4c96-b5a4-b012416932ba\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dt7mk" Nov 25 15:29:15 crc kubenswrapper[4806]: I1125 15:29:15.199905 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dt7mk" Nov 25 15:29:15 crc kubenswrapper[4806]: I1125 15:29:15.729172 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-dt7mk"] Nov 25 15:29:15 crc kubenswrapper[4806]: I1125 15:29:15.819334 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dt7mk" event={"ID":"5874b1c9-f997-4c96-b5a4-b012416932ba","Type":"ContainerStarted","Data":"8beb9c1a451969a34165985eb90b71efe638c408faeac774cea9454f7c4f9e5c"} Nov 25 15:29:16 crc kubenswrapper[4806]: I1125 15:29:16.829982 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dt7mk" event={"ID":"5874b1c9-f997-4c96-b5a4-b012416932ba","Type":"ContainerStarted","Data":"6aba80275936e38c6bb83439d5e11d5ce07ab74cb138121903929327ba8fc0d4"} Nov 25 15:29:16 crc kubenswrapper[4806]: I1125 15:29:16.853163 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dt7mk" podStartSLOduration=2.21606465 podStartE2EDuration="2.853138159s" podCreationTimestamp="2025-11-25 15:29:14 +0000 UTC" firstStartedPulling="2025-11-25 15:29:15.731179347 +0000 UTC m=+2188.383321748" lastFinishedPulling="2025-11-25 15:29:16.368252856 +0000 UTC m=+2189.020395257" observedRunningTime="2025-11-25 15:29:16.843936728 +0000 UTC m=+2189.496079139" watchObservedRunningTime="2025-11-25 15:29:16.853138159 +0000 UTC m=+2189.505280570" Nov 25 15:29:18 crc kubenswrapper[4806]: I1125 15:29:18.935132 4806 patch_prober.go:28] interesting pod/machine-config-daemon-kclf8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:29:18 crc kubenswrapper[4806]: I1125 15:29:18.935474 4806 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:29:19 crc kubenswrapper[4806]: I1125 15:29:19.045391 4806 scope.go:117] "RemoveContainer" containerID="08f6ce0a3f57746056978a8137cdc1c12db6ae61996dc18e050cfd898ca45d62" Nov 25 15:29:32 crc kubenswrapper[4806]: I1125 15:29:32.959139 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qmvx5"] Nov 25 15:29:32 crc kubenswrapper[4806]: I1125 15:29:32.964809 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qmvx5" Nov 25 15:29:32 crc kubenswrapper[4806]: I1125 15:29:32.985521 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qmvx5"] Nov 25 15:29:33 crc kubenswrapper[4806]: I1125 15:29:33.034622 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9b459a5-4fe4-40a8-aff3-31cd5891784b-utilities\") pod \"redhat-marketplace-qmvx5\" (UID: \"f9b459a5-4fe4-40a8-aff3-31cd5891784b\") " pod="openshift-marketplace/redhat-marketplace-qmvx5" Nov 25 15:29:33 crc kubenswrapper[4806]: I1125 15:29:33.034723 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qmfz\" (UniqueName: \"kubernetes.io/projected/f9b459a5-4fe4-40a8-aff3-31cd5891784b-kube-api-access-6qmfz\") pod \"redhat-marketplace-qmvx5\" (UID: \"f9b459a5-4fe4-40a8-aff3-31cd5891784b\") " pod="openshift-marketplace/redhat-marketplace-qmvx5" Nov 25 15:29:33 crc kubenswrapper[4806]: I1125 15:29:33.034874 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9b459a5-4fe4-40a8-aff3-31cd5891784b-catalog-content\") pod \"redhat-marketplace-qmvx5\" (UID: \"f9b459a5-4fe4-40a8-aff3-31cd5891784b\") " pod="openshift-marketplace/redhat-marketplace-qmvx5" Nov 25 15:29:33 crc kubenswrapper[4806]: I1125 15:29:33.136927 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9b459a5-4fe4-40a8-aff3-31cd5891784b-utilities\") pod \"redhat-marketplace-qmvx5\" (UID: \"f9b459a5-4fe4-40a8-aff3-31cd5891784b\") " pod="openshift-marketplace/redhat-marketplace-qmvx5" Nov 25 15:29:33 crc kubenswrapper[4806]: I1125 15:29:33.136995 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qmfz\" (UniqueName: \"kubernetes.io/projected/f9b459a5-4fe4-40a8-aff3-31cd5891784b-kube-api-access-6qmfz\") pod \"redhat-marketplace-qmvx5\" (UID: \"f9b459a5-4fe4-40a8-aff3-31cd5891784b\") " pod="openshift-marketplace/redhat-marketplace-qmvx5" Nov 25 15:29:33 crc kubenswrapper[4806]: I1125 15:29:33.137026 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9b459a5-4fe4-40a8-aff3-31cd5891784b-catalog-content\") pod \"redhat-marketplace-qmvx5\" (UID: \"f9b459a5-4fe4-40a8-aff3-31cd5891784b\") " pod="openshift-marketplace/redhat-marketplace-qmvx5" Nov 25 15:29:33 crc kubenswrapper[4806]: I1125 15:29:33.137610 4806 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9b459a5-4fe4-40a8-aff3-31cd5891784b-utilities\") pod \"redhat-marketplace-qmvx5\" (UID: \"f9b459a5-4fe4-40a8-aff3-31cd5891784b\") " pod="openshift-marketplace/redhat-marketplace-qmvx5" Nov 25 15:29:33 crc kubenswrapper[4806]: I1125 15:29:33.137646 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9b459a5-4fe4-40a8-aff3-31cd5891784b-catalog-content\") pod \"redhat-marketplace-qmvx5\" (UID: \"f9b459a5-4fe4-40a8-aff3-31cd5891784b\") " pod="openshift-marketplace/redhat-marketplace-qmvx5" Nov 25 15:29:33 crc kubenswrapper[4806]: I1125 15:29:33.166709 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qmfz\" (UniqueName: \"kubernetes.io/projected/f9b459a5-4fe4-40a8-aff3-31cd5891784b-kube-api-access-6qmfz\") pod \"redhat-marketplace-qmvx5\" (UID: \"f9b459a5-4fe4-40a8-aff3-31cd5891784b\") " pod="openshift-marketplace/redhat-marketplace-qmvx5" Nov 25 15:29:33 crc kubenswrapper[4806]: I1125 15:29:33.297014 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qmvx5" Nov 25 15:29:33 crc kubenswrapper[4806]: I1125 15:29:33.815429 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qmvx5"] Nov 25 15:29:33 crc kubenswrapper[4806]: W1125 15:29:33.817102 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9b459a5_4fe4_40a8_aff3_31cd5891784b.slice/crio-fbe7740058338c8f78da21bf338e4df291ff3a1860005fed899427cc7f918655 WatchSource:0}: Error finding container fbe7740058338c8f78da21bf338e4df291ff3a1860005fed899427cc7f918655: Status 404 returned error can't find the container with id fbe7740058338c8f78da21bf338e4df291ff3a1860005fed899427cc7f918655 Nov 25 15:29:34 crc kubenswrapper[4806]: I1125 15:29:34.006748 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmvx5" event={"ID":"f9b459a5-4fe4-40a8-aff3-31cd5891784b","Type":"ContainerStarted","Data":"fbe7740058338c8f78da21bf338e4df291ff3a1860005fed899427cc7f918655"} Nov 25 15:29:35 crc kubenswrapper[4806]: I1125 15:29:35.020495 4806 generic.go:334] "Generic (PLEG): container finished" podID="f9b459a5-4fe4-40a8-aff3-31cd5891784b" containerID="4c7bc7f204aacb21f8dcf5595e699bedb2215945ee12955e55a0ee30834c555c" exitCode=0 Nov 25 15:29:35 crc kubenswrapper[4806]: I1125 15:29:35.020612 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmvx5" event={"ID":"f9b459a5-4fe4-40a8-aff3-31cd5891784b","Type":"ContainerDied","Data":"4c7bc7f204aacb21f8dcf5595e699bedb2215945ee12955e55a0ee30834c555c"} Nov 25 15:29:39 crc kubenswrapper[4806]: I1125 15:29:39.067621 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmvx5" event={"ID":"f9b459a5-4fe4-40a8-aff3-31cd5891784b","Type":"ContainerStarted","Data":"914085385de1d007599d592bee4b49f3d4714c9b2b6e93f96beff02c7b686ef4"} Nov 25 15:29:41 crc kubenswrapper[4806]: I1125 15:29:41.086459 4806 generic.go:334] "Generic (PLEG): container finished" podID="f9b459a5-4fe4-40a8-aff3-31cd5891784b" containerID="914085385de1d007599d592bee4b49f3d4714c9b2b6e93f96beff02c7b686ef4" exitCode=0 Nov 25 15:29:41 crc kubenswrapper[4806]: I1125 15:29:41.086530 4806 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-marketplace-qmvx5" event={"ID":"f9b459a5-4fe4-40a8-aff3-31cd5891784b","Type":"ContainerDied","Data":"914085385de1d007599d592bee4b49f3d4714c9b2b6e93f96beff02c7b686ef4"} Nov 25 15:29:43 crc kubenswrapper[4806]: I1125 15:29:43.121009 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmvx5" event={"ID":"f9b459a5-4fe4-40a8-aff3-31cd5891784b","Type":"ContainerStarted","Data":"6f6d7008c793ffd36e7bf682f3071ce817a0b224f1c5a371fb683f1442d68ee9"} Nov 25 15:29:43 crc kubenswrapper[4806]: I1125 15:29:43.152832 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qmvx5" podStartSLOduration=3.5687375919999997 podStartE2EDuration="11.152813226s" podCreationTimestamp="2025-11-25 15:29:32 +0000 UTC" firstStartedPulling="2025-11-25 15:29:35.022721656 +0000 UTC m=+2207.674864077" lastFinishedPulling="2025-11-25 15:29:42.60679728 +0000 UTC m=+2215.258939711" observedRunningTime="2025-11-25 15:29:43.147913865 +0000 UTC m=+2215.800056316" watchObservedRunningTime="2025-11-25 15:29:43.152813226 +0000 UTC m=+2215.804955637" Nov 25 15:29:43 crc kubenswrapper[4806]: I1125 15:29:43.297143 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qmvx5" Nov 25 15:29:43 crc kubenswrapper[4806]: I1125 15:29:43.297238 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qmvx5" Nov 25 15:29:44 crc kubenswrapper[4806]: I1125 15:29:44.346809 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-qmvx5" podUID="f9b459a5-4fe4-40a8-aff3-31cd5891784b" containerName="registry-server" probeResult="failure" output=< Nov 25 15:29:44 crc kubenswrapper[4806]: timeout: failed to connect service ":50051" within 1s Nov 25 15:29:44 crc kubenswrapper[4806]: > Nov 25 15:29:48 crc kubenswrapper[4806]: I1125 15:29:48.935032 4806 patch_prober.go:28] interesting pod/machine-config-daemon-kclf8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:29:48 crc kubenswrapper[4806]: I1125 15:29:48.935580 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:29:48 crc kubenswrapper[4806]: I1125 15:29:48.935630 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" Nov 25 15:29:48 crc kubenswrapper[4806]: I1125 15:29:48.936638 4806 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1315d833b7ecfd3e5832ff41afdffceaf3dbae9c2727fcd8a0fb442fcbda555a"} pod="openshift-machine-config-operator/machine-config-daemon-kclf8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 15:29:48 crc kubenswrapper[4806]: I1125 15:29:48.936705 4806 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" containerID="cri-o://1315d833b7ecfd3e5832ff41afdffceaf3dbae9c2727fcd8a0fb442fcbda555a" gracePeriod=600 Nov 25 15:29:50 crc kubenswrapper[4806]: I1125 15:29:50.188601 4806 generic.go:334] "Generic (PLEG): container finished" podID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerID="1315d833b7ecfd3e5832ff41afdffceaf3dbae9c2727fcd8a0fb442fcbda555a" exitCode=0 Nov 25 15:29:50 crc kubenswrapper[4806]: I1125 15:29:50.188678 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" event={"ID":"39baff20-1e9a-48b1-8872-155c5ad5931d","Type":"ContainerDied","Data":"1315d833b7ecfd3e5832ff41afdffceaf3dbae9c2727fcd8a0fb442fcbda555a"} Nov 25 15:29:50 crc kubenswrapper[4806]: I1125 15:29:50.188906 4806 scope.go:117] "RemoveContainer" containerID="ecc3d828107059f876e2f284e3f9b578d143aeaad7a17d069f81cf6860e7fd12" Nov 25 15:29:52 crc kubenswrapper[4806]: I1125 15:29:52.215775 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" event={"ID":"39baff20-1e9a-48b1-8872-155c5ad5931d","Type":"ContainerStarted","Data":"20ed65ea27bdbc3843bf7c80ddc4dc5177e737e42cad142718c0a7ddba113d44"} Nov 25 15:29:53 crc kubenswrapper[4806]: I1125 15:29:53.356987 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qmvx5" Nov 25 15:29:53 crc kubenswrapper[4806]: I1125 15:29:53.413654 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qmvx5" Nov 25 15:29:53 crc kubenswrapper[4806]: I1125 15:29:53.591425 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qmvx5"] Nov 25 15:29:55 crc kubenswrapper[4806]: I1125 15:29:55.247467 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qmvx5" podUID="f9b459a5-4fe4-40a8-aff3-31cd5891784b" containerName="registry-server" containerID="cri-o://6f6d7008c793ffd36e7bf682f3071ce817a0b224f1c5a371fb683f1442d68ee9" gracePeriod=2 Nov 25 15:29:55 crc kubenswrapper[4806]: I1125 15:29:55.821281 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qmvx5" Nov 25 15:29:55 crc kubenswrapper[4806]: I1125 15:29:55.927418 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9b459a5-4fe4-40a8-aff3-31cd5891784b-catalog-content\") pod \"f9b459a5-4fe4-40a8-aff3-31cd5891784b\" (UID: \"f9b459a5-4fe4-40a8-aff3-31cd5891784b\") " Nov 25 15:29:55 crc kubenswrapper[4806]: I1125 15:29:55.927802 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qmfz\" (UniqueName: \"kubernetes.io/projected/f9b459a5-4fe4-40a8-aff3-31cd5891784b-kube-api-access-6qmfz\") pod \"f9b459a5-4fe4-40a8-aff3-31cd5891784b\" (UID: \"f9b459a5-4fe4-40a8-aff3-31cd5891784b\") " Nov 25 15:29:55 crc kubenswrapper[4806]: I1125 15:29:55.927902 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9b459a5-4fe4-40a8-aff3-31cd5891784b-utilities\") pod \"f9b459a5-4fe4-40a8-aff3-31cd5891784b\" (UID: \"f9b459a5-4fe4-40a8-aff3-31cd5891784b\") " Nov 25 15:29:55 crc kubenswrapper[4806]: I1125 15:29:55.928738 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9b459a5-4fe4-40a8-aff3-31cd5891784b-utilities" (OuterVolumeSpecName: "utilities") pod "f9b459a5-4fe4-40a8-aff3-31cd5891784b" (UID: "f9b459a5-4fe4-40a8-aff3-31cd5891784b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:29:55 crc kubenswrapper[4806]: I1125 15:29:55.934387 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9b459a5-4fe4-40a8-aff3-31cd5891784b-kube-api-access-6qmfz" (OuterVolumeSpecName: "kube-api-access-6qmfz") pod "f9b459a5-4fe4-40a8-aff3-31cd5891784b" (UID: "f9b459a5-4fe4-40a8-aff3-31cd5891784b"). InnerVolumeSpecName "kube-api-access-6qmfz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:29:55 crc kubenswrapper[4806]: I1125 15:29:55.950520 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9b459a5-4fe4-40a8-aff3-31cd5891784b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9b459a5-4fe4-40a8-aff3-31cd5891784b" (UID: "f9b459a5-4fe4-40a8-aff3-31cd5891784b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:29:56 crc kubenswrapper[4806]: I1125 15:29:56.030574 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9b459a5-4fe4-40a8-aff3-31cd5891784b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 15:29:56 crc kubenswrapper[4806]: I1125 15:29:56.030613 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qmfz\" (UniqueName: \"kubernetes.io/projected/f9b459a5-4fe4-40a8-aff3-31cd5891784b-kube-api-access-6qmfz\") on node \"crc\" DevicePath \"\"" Nov 25 15:29:56 crc kubenswrapper[4806]: I1125 15:29:56.030624 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9b459a5-4fe4-40a8-aff3-31cd5891784b-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 15:29:56 crc kubenswrapper[4806]: I1125 15:29:56.266874 4806 generic.go:334] "Generic (PLEG): container finished" podID="f9b459a5-4fe4-40a8-aff3-31cd5891784b" containerID="6f6d7008c793ffd36e7bf682f3071ce817a0b224f1c5a371fb683f1442d68ee9" exitCode=0 Nov 25 15:29:56 crc kubenswrapper[4806]: I1125 15:29:56.266933 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qmvx5" Nov 25 15:29:56 crc kubenswrapper[4806]: I1125 15:29:56.266930 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmvx5" event={"ID":"f9b459a5-4fe4-40a8-aff3-31cd5891784b","Type":"ContainerDied","Data":"6f6d7008c793ffd36e7bf682f3071ce817a0b224f1c5a371fb683f1442d68ee9"} Nov 25 15:29:56 crc kubenswrapper[4806]: I1125 15:29:56.267047 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qmvx5" event={"ID":"f9b459a5-4fe4-40a8-aff3-31cd5891784b","Type":"ContainerDied","Data":"fbe7740058338c8f78da21bf338e4df291ff3a1860005fed899427cc7f918655"} Nov 25 15:29:56 crc kubenswrapper[4806]: I1125 15:29:56.267075 4806 scope.go:117] "RemoveContainer" containerID="6f6d7008c793ffd36e7bf682f3071ce817a0b224f1c5a371fb683f1442d68ee9" Nov 25 15:29:56 crc kubenswrapper[4806]: I1125 15:29:56.293378 4806 scope.go:117] "RemoveContainer" containerID="914085385de1d007599d592bee4b49f3d4714c9b2b6e93f96beff02c7b686ef4" Nov 25 15:29:56 crc kubenswrapper[4806]: I1125 15:29:56.295176 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qmvx5"] Nov 25 15:29:56 crc kubenswrapper[4806]: I1125 15:29:56.306562 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qmvx5"] Nov 25 15:29:56 crc kubenswrapper[4806]: I1125 15:29:56.311844 4806 scope.go:117] "RemoveContainer" containerID="4c7bc7f204aacb21f8dcf5595e699bedb2215945ee12955e55a0ee30834c555c" Nov 25 15:29:56 crc kubenswrapper[4806]: I1125 15:29:56.367875 4806 scope.go:117] "RemoveContainer" containerID="6f6d7008c793ffd36e7bf682f3071ce817a0b224f1c5a371fb683f1442d68ee9" Nov 25 15:29:56 crc kubenswrapper[4806]: E1125 15:29:56.368457 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f6d7008c793ffd36e7bf682f3071ce817a0b224f1c5a371fb683f1442d68ee9\": container with ID starting with 6f6d7008c793ffd36e7bf682f3071ce817a0b224f1c5a371fb683f1442d68ee9 not found: ID does not exist" containerID="6f6d7008c793ffd36e7bf682f3071ce817a0b224f1c5a371fb683f1442d68ee9" Nov 25 15:29:56 crc kubenswrapper[4806]: I1125 15:29:56.368507 4806 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f6d7008c793ffd36e7bf682f3071ce817a0b224f1c5a371fb683f1442d68ee9"} err="failed to get container status \"6f6d7008c793ffd36e7bf682f3071ce817a0b224f1c5a371fb683f1442d68ee9\": rpc error: code = NotFound desc = could not find container \"6f6d7008c793ffd36e7bf682f3071ce817a0b224f1c5a371fb683f1442d68ee9\": container with ID starting with 6f6d7008c793ffd36e7bf682f3071ce817a0b224f1c5a371fb683f1442d68ee9 not found: ID does not exist" Nov 25 15:29:56 crc kubenswrapper[4806]: I1125 15:29:56.368538 4806 scope.go:117] "RemoveContainer" containerID="914085385de1d007599d592bee4b49f3d4714c9b2b6e93f96beff02c7b686ef4" Nov 25 15:29:56 crc kubenswrapper[4806]: E1125 15:29:56.368999 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"914085385de1d007599d592bee4b49f3d4714c9b2b6e93f96beff02c7b686ef4\": container with ID starting with 914085385de1d007599d592bee4b49f3d4714c9b2b6e93f96beff02c7b686ef4 not found: ID does not exist" containerID="914085385de1d007599d592bee4b49f3d4714c9b2b6e93f96beff02c7b686ef4" Nov 25 15:29:56 crc kubenswrapper[4806]: I1125 15:29:56.369041 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"914085385de1d007599d592bee4b49f3d4714c9b2b6e93f96beff02c7b686ef4"} err="failed to get container status \"914085385de1d007599d592bee4b49f3d4714c9b2b6e93f96beff02c7b686ef4\": rpc error: code = NotFound desc = could not find container \"914085385de1d007599d592bee4b49f3d4714c9b2b6e93f96beff02c7b686ef4\": container with ID starting with 914085385de1d007599d592bee4b49f3d4714c9b2b6e93f96beff02c7b686ef4 not found: ID does not exist" Nov 25 15:29:56 crc kubenswrapper[4806]: I1125 15:29:56.369070 4806 scope.go:117] "RemoveContainer" containerID="4c7bc7f204aacb21f8dcf5595e699bedb2215945ee12955e55a0ee30834c555c" Nov 25 15:29:56 crc kubenswrapper[4806]: E1125 15:29:56.369359 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c7bc7f204aacb21f8dcf5595e699bedb2215945ee12955e55a0ee30834c555c\": container with ID starting with 4c7bc7f204aacb21f8dcf5595e699bedb2215945ee12955e55a0ee30834c555c not found: ID does not exist" containerID="4c7bc7f204aacb21f8dcf5595e699bedb2215945ee12955e55a0ee30834c555c" Nov 25 15:29:56 crc kubenswrapper[4806]: I1125 15:29:56.369421 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c7bc7f204aacb21f8dcf5595e699bedb2215945ee12955e55a0ee30834c555c"} err="failed to get container status \"4c7bc7f204aacb21f8dcf5595e699bedb2215945ee12955e55a0ee30834c555c\": rpc error: code = NotFound desc = could not find container \"4c7bc7f204aacb21f8dcf5595e699bedb2215945ee12955e55a0ee30834c555c\": container with ID starting with 4c7bc7f204aacb21f8dcf5595e699bedb2215945ee12955e55a0ee30834c555c not found: ID does not exist" Nov 25 15:29:58 crc kubenswrapper[4806]: I1125 15:29:58.103994 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9b459a5-4fe4-40a8-aff3-31cd5891784b" path="/var/lib/kubelet/pods/f9b459a5-4fe4-40a8-aff3-31cd5891784b/volumes" Nov 25 15:30:00 crc kubenswrapper[4806]: I1125 15:30:00.150839 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401410-w5f9h"] Nov 25 15:30:00 crc kubenswrapper[4806]: E1125 15:30:00.151529 4806 cpu_manager.go:410] "RemoveStaleState: 
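[Editor's note] The "ContainerStatus from runtime service failed ... NotFound" errors above are a benign race: the containers were already removed, so the follow-up status lookups can only confirm they are gone, and the kubelet logs the error and moves on. A hypothetical way such a caller can classify the gRPC error:

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// alreadyRemoved reports whether a runtime-service error means the
// container no longer exists (the NotFound case seen in the log).
func alreadyRemoved(err error) bool {
	return status.Code(err) == codes.NotFound
}

func main() {
	err := status.Error(codes.NotFound, "could not find container")
	if alreadyRemoved(err) {
		fmt.Println("container already gone; nothing left to delete")
	}
}
```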
removing container" podUID="f9b459a5-4fe4-40a8-aff3-31cd5891784b" containerName="extract-content" Nov 25 15:30:00 crc kubenswrapper[4806]: I1125 15:30:00.151543 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9b459a5-4fe4-40a8-aff3-31cd5891784b" containerName="extract-content" Nov 25 15:30:00 crc kubenswrapper[4806]: E1125 15:30:00.151586 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9b459a5-4fe4-40a8-aff3-31cd5891784b" containerName="registry-server" Nov 25 15:30:00 crc kubenswrapper[4806]: I1125 15:30:00.151593 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9b459a5-4fe4-40a8-aff3-31cd5891784b" containerName="registry-server" Nov 25 15:30:00 crc kubenswrapper[4806]: E1125 15:30:00.151610 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9b459a5-4fe4-40a8-aff3-31cd5891784b" containerName="extract-utilities" Nov 25 15:30:00 crc kubenswrapper[4806]: I1125 15:30:00.151627 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9b459a5-4fe4-40a8-aff3-31cd5891784b" containerName="extract-utilities" Nov 25 15:30:00 crc kubenswrapper[4806]: I1125 15:30:00.151833 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9b459a5-4fe4-40a8-aff3-31cd5891784b" containerName="registry-server" Nov 25 15:30:00 crc kubenswrapper[4806]: I1125 15:30:00.152661 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-w5f9h" Nov 25 15:30:00 crc kubenswrapper[4806]: I1125 15:30:00.165161 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 15:30:00 crc kubenswrapper[4806]: I1125 15:30:00.165912 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 15:30:00 crc kubenswrapper[4806]: I1125 15:30:00.167500 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401410-w5f9h"] Nov 25 15:30:00 crc kubenswrapper[4806]: I1125 15:30:00.230807 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0e88f364-f624-4045-98b0-eac1d0afffd0-secret-volume\") pod \"collect-profiles-29401410-w5f9h\" (UID: \"0e88f364-f624-4045-98b0-eac1d0afffd0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-w5f9h" Nov 25 15:30:00 crc kubenswrapper[4806]: I1125 15:30:00.230893 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpvp2\" (UniqueName: \"kubernetes.io/projected/0e88f364-f624-4045-98b0-eac1d0afffd0-kube-api-access-gpvp2\") pod \"collect-profiles-29401410-w5f9h\" (UID: \"0e88f364-f624-4045-98b0-eac1d0afffd0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-w5f9h" Nov 25 15:30:00 crc kubenswrapper[4806]: I1125 15:30:00.231013 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e88f364-f624-4045-98b0-eac1d0afffd0-config-volume\") pod \"collect-profiles-29401410-w5f9h\" (UID: \"0e88f364-f624-4045-98b0-eac1d0afffd0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-w5f9h" Nov 25 15:30:00 crc kubenswrapper[4806]: I1125 15:30:00.332774 4806 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gpvp2\" (UniqueName: \"kubernetes.io/projected/0e88f364-f624-4045-98b0-eac1d0afffd0-kube-api-access-gpvp2\") pod \"collect-profiles-29401410-w5f9h\" (UID: \"0e88f364-f624-4045-98b0-eac1d0afffd0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-w5f9h" Nov 25 15:30:00 crc kubenswrapper[4806]: I1125 15:30:00.332955 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e88f364-f624-4045-98b0-eac1d0afffd0-config-volume\") pod \"collect-profiles-29401410-w5f9h\" (UID: \"0e88f364-f624-4045-98b0-eac1d0afffd0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-w5f9h" Nov 25 15:30:00 crc kubenswrapper[4806]: I1125 15:30:00.333036 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0e88f364-f624-4045-98b0-eac1d0afffd0-secret-volume\") pod \"collect-profiles-29401410-w5f9h\" (UID: \"0e88f364-f624-4045-98b0-eac1d0afffd0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-w5f9h" Nov 25 15:30:00 crc kubenswrapper[4806]: I1125 15:30:00.333911 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e88f364-f624-4045-98b0-eac1d0afffd0-config-volume\") pod \"collect-profiles-29401410-w5f9h\" (UID: \"0e88f364-f624-4045-98b0-eac1d0afffd0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-w5f9h" Nov 25 15:30:00 crc kubenswrapper[4806]: I1125 15:30:00.340119 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0e88f364-f624-4045-98b0-eac1d0afffd0-secret-volume\") pod \"collect-profiles-29401410-w5f9h\" (UID: \"0e88f364-f624-4045-98b0-eac1d0afffd0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-w5f9h" Nov 25 15:30:00 crc kubenswrapper[4806]: I1125 15:30:00.357054 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpvp2\" (UniqueName: \"kubernetes.io/projected/0e88f364-f624-4045-98b0-eac1d0afffd0-kube-api-access-gpvp2\") pod \"collect-profiles-29401410-w5f9h\" (UID: \"0e88f364-f624-4045-98b0-eac1d0afffd0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-w5f9h" Nov 25 15:30:00 crc kubenswrapper[4806]: I1125 15:30:00.483473 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-w5f9h" Nov 25 15:30:00 crc kubenswrapper[4806]: I1125 15:30:00.974299 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401410-w5f9h"] Nov 25 15:30:00 crc kubenswrapper[4806]: W1125 15:30:00.975129 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e88f364_f624_4045_98b0_eac1d0afffd0.slice/crio-5f2ed5b13ecc45fad70c3037b07536df254e8d000949b723828814ee46be1ae1 WatchSource:0}: Error finding container 5f2ed5b13ecc45fad70c3037b07536df254e8d000949b723828814ee46be1ae1: Status 404 returned error can't find the container with id 5f2ed5b13ecc45fad70c3037b07536df254e8d000949b723828814ee46be1ae1 Nov 25 15:30:01 crc kubenswrapper[4806]: I1125 15:30:01.319535 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-w5f9h" event={"ID":"0e88f364-f624-4045-98b0-eac1d0afffd0","Type":"ContainerStarted","Data":"9b028f66d209da8307eacc36832ebcb6287fa6f79c463acf0b622be626b5cd67"} Nov 25 15:30:01 crc kubenswrapper[4806]: I1125 15:30:01.319595 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-w5f9h" event={"ID":"0e88f364-f624-4045-98b0-eac1d0afffd0","Type":"ContainerStarted","Data":"5f2ed5b13ecc45fad70c3037b07536df254e8d000949b723828814ee46be1ae1"} Nov 25 15:30:02 crc kubenswrapper[4806]: I1125 15:30:02.330936 4806 generic.go:334] "Generic (PLEG): container finished" podID="0e88f364-f624-4045-98b0-eac1d0afffd0" containerID="9b028f66d209da8307eacc36832ebcb6287fa6f79c463acf0b622be626b5cd67" exitCode=0 Nov 25 15:30:02 crc kubenswrapper[4806]: I1125 15:30:02.330992 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-w5f9h" event={"ID":"0e88f364-f624-4045-98b0-eac1d0afffd0","Type":"ContainerDied","Data":"9b028f66d209da8307eacc36832ebcb6287fa6f79c463acf0b622be626b5cd67"} Nov 25 15:30:03 crc kubenswrapper[4806]: I1125 15:30:03.759928 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-w5f9h" Nov 25 15:30:03 crc kubenswrapper[4806]: I1125 15:30:03.804818 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e88f364-f624-4045-98b0-eac1d0afffd0-config-volume\") pod \"0e88f364-f624-4045-98b0-eac1d0afffd0\" (UID: \"0e88f364-f624-4045-98b0-eac1d0afffd0\") " Nov 25 15:30:03 crc kubenswrapper[4806]: I1125 15:30:03.805400 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0e88f364-f624-4045-98b0-eac1d0afffd0-secret-volume\") pod \"0e88f364-f624-4045-98b0-eac1d0afffd0\" (UID: \"0e88f364-f624-4045-98b0-eac1d0afffd0\") " Nov 25 15:30:03 crc kubenswrapper[4806]: I1125 15:30:03.805596 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpvp2\" (UniqueName: \"kubernetes.io/projected/0e88f364-f624-4045-98b0-eac1d0afffd0-kube-api-access-gpvp2\") pod \"0e88f364-f624-4045-98b0-eac1d0afffd0\" (UID: \"0e88f364-f624-4045-98b0-eac1d0afffd0\") " Nov 25 15:30:03 crc kubenswrapper[4806]: I1125 15:30:03.805988 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e88f364-f624-4045-98b0-eac1d0afffd0-config-volume" (OuterVolumeSpecName: "config-volume") pod "0e88f364-f624-4045-98b0-eac1d0afffd0" (UID: "0e88f364-f624-4045-98b0-eac1d0afffd0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:30:03 crc kubenswrapper[4806]: I1125 15:30:03.806335 4806 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e88f364-f624-4045-98b0-eac1d0afffd0-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 15:30:03 crc kubenswrapper[4806]: I1125 15:30:03.812170 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e88f364-f624-4045-98b0-eac1d0afffd0-kube-api-access-gpvp2" (OuterVolumeSpecName: "kube-api-access-gpvp2") pod "0e88f364-f624-4045-98b0-eac1d0afffd0" (UID: "0e88f364-f624-4045-98b0-eac1d0afffd0"). InnerVolumeSpecName "kube-api-access-gpvp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:30:03 crc kubenswrapper[4806]: I1125 15:30:03.812292 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e88f364-f624-4045-98b0-eac1d0afffd0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0e88f364-f624-4045-98b0-eac1d0afffd0" (UID: "0e88f364-f624-4045-98b0-eac1d0afffd0"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:30:03 crc kubenswrapper[4806]: I1125 15:30:03.908052 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpvp2\" (UniqueName: \"kubernetes.io/projected/0e88f364-f624-4045-98b0-eac1d0afffd0-kube-api-access-gpvp2\") on node \"crc\" DevicePath \"\"" Nov 25 15:30:03 crc kubenswrapper[4806]: I1125 15:30:03.908094 4806 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0e88f364-f624-4045-98b0-eac1d0afffd0-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 15:30:04 crc kubenswrapper[4806]: I1125 15:30:04.350101 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-w5f9h" event={"ID":"0e88f364-f624-4045-98b0-eac1d0afffd0","Type":"ContainerDied","Data":"5f2ed5b13ecc45fad70c3037b07536df254e8d000949b723828814ee46be1ae1"} Nov 25 15:30:04 crc kubenswrapper[4806]: I1125 15:30:04.350149 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f2ed5b13ecc45fad70c3037b07536df254e8d000949b723828814ee46be1ae1" Nov 25 15:30:04 crc kubenswrapper[4806]: I1125 15:30:04.350447 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-w5f9h" Nov 25 15:30:04 crc kubenswrapper[4806]: I1125 15:30:04.835582 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401365-h6lh4"] Nov 25 15:30:04 crc kubenswrapper[4806]: I1125 15:30:04.843739 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401365-h6lh4"] Nov 25 15:30:05 crc kubenswrapper[4806]: I1125 15:30:05.368148 4806 generic.go:334] "Generic (PLEG): container finished" podID="5874b1c9-f997-4c96-b5a4-b012416932ba" containerID="6aba80275936e38c6bb83439d5e11d5ce07ab74cb138121903929327ba8fc0d4" exitCode=0 Nov 25 15:30:05 crc kubenswrapper[4806]: I1125 15:30:05.368201 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dt7mk" event={"ID":"5874b1c9-f997-4c96-b5a4-b012416932ba","Type":"ContainerDied","Data":"6aba80275936e38c6bb83439d5e11d5ce07ab74cb138121903929327ba8fc0d4"} Nov 25 15:30:06 crc kubenswrapper[4806]: I1125 15:30:06.104478 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eeac792f-d07c-446b-8dee-00f726ea273c" path="/var/lib/kubelet/pods/eeac792f-d07c-446b-8dee-00f726ea273c/volumes" Nov 25 15:30:07 crc kubenswrapper[4806]: I1125 15:30:07.008864 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dt7mk" Nov 25 15:30:07 crc kubenswrapper[4806]: I1125 15:30:07.070162 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whdnw\" (UniqueName: \"kubernetes.io/projected/5874b1c9-f997-4c96-b5a4-b012416932ba-kube-api-access-whdnw\") pod \"5874b1c9-f997-4c96-b5a4-b012416932ba\" (UID: \"5874b1c9-f997-4c96-b5a4-b012416932ba\") " Nov 25 15:30:07 crc kubenswrapper[4806]: I1125 15:30:07.070402 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5874b1c9-f997-4c96-b5a4-b012416932ba-ssh-key\") pod \"5874b1c9-f997-4c96-b5a4-b012416932ba\" (UID: \"5874b1c9-f997-4c96-b5a4-b012416932ba\") " Nov 25 15:30:07 crc kubenswrapper[4806]: I1125 15:30:07.070443 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5874b1c9-f997-4c96-b5a4-b012416932ba-inventory\") pod \"5874b1c9-f997-4c96-b5a4-b012416932ba\" (UID: \"5874b1c9-f997-4c96-b5a4-b012416932ba\") " Nov 25 15:30:07 crc kubenswrapper[4806]: I1125 15:30:07.080580 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5874b1c9-f997-4c96-b5a4-b012416932ba-kube-api-access-whdnw" (OuterVolumeSpecName: "kube-api-access-whdnw") pod "5874b1c9-f997-4c96-b5a4-b012416932ba" (UID: "5874b1c9-f997-4c96-b5a4-b012416932ba"). InnerVolumeSpecName "kube-api-access-whdnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:30:07 crc kubenswrapper[4806]: I1125 15:30:07.105621 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5874b1c9-f997-4c96-b5a4-b012416932ba-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5874b1c9-f997-4c96-b5a4-b012416932ba" (UID: "5874b1c9-f997-4c96-b5a4-b012416932ba"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:30:07 crc kubenswrapper[4806]: I1125 15:30:07.109057 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5874b1c9-f997-4c96-b5a4-b012416932ba-inventory" (OuterVolumeSpecName: "inventory") pod "5874b1c9-f997-4c96-b5a4-b012416932ba" (UID: "5874b1c9-f997-4c96-b5a4-b012416932ba"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:30:07 crc kubenswrapper[4806]: I1125 15:30:07.173823 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whdnw\" (UniqueName: \"kubernetes.io/projected/5874b1c9-f997-4c96-b5a4-b012416932ba-kube-api-access-whdnw\") on node \"crc\" DevicePath \"\"" Nov 25 15:30:07 crc kubenswrapper[4806]: I1125 15:30:07.173871 4806 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5874b1c9-f997-4c96-b5a4-b012416932ba-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 15:30:07 crc kubenswrapper[4806]: I1125 15:30:07.173880 4806 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5874b1c9-f997-4c96-b5a4-b012416932ba-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 15:30:07 crc kubenswrapper[4806]: I1125 15:30:07.402384 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dt7mk" event={"ID":"5874b1c9-f997-4c96-b5a4-b012416932ba","Type":"ContainerDied","Data":"8beb9c1a451969a34165985eb90b71efe638c408faeac774cea9454f7c4f9e5c"} Nov 25 15:30:07 crc kubenswrapper[4806]: I1125 15:30:07.402631 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-dt7mk" Nov 25 15:30:07 crc kubenswrapper[4806]: I1125 15:30:07.402709 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8beb9c1a451969a34165985eb90b71efe638c408faeac774cea9454f7c4f9e5c" Nov 25 15:30:07 crc kubenswrapper[4806]: I1125 15:30:07.499178 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kb9n4"] Nov 25 15:30:07 crc kubenswrapper[4806]: E1125 15:30:07.499681 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5874b1c9-f997-4c96-b5a4-b012416932ba" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 25 15:30:07 crc kubenswrapper[4806]: I1125 15:30:07.499709 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="5874b1c9-f997-4c96-b5a4-b012416932ba" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 25 15:30:07 crc kubenswrapper[4806]: E1125 15:30:07.499729 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e88f364-f624-4045-98b0-eac1d0afffd0" containerName="collect-profiles" Nov 25 15:30:07 crc kubenswrapper[4806]: I1125 15:30:07.499736 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e88f364-f624-4045-98b0-eac1d0afffd0" containerName="collect-profiles" Nov 25 15:30:07 crc kubenswrapper[4806]: I1125 15:30:07.499934 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="5874b1c9-f997-4c96-b5a4-b012416932ba" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 25 15:30:07 crc kubenswrapper[4806]: I1125 15:30:07.499974 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e88f364-f624-4045-98b0-eac1d0afffd0" containerName="collect-profiles" Nov 25 15:30:07 crc kubenswrapper[4806]: I1125 15:30:07.501000 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kb9n4" Nov 25 15:30:07 crc kubenswrapper[4806]: I1125 15:30:07.508528 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 15:30:07 crc kubenswrapper[4806]: I1125 15:30:07.508836 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 15:30:07 crc kubenswrapper[4806]: I1125 15:30:07.509223 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r8q8k" Nov 25 15:30:07 crc kubenswrapper[4806]: I1125 15:30:07.509416 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 15:30:07 crc kubenswrapper[4806]: I1125 15:30:07.512992 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kb9n4"] Nov 25 15:30:07 crc kubenswrapper[4806]: I1125 15:30:07.581000 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c0f0294-9956-4bf5-a1c3-2f7010c70008-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kb9n4\" (UID: \"9c0f0294-9956-4bf5-a1c3-2f7010c70008\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kb9n4" Nov 25 15:30:07 crc kubenswrapper[4806]: I1125 15:30:07.581112 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9c0f0294-9956-4bf5-a1c3-2f7010c70008-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kb9n4\" (UID: \"9c0f0294-9956-4bf5-a1c3-2f7010c70008\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kb9n4" Nov 25 15:30:07 crc kubenswrapper[4806]: I1125 15:30:07.581189 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9f7j\" (UniqueName: \"kubernetes.io/projected/9c0f0294-9956-4bf5-a1c3-2f7010c70008-kube-api-access-h9f7j\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kb9n4\" (UID: \"9c0f0294-9956-4bf5-a1c3-2f7010c70008\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kb9n4" Nov 25 15:30:07 crc kubenswrapper[4806]: I1125 15:30:07.683225 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9c0f0294-9956-4bf5-a1c3-2f7010c70008-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kb9n4\" (UID: \"9c0f0294-9956-4bf5-a1c3-2f7010c70008\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kb9n4" Nov 25 15:30:07 crc kubenswrapper[4806]: I1125 15:30:07.683391 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9f7j\" (UniqueName: \"kubernetes.io/projected/9c0f0294-9956-4bf5-a1c3-2f7010c70008-kube-api-access-h9f7j\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kb9n4\" (UID: \"9c0f0294-9956-4bf5-a1c3-2f7010c70008\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kb9n4" Nov 25 15:30:07 crc kubenswrapper[4806]: I1125 15:30:07.683503 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c0f0294-9956-4bf5-a1c3-2f7010c70008-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kb9n4\" 
(UID: \"9c0f0294-9956-4bf5-a1c3-2f7010c70008\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kb9n4" Nov 25 15:30:07 crc kubenswrapper[4806]: I1125 15:30:07.687405 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c0f0294-9956-4bf5-a1c3-2f7010c70008-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kb9n4\" (UID: \"9c0f0294-9956-4bf5-a1c3-2f7010c70008\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kb9n4" Nov 25 15:30:07 crc kubenswrapper[4806]: I1125 15:30:07.688168 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9c0f0294-9956-4bf5-a1c3-2f7010c70008-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kb9n4\" (UID: \"9c0f0294-9956-4bf5-a1c3-2f7010c70008\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kb9n4" Nov 25 15:30:07 crc kubenswrapper[4806]: I1125 15:30:07.710209 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9f7j\" (UniqueName: \"kubernetes.io/projected/9c0f0294-9956-4bf5-a1c3-2f7010c70008-kube-api-access-h9f7j\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-kb9n4\" (UID: \"9c0f0294-9956-4bf5-a1c3-2f7010c70008\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kb9n4" Nov 25 15:30:07 crc kubenswrapper[4806]: I1125 15:30:07.820727 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kb9n4" Nov 25 15:30:08 crc kubenswrapper[4806]: W1125 15:30:08.392529 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c0f0294_9956_4bf5_a1c3_2f7010c70008.slice/crio-8fdbfdf47e22f97caed706ee0a61d11770afe7dbfe2752a68498d2b7c354cb24 WatchSource:0}: Error finding container 8fdbfdf47e22f97caed706ee0a61d11770afe7dbfe2752a68498d2b7c354cb24: Status 404 returned error can't find the container with id 8fdbfdf47e22f97caed706ee0a61d11770afe7dbfe2752a68498d2b7c354cb24 Nov 25 15:30:08 crc kubenswrapper[4806]: I1125 15:30:08.400034 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kb9n4"] Nov 25 15:30:08 crc kubenswrapper[4806]: I1125 15:30:08.412213 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kb9n4" event={"ID":"9c0f0294-9956-4bf5-a1c3-2f7010c70008","Type":"ContainerStarted","Data":"8fdbfdf47e22f97caed706ee0a61d11770afe7dbfe2752a68498d2b7c354cb24"} Nov 25 15:30:10 crc kubenswrapper[4806]: I1125 15:30:10.439079 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kb9n4" event={"ID":"9c0f0294-9956-4bf5-a1c3-2f7010c70008","Type":"ContainerStarted","Data":"441773b05d6d8fcc93758264b023d9a9a801642316b75c2f274ea41e4f3ffd3f"} Nov 25 15:30:10 crc kubenswrapper[4806]: I1125 15:30:10.465999 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kb9n4" podStartSLOduration=2.627120279 podStartE2EDuration="3.465974885s" podCreationTimestamp="2025-11-25 15:30:07 +0000 UTC" firstStartedPulling="2025-11-25 15:30:08.39452114 +0000 UTC m=+2241.046663561" lastFinishedPulling="2025-11-25 15:30:09.233375726 +0000 UTC m=+2241.885518167" observedRunningTime="2025-11-25 
15:30:10.461917749 +0000 UTC m=+2243.114060180" watchObservedRunningTime="2025-11-25 15:30:10.465974885 +0000 UTC m=+2243.118117306" Nov 25 15:30:19 crc kubenswrapper[4806]: I1125 15:30:19.117126 4806 scope.go:117] "RemoveContainer" containerID="634d1250bfff81468d7902be16ec50a49c8d117c5155faaca6bf158cdb440fdf" Nov 25 15:30:26 crc kubenswrapper[4806]: I1125 15:30:26.046412 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-sync-x7mzr"] Nov 25 15:30:26 crc kubenswrapper[4806]: I1125 15:30:26.063181 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-sync-x7mzr"] Nov 25 15:30:26 crc kubenswrapper[4806]: I1125 15:30:26.100068 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c180594-82cd-4e18-932d-c5427040362c" path="/var/lib/kubelet/pods/8c180594-82cd-4e18-932d-c5427040362c/volumes" Nov 25 15:30:32 crc kubenswrapper[4806]: I1125 15:30:32.045714 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-storageinit-l59xr"] Nov 25 15:30:32 crc kubenswrapper[4806]: I1125 15:30:32.062496 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-storageinit-l59xr"] Nov 25 15:30:32 crc kubenswrapper[4806]: I1125 15:30:32.107307 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa2b367f-df6a-4648-9ed2-e3d1d4a72493" path="/var/lib/kubelet/pods/fa2b367f-df6a-4648-9ed2-e3d1d4a72493/volumes" Nov 25 15:30:48 crc kubenswrapper[4806]: I1125 15:30:48.632864 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-q8tqp"] Nov 25 15:30:48 crc kubenswrapper[4806]: I1125 15:30:48.636413 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q8tqp" Nov 25 15:30:48 crc kubenswrapper[4806]: I1125 15:30:48.660683 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q8tqp"] Nov 25 15:30:48 crc kubenswrapper[4806]: I1125 15:30:48.688578 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/684c8b96-816f-4116-a5f9-7be11e0b9915-utilities\") pod \"certified-operators-q8tqp\" (UID: \"684c8b96-816f-4116-a5f9-7be11e0b9915\") " pod="openshift-marketplace/certified-operators-q8tqp" Nov 25 15:30:48 crc kubenswrapper[4806]: I1125 15:30:48.688745 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hljs5\" (UniqueName: \"kubernetes.io/projected/684c8b96-816f-4116-a5f9-7be11e0b9915-kube-api-access-hljs5\") pod \"certified-operators-q8tqp\" (UID: \"684c8b96-816f-4116-a5f9-7be11e0b9915\") " pod="openshift-marketplace/certified-operators-q8tqp" Nov 25 15:30:48 crc kubenswrapper[4806]: I1125 15:30:48.688800 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/684c8b96-816f-4116-a5f9-7be11e0b9915-catalog-content\") pod \"certified-operators-q8tqp\" (UID: \"684c8b96-816f-4116-a5f9-7be11e0b9915\") " pod="openshift-marketplace/certified-operators-q8tqp" Nov 25 15:30:48 crc kubenswrapper[4806]: I1125 15:30:48.790294 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/684c8b96-816f-4116-a5f9-7be11e0b9915-catalog-content\") pod \"certified-operators-q8tqp\" (UID: 
\"684c8b96-816f-4116-a5f9-7be11e0b9915\") " pod="openshift-marketplace/certified-operators-q8tqp" Nov 25 15:30:48 crc kubenswrapper[4806]: I1125 15:30:48.790506 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/684c8b96-816f-4116-a5f9-7be11e0b9915-utilities\") pod \"certified-operators-q8tqp\" (UID: \"684c8b96-816f-4116-a5f9-7be11e0b9915\") " pod="openshift-marketplace/certified-operators-q8tqp" Nov 25 15:30:48 crc kubenswrapper[4806]: I1125 15:30:48.790631 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hljs5\" (UniqueName: \"kubernetes.io/projected/684c8b96-816f-4116-a5f9-7be11e0b9915-kube-api-access-hljs5\") pod \"certified-operators-q8tqp\" (UID: \"684c8b96-816f-4116-a5f9-7be11e0b9915\") " pod="openshift-marketplace/certified-operators-q8tqp" Nov 25 15:30:48 crc kubenswrapper[4806]: I1125 15:30:48.790878 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/684c8b96-816f-4116-a5f9-7be11e0b9915-catalog-content\") pod \"certified-operators-q8tqp\" (UID: \"684c8b96-816f-4116-a5f9-7be11e0b9915\") " pod="openshift-marketplace/certified-operators-q8tqp" Nov 25 15:30:48 crc kubenswrapper[4806]: I1125 15:30:48.790920 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/684c8b96-816f-4116-a5f9-7be11e0b9915-utilities\") pod \"certified-operators-q8tqp\" (UID: \"684c8b96-816f-4116-a5f9-7be11e0b9915\") " pod="openshift-marketplace/certified-operators-q8tqp" Nov 25 15:30:48 crc kubenswrapper[4806]: I1125 15:30:48.809040 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hljs5\" (UniqueName: \"kubernetes.io/projected/684c8b96-816f-4116-a5f9-7be11e0b9915-kube-api-access-hljs5\") pod \"certified-operators-q8tqp\" (UID: \"684c8b96-816f-4116-a5f9-7be11e0b9915\") " pod="openshift-marketplace/certified-operators-q8tqp" Nov 25 15:30:49 crc kubenswrapper[4806]: I1125 15:30:49.012700 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-q8tqp" Nov 25 15:30:49 crc kubenswrapper[4806]: I1125 15:30:49.519013 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q8tqp"] Nov 25 15:30:49 crc kubenswrapper[4806]: W1125 15:30:49.545655 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod684c8b96_816f_4116_a5f9_7be11e0b9915.slice/crio-66bdce0d682e1c256879a5bbaea4a90a4bb57f3ac72e6b6e1f1cf91b3d0e97d7 WatchSource:0}: Error finding container 66bdce0d682e1c256879a5bbaea4a90a4bb57f3ac72e6b6e1f1cf91b3d0e97d7: Status 404 returned error can't find the container with id 66bdce0d682e1c256879a5bbaea4a90a4bb57f3ac72e6b6e1f1cf91b3d0e97d7 Nov 25 15:30:49 crc kubenswrapper[4806]: I1125 15:30:49.884608 4806 generic.go:334] "Generic (PLEG): container finished" podID="684c8b96-816f-4116-a5f9-7be11e0b9915" containerID="a2b9431ad1d09b96cdae3229093435754da96e41a66d2226e0cff1ca40c15b82" exitCode=0 Nov 25 15:30:49 crc kubenswrapper[4806]: I1125 15:30:49.884861 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q8tqp" event={"ID":"684c8b96-816f-4116-a5f9-7be11e0b9915","Type":"ContainerDied","Data":"a2b9431ad1d09b96cdae3229093435754da96e41a66d2226e0cff1ca40c15b82"} Nov 25 15:30:49 crc kubenswrapper[4806]: I1125 15:30:49.884945 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q8tqp" event={"ID":"684c8b96-816f-4116-a5f9-7be11e0b9915","Type":"ContainerStarted","Data":"66bdce0d682e1c256879a5bbaea4a90a4bb57f3ac72e6b6e1f1cf91b3d0e97d7"} Nov 25 15:30:51 crc kubenswrapper[4806]: I1125 15:30:51.941108 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q8tqp" event={"ID":"684c8b96-816f-4116-a5f9-7be11e0b9915","Type":"ContainerStarted","Data":"ee0ff5a8e165a44606dd101cd7cc522810f7f992e7d283c6933b378cebaf727c"} Nov 25 15:30:52 crc kubenswrapper[4806]: I1125 15:30:52.952513 4806 generic.go:334] "Generic (PLEG): container finished" podID="684c8b96-816f-4116-a5f9-7be11e0b9915" containerID="ee0ff5a8e165a44606dd101cd7cc522810f7f992e7d283c6933b378cebaf727c" exitCode=0 Nov 25 15:30:52 crc kubenswrapper[4806]: I1125 15:30:52.952600 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q8tqp" event={"ID":"684c8b96-816f-4116-a5f9-7be11e0b9915","Type":"ContainerDied","Data":"ee0ff5a8e165a44606dd101cd7cc522810f7f992e7d283c6933b378cebaf727c"} Nov 25 15:30:53 crc kubenswrapper[4806]: I1125 15:30:53.963448 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q8tqp" event={"ID":"684c8b96-816f-4116-a5f9-7be11e0b9915","Type":"ContainerStarted","Data":"aa20378377eb7bcbf83ce06a5ac92efc53e0206f69231ebcd36938dc400ac63a"} Nov 25 15:30:53 crc kubenswrapper[4806]: I1125 15:30:53.991904 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-q8tqp" podStartSLOduration=2.464514166 podStartE2EDuration="5.99187602s" podCreationTimestamp="2025-11-25 15:30:48 +0000 UTC" firstStartedPulling="2025-11-25 15:30:49.887211144 +0000 UTC m=+2282.539353565" lastFinishedPulling="2025-11-25 15:30:53.414573008 +0000 UTC m=+2286.066715419" observedRunningTime="2025-11-25 15:30:53.982401468 +0000 UTC m=+2286.634543879" watchObservedRunningTime="2025-11-25 15:30:53.99187602 +0000 UTC m=+2286.644018441" Nov 
25 15:30:55 crc kubenswrapper[4806]: I1125 15:30:55.604637 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x44z7"] Nov 25 15:30:55 crc kubenswrapper[4806]: I1125 15:30:55.607568 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x44z7" Nov 25 15:30:55 crc kubenswrapper[4806]: I1125 15:30:55.626177 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x44z7"] Nov 25 15:30:55 crc kubenswrapper[4806]: I1125 15:30:55.652280 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aab8bc77-d4ee-431c-986b-768bf3c5e139-catalog-content\") pod \"community-operators-x44z7\" (UID: \"aab8bc77-d4ee-431c-986b-768bf3c5e139\") " pod="openshift-marketplace/community-operators-x44z7" Nov 25 15:30:55 crc kubenswrapper[4806]: I1125 15:30:55.652438 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aab8bc77-d4ee-431c-986b-768bf3c5e139-utilities\") pod \"community-operators-x44z7\" (UID: \"aab8bc77-d4ee-431c-986b-768bf3c5e139\") " pod="openshift-marketplace/community-operators-x44z7" Nov 25 15:30:55 crc kubenswrapper[4806]: I1125 15:30:55.652536 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99zs8\" (UniqueName: \"kubernetes.io/projected/aab8bc77-d4ee-431c-986b-768bf3c5e139-kube-api-access-99zs8\") pod \"community-operators-x44z7\" (UID: \"aab8bc77-d4ee-431c-986b-768bf3c5e139\") " pod="openshift-marketplace/community-operators-x44z7" Nov 25 15:30:55 crc kubenswrapper[4806]: I1125 15:30:55.754761 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aab8bc77-d4ee-431c-986b-768bf3c5e139-utilities\") pod \"community-operators-x44z7\" (UID: \"aab8bc77-d4ee-431c-986b-768bf3c5e139\") " pod="openshift-marketplace/community-operators-x44z7" Nov 25 15:30:55 crc kubenswrapper[4806]: I1125 15:30:55.754893 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99zs8\" (UniqueName: \"kubernetes.io/projected/aab8bc77-d4ee-431c-986b-768bf3c5e139-kube-api-access-99zs8\") pod \"community-operators-x44z7\" (UID: \"aab8bc77-d4ee-431c-986b-768bf3c5e139\") " pod="openshift-marketplace/community-operators-x44z7" Nov 25 15:30:55 crc kubenswrapper[4806]: I1125 15:30:55.754946 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aab8bc77-d4ee-431c-986b-768bf3c5e139-catalog-content\") pod \"community-operators-x44z7\" (UID: \"aab8bc77-d4ee-431c-986b-768bf3c5e139\") " pod="openshift-marketplace/community-operators-x44z7" Nov 25 15:30:55 crc kubenswrapper[4806]: I1125 15:30:55.755498 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aab8bc77-d4ee-431c-986b-768bf3c5e139-catalog-content\") pod \"community-operators-x44z7\" (UID: \"aab8bc77-d4ee-431c-986b-768bf3c5e139\") " pod="openshift-marketplace/community-operators-x44z7" Nov 25 15:30:55 crc kubenswrapper[4806]: I1125 15:30:55.755714 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/aab8bc77-d4ee-431c-986b-768bf3c5e139-utilities\") pod \"community-operators-x44z7\" (UID: \"aab8bc77-d4ee-431c-986b-768bf3c5e139\") " pod="openshift-marketplace/community-operators-x44z7" Nov 25 15:30:55 crc kubenswrapper[4806]: I1125 15:30:55.782635 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99zs8\" (UniqueName: \"kubernetes.io/projected/aab8bc77-d4ee-431c-986b-768bf3c5e139-kube-api-access-99zs8\") pod \"community-operators-x44z7\" (UID: \"aab8bc77-d4ee-431c-986b-768bf3c5e139\") " pod="openshift-marketplace/community-operators-x44z7" Nov 25 15:30:55 crc kubenswrapper[4806]: I1125 15:30:55.932879 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x44z7" Nov 25 15:30:56 crc kubenswrapper[4806]: I1125 15:30:56.437063 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x44z7"] Nov 25 15:30:57 crc kubenswrapper[4806]: I1125 15:30:57.015516 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x44z7" event={"ID":"aab8bc77-d4ee-431c-986b-768bf3c5e139","Type":"ContainerStarted","Data":"9009a6579bbe2ff7817318e9056045f5a56779a5515f8a673276305dcb125289"} Nov 25 15:30:58 crc kubenswrapper[4806]: I1125 15:30:58.024879 4806 generic.go:334] "Generic (PLEG): container finished" podID="aab8bc77-d4ee-431c-986b-768bf3c5e139" containerID="006b66ccb001aceb3a70f4b1b01f419023e8ad5359473d981fbda13b8e70bf33" exitCode=0 Nov 25 15:30:58 crc kubenswrapper[4806]: I1125 15:30:58.024967 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x44z7" event={"ID":"aab8bc77-d4ee-431c-986b-768bf3c5e139","Type":"ContainerDied","Data":"006b66ccb001aceb3a70f4b1b01f419023e8ad5359473d981fbda13b8e70bf33"} Nov 25 15:30:59 crc kubenswrapper[4806]: I1125 15:30:59.013554 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-q8tqp" Nov 25 15:30:59 crc kubenswrapper[4806]: I1125 15:30:59.013951 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-q8tqp" Nov 25 15:30:59 crc kubenswrapper[4806]: I1125 15:30:59.065212 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-q8tqp" Nov 25 15:30:59 crc kubenswrapper[4806]: I1125 15:30:59.121641 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-q8tqp" Nov 25 15:31:00 crc kubenswrapper[4806]: I1125 15:31:00.203983 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q8tqp"] Nov 25 15:31:01 crc kubenswrapper[4806]: I1125 15:31:01.059468 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-q8tqp" podUID="684c8b96-816f-4116-a5f9-7be11e0b9915" containerName="registry-server" containerID="cri-o://aa20378377eb7bcbf83ce06a5ac92efc53e0206f69231ebcd36938dc400ac63a" gracePeriod=2 Nov 25 15:31:02 crc kubenswrapper[4806]: I1125 15:31:02.074220 4806 generic.go:334] "Generic (PLEG): container finished" podID="684c8b96-816f-4116-a5f9-7be11e0b9915" containerID="aa20378377eb7bcbf83ce06a5ac92efc53e0206f69231ebcd36938dc400ac63a" exitCode=0 Nov 25 15:31:02 crc kubenswrapper[4806]: I1125 15:31:02.074281 4806 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-q8tqp" event={"ID":"684c8b96-816f-4116-a5f9-7be11e0b9915","Type":"ContainerDied","Data":"aa20378377eb7bcbf83ce06a5ac92efc53e0206f69231ebcd36938dc400ac63a"} Nov 25 15:31:04 crc kubenswrapper[4806]: I1125 15:31:04.102029 4806 generic.go:334] "Generic (PLEG): container finished" podID="9c0f0294-9956-4bf5-a1c3-2f7010c70008" containerID="441773b05d6d8fcc93758264b023d9a9a801642316b75c2f274ea41e4f3ffd3f" exitCode=0 Nov 25 15:31:04 crc kubenswrapper[4806]: I1125 15:31:04.102120 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kb9n4" event={"ID":"9c0f0294-9956-4bf5-a1c3-2f7010c70008","Type":"ContainerDied","Data":"441773b05d6d8fcc93758264b023d9a9a801642316b75c2f274ea41e4f3ffd3f"} Nov 25 15:31:04 crc kubenswrapper[4806]: I1125 15:31:04.744301 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q8tqp" Nov 25 15:31:04 crc kubenswrapper[4806]: I1125 15:31:04.878295 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/684c8b96-816f-4116-a5f9-7be11e0b9915-utilities\") pod \"684c8b96-816f-4116-a5f9-7be11e0b9915\" (UID: \"684c8b96-816f-4116-a5f9-7be11e0b9915\") " Nov 25 15:31:04 crc kubenswrapper[4806]: I1125 15:31:04.878366 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/684c8b96-816f-4116-a5f9-7be11e0b9915-catalog-content\") pod \"684c8b96-816f-4116-a5f9-7be11e0b9915\" (UID: \"684c8b96-816f-4116-a5f9-7be11e0b9915\") " Nov 25 15:31:04 crc kubenswrapper[4806]: I1125 15:31:04.878461 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hljs5\" (UniqueName: \"kubernetes.io/projected/684c8b96-816f-4116-a5f9-7be11e0b9915-kube-api-access-hljs5\") pod \"684c8b96-816f-4116-a5f9-7be11e0b9915\" (UID: \"684c8b96-816f-4116-a5f9-7be11e0b9915\") " Nov 25 15:31:04 crc kubenswrapper[4806]: I1125 15:31:04.878943 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/684c8b96-816f-4116-a5f9-7be11e0b9915-utilities" (OuterVolumeSpecName: "utilities") pod "684c8b96-816f-4116-a5f9-7be11e0b9915" (UID: "684c8b96-816f-4116-a5f9-7be11e0b9915"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:31:04 crc kubenswrapper[4806]: I1125 15:31:04.879133 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/684c8b96-816f-4116-a5f9-7be11e0b9915-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 15:31:04 crc kubenswrapper[4806]: I1125 15:31:04.888331 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/684c8b96-816f-4116-a5f9-7be11e0b9915-kube-api-access-hljs5" (OuterVolumeSpecName: "kube-api-access-hljs5") pod "684c8b96-816f-4116-a5f9-7be11e0b9915" (UID: "684c8b96-816f-4116-a5f9-7be11e0b9915"). InnerVolumeSpecName "kube-api-access-hljs5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:31:04 crc kubenswrapper[4806]: I1125 15:31:04.920471 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/684c8b96-816f-4116-a5f9-7be11e0b9915-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "684c8b96-816f-4116-a5f9-7be11e0b9915" (UID: "684c8b96-816f-4116-a5f9-7be11e0b9915"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:31:04 crc kubenswrapper[4806]: I1125 15:31:04.980798 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/684c8b96-816f-4116-a5f9-7be11e0b9915-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 15:31:04 crc kubenswrapper[4806]: I1125 15:31:04.980836 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hljs5\" (UniqueName: \"kubernetes.io/projected/684c8b96-816f-4116-a5f9-7be11e0b9915-kube-api-access-hljs5\") on node \"crc\" DevicePath \"\"" Nov 25 15:31:05 crc kubenswrapper[4806]: I1125 15:31:05.116550 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q8tqp" event={"ID":"684c8b96-816f-4116-a5f9-7be11e0b9915","Type":"ContainerDied","Data":"66bdce0d682e1c256879a5bbaea4a90a4bb57f3ac72e6b6e1f1cf91b3d0e97d7"} Nov 25 15:31:05 crc kubenswrapper[4806]: I1125 15:31:05.116613 4806 scope.go:117] "RemoveContainer" containerID="aa20378377eb7bcbf83ce06a5ac92efc53e0206f69231ebcd36938dc400ac63a" Nov 25 15:31:05 crc kubenswrapper[4806]: I1125 15:31:05.116634 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q8tqp" Nov 25 15:31:05 crc kubenswrapper[4806]: I1125 15:31:05.181529 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q8tqp"] Nov 25 15:31:05 crc kubenswrapper[4806]: I1125 15:31:05.184563 4806 scope.go:117] "RemoveContainer" containerID="ee0ff5a8e165a44606dd101cd7cc522810f7f992e7d283c6933b378cebaf727c" Nov 25 15:31:05 crc kubenswrapper[4806]: I1125 15:31:05.191979 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-q8tqp"] Nov 25 15:31:05 crc kubenswrapper[4806]: I1125 15:31:05.240593 4806 scope.go:117] "RemoveContainer" containerID="a2b9431ad1d09b96cdae3229093435754da96e41a66d2226e0cff1ca40c15b82" Nov 25 15:31:05 crc kubenswrapper[4806]: I1125 15:31:05.711077 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kb9n4" Nov 25 15:31:05 crc kubenswrapper[4806]: I1125 15:31:05.799655 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9f7j\" (UniqueName: \"kubernetes.io/projected/9c0f0294-9956-4bf5-a1c3-2f7010c70008-kube-api-access-h9f7j\") pod \"9c0f0294-9956-4bf5-a1c3-2f7010c70008\" (UID: \"9c0f0294-9956-4bf5-a1c3-2f7010c70008\") " Nov 25 15:31:05 crc kubenswrapper[4806]: I1125 15:31:05.800123 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9c0f0294-9956-4bf5-a1c3-2f7010c70008-ssh-key\") pod \"9c0f0294-9956-4bf5-a1c3-2f7010c70008\" (UID: \"9c0f0294-9956-4bf5-a1c3-2f7010c70008\") " Nov 25 15:31:05 crc kubenswrapper[4806]: I1125 15:31:05.800195 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c0f0294-9956-4bf5-a1c3-2f7010c70008-inventory\") pod \"9c0f0294-9956-4bf5-a1c3-2f7010c70008\" (UID: \"9c0f0294-9956-4bf5-a1c3-2f7010c70008\") " Nov 25 15:31:05 crc kubenswrapper[4806]: I1125 15:31:05.805660 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c0f0294-9956-4bf5-a1c3-2f7010c70008-kube-api-access-h9f7j" (OuterVolumeSpecName: "kube-api-access-h9f7j") pod "9c0f0294-9956-4bf5-a1c3-2f7010c70008" (UID: "9c0f0294-9956-4bf5-a1c3-2f7010c70008"). InnerVolumeSpecName "kube-api-access-h9f7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:31:05 crc kubenswrapper[4806]: I1125 15:31:05.828670 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c0f0294-9956-4bf5-a1c3-2f7010c70008-inventory" (OuterVolumeSpecName: "inventory") pod "9c0f0294-9956-4bf5-a1c3-2f7010c70008" (UID: "9c0f0294-9956-4bf5-a1c3-2f7010c70008"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:31:05 crc kubenswrapper[4806]: I1125 15:31:05.856546 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c0f0294-9956-4bf5-a1c3-2f7010c70008-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9c0f0294-9956-4bf5-a1c3-2f7010c70008" (UID: "9c0f0294-9956-4bf5-a1c3-2f7010c70008"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:31:05 crc kubenswrapper[4806]: I1125 15:31:05.902310 4806 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9c0f0294-9956-4bf5-a1c3-2f7010c70008-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 15:31:05 crc kubenswrapper[4806]: I1125 15:31:05.902361 4806 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9c0f0294-9956-4bf5-a1c3-2f7010c70008-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 15:31:05 crc kubenswrapper[4806]: I1125 15:31:05.902372 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9f7j\" (UniqueName: \"kubernetes.io/projected/9c0f0294-9956-4bf5-a1c3-2f7010c70008-kube-api-access-h9f7j\") on node \"crc\" DevicePath \"\"" Nov 25 15:31:06 crc kubenswrapper[4806]: I1125 15:31:06.101706 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="684c8b96-816f-4116-a5f9-7be11e0b9915" path="/var/lib/kubelet/pods/684c8b96-816f-4116-a5f9-7be11e0b9915/volumes" Nov 25 15:31:06 crc kubenswrapper[4806]: I1125 15:31:06.130227 4806 generic.go:334] "Generic (PLEG): container finished" podID="aab8bc77-d4ee-431c-986b-768bf3c5e139" containerID="0ebe8df5727179caee2b55c1fd97b225f5dd252fac3735dcb75c0d09aabd4b78" exitCode=0 Nov 25 15:31:06 crc kubenswrapper[4806]: I1125 15:31:06.130307 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x44z7" event={"ID":"aab8bc77-d4ee-431c-986b-768bf3c5e139","Type":"ContainerDied","Data":"0ebe8df5727179caee2b55c1fd97b225f5dd252fac3735dcb75c0d09aabd4b78"} Nov 25 15:31:06 crc kubenswrapper[4806]: I1125 15:31:06.134925 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kb9n4" event={"ID":"9c0f0294-9956-4bf5-a1c3-2f7010c70008","Type":"ContainerDied","Data":"8fdbfdf47e22f97caed706ee0a61d11770afe7dbfe2752a68498d2b7c354cb24"} Nov 25 15:31:06 crc kubenswrapper[4806]: I1125 15:31:06.134962 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-kb9n4" Nov 25 15:31:06 crc kubenswrapper[4806]: I1125 15:31:06.134964 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fdbfdf47e22f97caed706ee0a61d11770afe7dbfe2752a68498d2b7c354cb24" Nov 25 15:31:06 crc kubenswrapper[4806]: I1125 15:31:06.208443 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-qbtlz"] Nov 25 15:31:06 crc kubenswrapper[4806]: E1125 15:31:06.209123 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="684c8b96-816f-4116-a5f9-7be11e0b9915" containerName="extract-utilities" Nov 25 15:31:06 crc kubenswrapper[4806]: I1125 15:31:06.209163 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="684c8b96-816f-4116-a5f9-7be11e0b9915" containerName="extract-utilities" Nov 25 15:31:06 crc kubenswrapper[4806]: E1125 15:31:06.209202 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c0f0294-9956-4bf5-a1c3-2f7010c70008" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 25 15:31:06 crc kubenswrapper[4806]: I1125 15:31:06.209217 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c0f0294-9956-4bf5-a1c3-2f7010c70008" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 25 15:31:06 crc kubenswrapper[4806]: E1125 15:31:06.209248 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="684c8b96-816f-4116-a5f9-7be11e0b9915" containerName="extract-content" Nov 25 15:31:06 crc kubenswrapper[4806]: I1125 15:31:06.209259 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="684c8b96-816f-4116-a5f9-7be11e0b9915" containerName="extract-content" Nov 25 15:31:06 crc kubenswrapper[4806]: E1125 15:31:06.209280 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="684c8b96-816f-4116-a5f9-7be11e0b9915" containerName="registry-server" Nov 25 15:31:06 crc kubenswrapper[4806]: I1125 15:31:06.209292 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="684c8b96-816f-4116-a5f9-7be11e0b9915" containerName="registry-server" Nov 25 15:31:06 crc kubenswrapper[4806]: I1125 15:31:06.209671 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="684c8b96-816f-4116-a5f9-7be11e0b9915" containerName="registry-server" Nov 25 15:31:06 crc kubenswrapper[4806]: I1125 15:31:06.209718 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c0f0294-9956-4bf5-a1c3-2f7010c70008" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 25 15:31:06 crc kubenswrapper[4806]: I1125 15:31:06.210897 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-qbtlz" Nov 25 15:31:06 crc kubenswrapper[4806]: I1125 15:31:06.214418 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r8q8k" Nov 25 15:31:06 crc kubenswrapper[4806]: I1125 15:31:06.214669 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 15:31:06 crc kubenswrapper[4806]: I1125 15:31:06.214672 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 15:31:06 crc kubenswrapper[4806]: I1125 15:31:06.218726 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 15:31:06 crc kubenswrapper[4806]: I1125 15:31:06.219689 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-qbtlz"] Nov 25 15:31:06 crc kubenswrapper[4806]: I1125 15:31:06.309904 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0d16f874-9406-497e-ad89-6e5ce5c109f5-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-qbtlz\" (UID: \"0d16f874-9406-497e-ad89-6e5ce5c109f5\") " pod="openstack/ssh-known-hosts-edpm-deployment-qbtlz" Nov 25 15:31:06 crc kubenswrapper[4806]: I1125 15:31:06.310377 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/0d16f874-9406-497e-ad89-6e5ce5c109f5-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-qbtlz\" (UID: \"0d16f874-9406-497e-ad89-6e5ce5c109f5\") " pod="openstack/ssh-known-hosts-edpm-deployment-qbtlz" Nov 25 15:31:06 crc kubenswrapper[4806]: I1125 15:31:06.310578 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f5j2\" (UniqueName: \"kubernetes.io/projected/0d16f874-9406-497e-ad89-6e5ce5c109f5-kube-api-access-4f5j2\") pod \"ssh-known-hosts-edpm-deployment-qbtlz\" (UID: \"0d16f874-9406-497e-ad89-6e5ce5c109f5\") " pod="openstack/ssh-known-hosts-edpm-deployment-qbtlz" Nov 25 15:31:06 crc kubenswrapper[4806]: I1125 15:31:06.412389 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/0d16f874-9406-497e-ad89-6e5ce5c109f5-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-qbtlz\" (UID: \"0d16f874-9406-497e-ad89-6e5ce5c109f5\") " pod="openstack/ssh-known-hosts-edpm-deployment-qbtlz" Nov 25 15:31:06 crc kubenswrapper[4806]: I1125 15:31:06.412580 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f5j2\" (UniqueName: \"kubernetes.io/projected/0d16f874-9406-497e-ad89-6e5ce5c109f5-kube-api-access-4f5j2\") pod \"ssh-known-hosts-edpm-deployment-qbtlz\" (UID: \"0d16f874-9406-497e-ad89-6e5ce5c109f5\") " pod="openstack/ssh-known-hosts-edpm-deployment-qbtlz" Nov 25 15:31:06 crc kubenswrapper[4806]: I1125 15:31:06.412757 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0d16f874-9406-497e-ad89-6e5ce5c109f5-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-qbtlz\" (UID: \"0d16f874-9406-497e-ad89-6e5ce5c109f5\") " pod="openstack/ssh-known-hosts-edpm-deployment-qbtlz" Nov 25 15:31:06 crc 
kubenswrapper[4806]: I1125 15:31:06.417269 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0d16f874-9406-497e-ad89-6e5ce5c109f5-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-qbtlz\" (UID: \"0d16f874-9406-497e-ad89-6e5ce5c109f5\") " pod="openstack/ssh-known-hosts-edpm-deployment-qbtlz" Nov 25 15:31:06 crc kubenswrapper[4806]: I1125 15:31:06.422986 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/0d16f874-9406-497e-ad89-6e5ce5c109f5-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-qbtlz\" (UID: \"0d16f874-9406-497e-ad89-6e5ce5c109f5\") " pod="openstack/ssh-known-hosts-edpm-deployment-qbtlz" Nov 25 15:31:06 crc kubenswrapper[4806]: I1125 15:31:06.429648 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f5j2\" (UniqueName: \"kubernetes.io/projected/0d16f874-9406-497e-ad89-6e5ce5c109f5-kube-api-access-4f5j2\") pod \"ssh-known-hosts-edpm-deployment-qbtlz\" (UID: \"0d16f874-9406-497e-ad89-6e5ce5c109f5\") " pod="openstack/ssh-known-hosts-edpm-deployment-qbtlz" Nov 25 15:31:06 crc kubenswrapper[4806]: I1125 15:31:06.535734 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-qbtlz" Nov 25 15:31:07 crc kubenswrapper[4806]: I1125 15:31:07.148936 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-qbtlz"] Nov 25 15:31:08 crc kubenswrapper[4806]: I1125 15:31:08.163220 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-qbtlz" event={"ID":"0d16f874-9406-497e-ad89-6e5ce5c109f5","Type":"ContainerStarted","Data":"021ff308303f5e8ba2a59b8c2ec11a7dabd44c9b31642c84efbb5c29f7c8a072"} Nov 25 15:31:11 crc kubenswrapper[4806]: I1125 15:31:11.195935 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-qbtlz" event={"ID":"0d16f874-9406-497e-ad89-6e5ce5c109f5","Type":"ContainerStarted","Data":"81fe1999cb8da55daddc6ec5c30cf97f22ef7249b0262a522fb136323a858839"} Nov 25 15:31:11 crc kubenswrapper[4806]: I1125 15:31:11.199841 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x44z7" event={"ID":"aab8bc77-d4ee-431c-986b-768bf3c5e139","Type":"ContainerStarted","Data":"50abd896d1e173f7da6ddf6de6524547f7f1ab49849e564945ac70dd1793b334"} Nov 25 15:31:11 crc kubenswrapper[4806]: I1125 15:31:11.214496 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-qbtlz" podStartSLOduration=4.388289761 podStartE2EDuration="5.214476775s" podCreationTimestamp="2025-11-25 15:31:06 +0000 UTC" firstStartedPulling="2025-11-25 15:31:09.510968552 +0000 UTC m=+2302.163110983" lastFinishedPulling="2025-11-25 15:31:10.337155596 +0000 UTC m=+2302.989297997" observedRunningTime="2025-11-25 15:31:11.209895324 +0000 UTC m=+2303.862037755" watchObservedRunningTime="2025-11-25 15:31:11.214476775 +0000 UTC m=+2303.866619186" Nov 25 15:31:11 crc kubenswrapper[4806]: I1125 15:31:11.238766 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x44z7" podStartSLOduration=4.384789199 podStartE2EDuration="16.238747401s" podCreationTimestamp="2025-11-25 15:30:55 +0000 UTC" firstStartedPulling="2025-11-25 15:30:58.026681083 +0000 UTC 
m=+2290.678823494" lastFinishedPulling="2025-11-25 15:31:09.880639285 +0000 UTC m=+2302.532781696" observedRunningTime="2025-11-25 15:31:11.231698549 +0000 UTC m=+2303.883840970" watchObservedRunningTime="2025-11-25 15:31:11.238747401 +0000 UTC m=+2303.890889812" Nov 25 15:31:15 crc kubenswrapper[4806]: I1125 15:31:15.934049 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x44z7" Nov 25 15:31:15 crc kubenswrapper[4806]: I1125 15:31:15.934601 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x44z7" Nov 25 15:31:15 crc kubenswrapper[4806]: I1125 15:31:15.987884 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x44z7" Nov 25 15:31:16 crc kubenswrapper[4806]: I1125 15:31:16.334726 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x44z7" Nov 25 15:31:16 crc kubenswrapper[4806]: I1125 15:31:16.404090 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x44z7"] Nov 25 15:31:16 crc kubenswrapper[4806]: I1125 15:31:16.493050 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ksqkw"] Nov 25 15:31:16 crc kubenswrapper[4806]: I1125 15:31:16.493274 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ksqkw" podUID="29afdfec-4b9d-40b8-a63d-11ffb2f170c1" containerName="registry-server" containerID="cri-o://bfbe5749cc6af051e29c798ff223b19e8dd6ae2cd728a889fb2de00cc9ef89e5" gracePeriod=2 Nov 25 15:31:17 crc kubenswrapper[4806]: I1125 15:31:17.077479 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ksqkw" Nov 25 15:31:17 crc kubenswrapper[4806]: I1125 15:31:17.185138 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29afdfec-4b9d-40b8-a63d-11ffb2f170c1-utilities\") pod \"29afdfec-4b9d-40b8-a63d-11ffb2f170c1\" (UID: \"29afdfec-4b9d-40b8-a63d-11ffb2f170c1\") " Nov 25 15:31:17 crc kubenswrapper[4806]: I1125 15:31:17.185186 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlw75\" (UniqueName: \"kubernetes.io/projected/29afdfec-4b9d-40b8-a63d-11ffb2f170c1-kube-api-access-mlw75\") pod \"29afdfec-4b9d-40b8-a63d-11ffb2f170c1\" (UID: \"29afdfec-4b9d-40b8-a63d-11ffb2f170c1\") " Nov 25 15:31:17 crc kubenswrapper[4806]: I1125 15:31:17.185298 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29afdfec-4b9d-40b8-a63d-11ffb2f170c1-catalog-content\") pod \"29afdfec-4b9d-40b8-a63d-11ffb2f170c1\" (UID: \"29afdfec-4b9d-40b8-a63d-11ffb2f170c1\") " Nov 25 15:31:17 crc kubenswrapper[4806]: I1125 15:31:17.188271 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29afdfec-4b9d-40b8-a63d-11ffb2f170c1-utilities" (OuterVolumeSpecName: "utilities") pod "29afdfec-4b9d-40b8-a63d-11ffb2f170c1" (UID: "29afdfec-4b9d-40b8-a63d-11ffb2f170c1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:31:17 crc kubenswrapper[4806]: I1125 15:31:17.194467 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29afdfec-4b9d-40b8-a63d-11ffb2f170c1-kube-api-access-mlw75" (OuterVolumeSpecName: "kube-api-access-mlw75") pod "29afdfec-4b9d-40b8-a63d-11ffb2f170c1" (UID: "29afdfec-4b9d-40b8-a63d-11ffb2f170c1"). InnerVolumeSpecName "kube-api-access-mlw75". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:31:17 crc kubenswrapper[4806]: I1125 15:31:17.270037 4806 generic.go:334] "Generic (PLEG): container finished" podID="29afdfec-4b9d-40b8-a63d-11ffb2f170c1" containerID="bfbe5749cc6af051e29c798ff223b19e8dd6ae2cd728a889fb2de00cc9ef89e5" exitCode=0 Nov 25 15:31:17 crc kubenswrapper[4806]: I1125 15:31:17.270143 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ksqkw" Nov 25 15:31:17 crc kubenswrapper[4806]: I1125 15:31:17.270147 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ksqkw" event={"ID":"29afdfec-4b9d-40b8-a63d-11ffb2f170c1","Type":"ContainerDied","Data":"bfbe5749cc6af051e29c798ff223b19e8dd6ae2cd728a889fb2de00cc9ef89e5"} Nov 25 15:31:17 crc kubenswrapper[4806]: I1125 15:31:17.270215 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ksqkw" event={"ID":"29afdfec-4b9d-40b8-a63d-11ffb2f170c1","Type":"ContainerDied","Data":"31fd8bc76da412ddfdf80d65e5803779d94401558d92f9b6c1cf4d34dc820abc"} Nov 25 15:31:17 crc kubenswrapper[4806]: I1125 15:31:17.270244 4806 scope.go:117] "RemoveContainer" containerID="bfbe5749cc6af051e29c798ff223b19e8dd6ae2cd728a889fb2de00cc9ef89e5" Nov 25 15:31:17 crc kubenswrapper[4806]: I1125 15:31:17.272359 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29afdfec-4b9d-40b8-a63d-11ffb2f170c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "29afdfec-4b9d-40b8-a63d-11ffb2f170c1" (UID: "29afdfec-4b9d-40b8-a63d-11ffb2f170c1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:31:17 crc kubenswrapper[4806]: I1125 15:31:17.287813 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29afdfec-4b9d-40b8-a63d-11ffb2f170c1-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 15:31:17 crc kubenswrapper[4806]: I1125 15:31:17.287847 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlw75\" (UniqueName: \"kubernetes.io/projected/29afdfec-4b9d-40b8-a63d-11ffb2f170c1-kube-api-access-mlw75\") on node \"crc\" DevicePath \"\"" Nov 25 15:31:17 crc kubenswrapper[4806]: I1125 15:31:17.287856 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29afdfec-4b9d-40b8-a63d-11ffb2f170c1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 15:31:17 crc kubenswrapper[4806]: I1125 15:31:17.293581 4806 scope.go:117] "RemoveContainer" containerID="58ac16695b35bc00332ce2d6b5a6f3733b63c036cf23b5dfe22c7293beddab32" Nov 25 15:31:17 crc kubenswrapper[4806]: I1125 15:31:17.329006 4806 scope.go:117] "RemoveContainer" containerID="0a967d75df7246b7e13f7efc452ffae1f15f801788cab7735e4193fee33e1bb9" Nov 25 15:31:17 crc kubenswrapper[4806]: I1125 15:31:17.372559 4806 scope.go:117] "RemoveContainer" containerID="bfbe5749cc6af051e29c798ff223b19e8dd6ae2cd728a889fb2de00cc9ef89e5" Nov 25 15:31:17 crc kubenswrapper[4806]: E1125 15:31:17.373222 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfbe5749cc6af051e29c798ff223b19e8dd6ae2cd728a889fb2de00cc9ef89e5\": container with ID starting with bfbe5749cc6af051e29c798ff223b19e8dd6ae2cd728a889fb2de00cc9ef89e5 not found: ID does not exist" containerID="bfbe5749cc6af051e29c798ff223b19e8dd6ae2cd728a889fb2de00cc9ef89e5" Nov 25 15:31:17 crc kubenswrapper[4806]: I1125 15:31:17.373265 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfbe5749cc6af051e29c798ff223b19e8dd6ae2cd728a889fb2de00cc9ef89e5"} err="failed to get container status \"bfbe5749cc6af051e29c798ff223b19e8dd6ae2cd728a889fb2de00cc9ef89e5\": rpc error: code = NotFound desc = could not find container \"bfbe5749cc6af051e29c798ff223b19e8dd6ae2cd728a889fb2de00cc9ef89e5\": container with ID starting with bfbe5749cc6af051e29c798ff223b19e8dd6ae2cd728a889fb2de00cc9ef89e5 not found: ID does not exist" Nov 25 15:31:17 crc kubenswrapper[4806]: I1125 15:31:17.373294 4806 scope.go:117] "RemoveContainer" containerID="58ac16695b35bc00332ce2d6b5a6f3733b63c036cf23b5dfe22c7293beddab32" Nov 25 15:31:17 crc kubenswrapper[4806]: E1125 15:31:17.374814 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58ac16695b35bc00332ce2d6b5a6f3733b63c036cf23b5dfe22c7293beddab32\": container with ID starting with 58ac16695b35bc00332ce2d6b5a6f3733b63c036cf23b5dfe22c7293beddab32 not found: ID does not exist" containerID="58ac16695b35bc00332ce2d6b5a6f3733b63c036cf23b5dfe22c7293beddab32" Nov 25 15:31:17 crc kubenswrapper[4806]: I1125 15:31:17.374855 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58ac16695b35bc00332ce2d6b5a6f3733b63c036cf23b5dfe22c7293beddab32"} err="failed to get container status \"58ac16695b35bc00332ce2d6b5a6f3733b63c036cf23b5dfe22c7293beddab32\": rpc error: code = NotFound desc = could not find container 
\"58ac16695b35bc00332ce2d6b5a6f3733b63c036cf23b5dfe22c7293beddab32\": container with ID starting with 58ac16695b35bc00332ce2d6b5a6f3733b63c036cf23b5dfe22c7293beddab32 not found: ID does not exist" Nov 25 15:31:17 crc kubenswrapper[4806]: I1125 15:31:17.374884 4806 scope.go:117] "RemoveContainer" containerID="0a967d75df7246b7e13f7efc452ffae1f15f801788cab7735e4193fee33e1bb9" Nov 25 15:31:17 crc kubenswrapper[4806]: E1125 15:31:17.375202 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a967d75df7246b7e13f7efc452ffae1f15f801788cab7735e4193fee33e1bb9\": container with ID starting with 0a967d75df7246b7e13f7efc452ffae1f15f801788cab7735e4193fee33e1bb9 not found: ID does not exist" containerID="0a967d75df7246b7e13f7efc452ffae1f15f801788cab7735e4193fee33e1bb9" Nov 25 15:31:17 crc kubenswrapper[4806]: I1125 15:31:17.375231 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a967d75df7246b7e13f7efc452ffae1f15f801788cab7735e4193fee33e1bb9"} err="failed to get container status \"0a967d75df7246b7e13f7efc452ffae1f15f801788cab7735e4193fee33e1bb9\": rpc error: code = NotFound desc = could not find container \"0a967d75df7246b7e13f7efc452ffae1f15f801788cab7735e4193fee33e1bb9\": container with ID starting with 0a967d75df7246b7e13f7efc452ffae1f15f801788cab7735e4193fee33e1bb9 not found: ID does not exist" Nov 25 15:31:17 crc kubenswrapper[4806]: I1125 15:31:17.608364 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ksqkw"] Nov 25 15:31:17 crc kubenswrapper[4806]: I1125 15:31:17.620175 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ksqkw"] Nov 25 15:31:18 crc kubenswrapper[4806]: I1125 15:31:18.105842 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29afdfec-4b9d-40b8-a63d-11ffb2f170c1" path="/var/lib/kubelet/pods/29afdfec-4b9d-40b8-a63d-11ffb2f170c1/volumes" Nov 25 15:31:18 crc kubenswrapper[4806]: I1125 15:31:18.316432 4806 generic.go:334] "Generic (PLEG): container finished" podID="0d16f874-9406-497e-ad89-6e5ce5c109f5" containerID="81fe1999cb8da55daddc6ec5c30cf97f22ef7249b0262a522fb136323a858839" exitCode=0 Nov 25 15:31:18 crc kubenswrapper[4806]: I1125 15:31:18.317485 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-qbtlz" event={"ID":"0d16f874-9406-497e-ad89-6e5ce5c109f5","Type":"ContainerDied","Data":"81fe1999cb8da55daddc6ec5c30cf97f22ef7249b0262a522fb136323a858839"} Nov 25 15:31:19 crc kubenswrapper[4806]: I1125 15:31:19.233460 4806 scope.go:117] "RemoveContainer" containerID="35326fe0bfbfc0029635f575b6261d37eae34b70d75a24cc28e3d756f8c7383c" Nov 25 15:31:19 crc kubenswrapper[4806]: I1125 15:31:19.291583 4806 scope.go:117] "RemoveContainer" containerID="90ddb67e72a52d632d3f29a549cc1cf6282f72c02eb024fc3b25ca819a101978" Nov 25 15:31:20 crc kubenswrapper[4806]: I1125 15:31:20.085907 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-qbtlz" Nov 25 15:31:20 crc kubenswrapper[4806]: I1125 15:31:20.241281 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0d16f874-9406-497e-ad89-6e5ce5c109f5-ssh-key-openstack-edpm-ipam\") pod \"0d16f874-9406-497e-ad89-6e5ce5c109f5\" (UID: \"0d16f874-9406-497e-ad89-6e5ce5c109f5\") " Nov 25 15:31:20 crc kubenswrapper[4806]: I1125 15:31:20.241645 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/0d16f874-9406-497e-ad89-6e5ce5c109f5-inventory-0\") pod \"0d16f874-9406-497e-ad89-6e5ce5c109f5\" (UID: \"0d16f874-9406-497e-ad89-6e5ce5c109f5\") " Nov 25 15:31:20 crc kubenswrapper[4806]: I1125 15:31:20.241781 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4f5j2\" (UniqueName: \"kubernetes.io/projected/0d16f874-9406-497e-ad89-6e5ce5c109f5-kube-api-access-4f5j2\") pod \"0d16f874-9406-497e-ad89-6e5ce5c109f5\" (UID: \"0d16f874-9406-497e-ad89-6e5ce5c109f5\") " Nov 25 15:31:20 crc kubenswrapper[4806]: I1125 15:31:20.249723 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d16f874-9406-497e-ad89-6e5ce5c109f5-kube-api-access-4f5j2" (OuterVolumeSpecName: "kube-api-access-4f5j2") pod "0d16f874-9406-497e-ad89-6e5ce5c109f5" (UID: "0d16f874-9406-497e-ad89-6e5ce5c109f5"). InnerVolumeSpecName "kube-api-access-4f5j2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:31:20 crc kubenswrapper[4806]: I1125 15:31:20.298272 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d16f874-9406-497e-ad89-6e5ce5c109f5-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "0d16f874-9406-497e-ad89-6e5ce5c109f5" (UID: "0d16f874-9406-497e-ad89-6e5ce5c109f5"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:31:20 crc kubenswrapper[4806]: I1125 15:31:20.325646 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d16f874-9406-497e-ad89-6e5ce5c109f5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0d16f874-9406-497e-ad89-6e5ce5c109f5" (UID: "0d16f874-9406-497e-ad89-6e5ce5c109f5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:31:20 crc kubenswrapper[4806]: I1125 15:31:20.344869 4806 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/0d16f874-9406-497e-ad89-6e5ce5c109f5-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:31:20 crc kubenswrapper[4806]: I1125 15:31:20.345275 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4f5j2\" (UniqueName: \"kubernetes.io/projected/0d16f874-9406-497e-ad89-6e5ce5c109f5-kube-api-access-4f5j2\") on node \"crc\" DevicePath \"\"" Nov 25 15:31:20 crc kubenswrapper[4806]: I1125 15:31:20.345287 4806 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0d16f874-9406-497e-ad89-6e5ce5c109f5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 25 15:31:20 crc kubenswrapper[4806]: I1125 15:31:20.352418 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-qbtlz" event={"ID":"0d16f874-9406-497e-ad89-6e5ce5c109f5","Type":"ContainerDied","Data":"021ff308303f5e8ba2a59b8c2ec11a7dabd44c9b31642c84efbb5c29f7c8a072"} Nov 25 15:31:20 crc kubenswrapper[4806]: I1125 15:31:20.352465 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="021ff308303f5e8ba2a59b8c2ec11a7dabd44c9b31642c84efbb5c29f7c8a072" Nov 25 15:31:20 crc kubenswrapper[4806]: I1125 15:31:20.352539 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-qbtlz" Nov 25 15:31:20 crc kubenswrapper[4806]: I1125 15:31:20.456453 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-dtz44"] Nov 25 15:31:20 crc kubenswrapper[4806]: E1125 15:31:20.456856 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29afdfec-4b9d-40b8-a63d-11ffb2f170c1" containerName="extract-utilities" Nov 25 15:31:20 crc kubenswrapper[4806]: I1125 15:31:20.456874 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="29afdfec-4b9d-40b8-a63d-11ffb2f170c1" containerName="extract-utilities" Nov 25 15:31:20 crc kubenswrapper[4806]: E1125 15:31:20.456905 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29afdfec-4b9d-40b8-a63d-11ffb2f170c1" containerName="registry-server" Nov 25 15:31:20 crc kubenswrapper[4806]: I1125 15:31:20.456913 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="29afdfec-4b9d-40b8-a63d-11ffb2f170c1" containerName="registry-server" Nov 25 15:31:20 crc kubenswrapper[4806]: E1125 15:31:20.456932 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d16f874-9406-497e-ad89-6e5ce5c109f5" containerName="ssh-known-hosts-edpm-deployment" Nov 25 15:31:20 crc kubenswrapper[4806]: I1125 15:31:20.456938 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d16f874-9406-497e-ad89-6e5ce5c109f5" containerName="ssh-known-hosts-edpm-deployment" Nov 25 15:31:20 crc kubenswrapper[4806]: E1125 15:31:20.456958 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29afdfec-4b9d-40b8-a63d-11ffb2f170c1" containerName="extract-content" Nov 25 15:31:20 crc kubenswrapper[4806]: I1125 15:31:20.456965 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="29afdfec-4b9d-40b8-a63d-11ffb2f170c1" containerName="extract-content" Nov 25 15:31:20 crc kubenswrapper[4806]: I1125 15:31:20.457177 4806 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="29afdfec-4b9d-40b8-a63d-11ffb2f170c1" containerName="registry-server" Nov 25 15:31:20 crc kubenswrapper[4806]: I1125 15:31:20.457199 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d16f874-9406-497e-ad89-6e5ce5c109f5" containerName="ssh-known-hosts-edpm-deployment" Nov 25 15:31:20 crc kubenswrapper[4806]: I1125 15:31:20.457974 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dtz44" Nov 25 15:31:20 crc kubenswrapper[4806]: I1125 15:31:20.461753 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 15:31:20 crc kubenswrapper[4806]: I1125 15:31:20.461950 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r8q8k" Nov 25 15:31:20 crc kubenswrapper[4806]: I1125 15:31:20.462115 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 15:31:20 crc kubenswrapper[4806]: I1125 15:31:20.462254 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 15:31:20 crc kubenswrapper[4806]: I1125 15:31:20.511242 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-dtz44"] Nov 25 15:31:20 crc kubenswrapper[4806]: I1125 15:31:20.652825 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6ab72e48-ad31-4614-a3a0-44f0dd9762a9-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dtz44\" (UID: \"6ab72e48-ad31-4614-a3a0-44f0dd9762a9\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dtz44" Nov 25 15:31:20 crc kubenswrapper[4806]: I1125 15:31:20.652877 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ab72e48-ad31-4614-a3a0-44f0dd9762a9-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dtz44\" (UID: \"6ab72e48-ad31-4614-a3a0-44f0dd9762a9\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dtz44" Nov 25 15:31:20 crc kubenswrapper[4806]: I1125 15:31:20.652924 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffjvd\" (UniqueName: \"kubernetes.io/projected/6ab72e48-ad31-4614-a3a0-44f0dd9762a9-kube-api-access-ffjvd\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dtz44\" (UID: \"6ab72e48-ad31-4614-a3a0-44f0dd9762a9\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dtz44" Nov 25 15:31:20 crc kubenswrapper[4806]: I1125 15:31:20.755608 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffjvd\" (UniqueName: \"kubernetes.io/projected/6ab72e48-ad31-4614-a3a0-44f0dd9762a9-kube-api-access-ffjvd\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dtz44\" (UID: \"6ab72e48-ad31-4614-a3a0-44f0dd9762a9\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dtz44" Nov 25 15:31:20 crc kubenswrapper[4806]: I1125 15:31:20.755810 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6ab72e48-ad31-4614-a3a0-44f0dd9762a9-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dtz44\" (UID: \"6ab72e48-ad31-4614-a3a0-44f0dd9762a9\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dtz44" Nov 25 15:31:20 crc kubenswrapper[4806]: I1125 15:31:20.755854 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ab72e48-ad31-4614-a3a0-44f0dd9762a9-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dtz44\" (UID: \"6ab72e48-ad31-4614-a3a0-44f0dd9762a9\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dtz44" Nov 25 15:31:20 crc kubenswrapper[4806]: I1125 15:31:20.773227 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6ab72e48-ad31-4614-a3a0-44f0dd9762a9-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dtz44\" (UID: \"6ab72e48-ad31-4614-a3a0-44f0dd9762a9\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dtz44" Nov 25 15:31:20 crc kubenswrapper[4806]: I1125 15:31:20.773227 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ab72e48-ad31-4614-a3a0-44f0dd9762a9-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dtz44\" (UID: \"6ab72e48-ad31-4614-a3a0-44f0dd9762a9\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dtz44" Nov 25 15:31:20 crc kubenswrapper[4806]: I1125 15:31:20.787016 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffjvd\" (UniqueName: \"kubernetes.io/projected/6ab72e48-ad31-4614-a3a0-44f0dd9762a9-kube-api-access-ffjvd\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-dtz44\" (UID: \"6ab72e48-ad31-4614-a3a0-44f0dd9762a9\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dtz44" Nov 25 15:31:20 crc kubenswrapper[4806]: I1125 15:31:20.831401 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dtz44" Nov 25 15:31:21 crc kubenswrapper[4806]: I1125 15:31:21.417549 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-dtz44"] Nov 25 15:31:21 crc kubenswrapper[4806]: W1125 15:31:21.425854 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ab72e48_ad31_4614_a3a0_44f0dd9762a9.slice/crio-61ee91c60dac4f596d9056b92ab143f1e38ca492362985abfdb2181d4890d9d9 WatchSource:0}: Error finding container 61ee91c60dac4f596d9056b92ab143f1e38ca492362985abfdb2181d4890d9d9: Status 404 returned error can't find the container with id 61ee91c60dac4f596d9056b92ab143f1e38ca492362985abfdb2181d4890d9d9 Nov 25 15:31:22 crc kubenswrapper[4806]: I1125 15:31:22.373591 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dtz44" event={"ID":"6ab72e48-ad31-4614-a3a0-44f0dd9762a9","Type":"ContainerStarted","Data":"61ee91c60dac4f596d9056b92ab143f1e38ca492362985abfdb2181d4890d9d9"} Nov 25 15:31:22 crc kubenswrapper[4806]: E1125 15:31:22.807795 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = parsing image configuration: Get \"https://cdn01.quay.io/quayio-production-s3/sha256/70/707c3b9a8ea6ae2dd3165a057598d2caaf3bd7c561244a499f303590b4ddfe38?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIATAAF2YHTGR23ZTE6%2F20251125%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20251125T153121Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=77b89f403023e0817ff7229293d788802195b89192730b5bd8f4a1c36bc7b743®ion=us-east-1&namespace=openstack-k8s-operators&username=openshift-release-dev+ocm_access_1b89217552bc42d1be3fb06a1aed001a&repo_name=openstack-ansibleee-runner&akamai_signature=exp=1764085581~hmac=17d1891a6768683978b6c5282eadc5af65f7af5e1c7e4c1a36a13d4b5edab66c\": remote error: tls: internal error" image="quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest" Nov 25 15:31:22 crc kubenswrapper[4806]: E1125 15:31:22.808068 4806 kuberuntime_manager.go:1274] "Unhandled Error" err=< Nov 25 15:31:22 crc kubenswrapper[4806]: container &Container{Name:run-os-edpm-deployment-openstack-edpm-ipam,Image:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,Command:[],Args:[ansible-runner run /runner -p osp.edpm.run_os -i run-os-edpm-deployment-openstack-edpm-ipam],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ANSIBLE_VERBOSITY,Value:2,ValueFrom:nil,},EnvVar{Name:RUNNER_PLAYBOOK,Value: Nov 25 15:31:22 crc kubenswrapper[4806]: osp.edpm.run_os Nov 25 15:31:22 crc kubenswrapper[4806]: Nov 25 15:31:22 crc kubenswrapper[4806]: ,ValueFrom:nil,},EnvVar{Name:RUNNER_EXTRA_VARS,Value: Nov 25 15:31:22 crc kubenswrapper[4806]: edpm_override_hosts: openstack-edpm-ipam Nov 25 15:31:22 crc kubenswrapper[4806]: edpm_service_type: run-os Nov 25 15:31:22 crc kubenswrapper[4806]: Nov 25 15:31:22 crc kubenswrapper[4806]: Nov 25 15:31:22 crc kubenswrapper[4806]: 
,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/runner/env/ssh_key,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:inventory,ReadOnly:false,MountPath:/runner/inventory/hosts,SubPath:inventory,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ffjvd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:openstack-aee-default-env,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod run-os-edpm-deployment-openstack-edpm-ipam-dtz44_openstack(6ab72e48-ad31-4614-a3a0-44f0dd9762a9): ErrImagePull: parsing image configuration: Get "https://cdn01.quay.io/quayio-production-s3/sha256/70/707c3b9a8ea6ae2dd3165a057598d2caaf3bd7c561244a499f303590b4ddfe38?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIATAAF2YHTGR23ZTE6%2F20251125%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20251125T153121Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=77b89f403023e0817ff7229293d788802195b89192730b5bd8f4a1c36bc7b743®ion=us-east-1&namespace=openstack-k8s-operators&username=openshift-release-dev+ocm_access_1b89217552bc42d1be3fb06a1aed001a&repo_name=openstack-ansibleee-runner&akamai_signature=exp=1764085581~hmac=17d1891a6768683978b6c5282eadc5af65f7af5e1c7e4c1a36a13d4b5edab66c": remote error: tls: internal error Nov 25 15:31:22 crc kubenswrapper[4806]: > logger="UnhandledError" Nov 25 15:31:22 crc kubenswrapper[4806]: E1125 15:31:22.809388 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"run-os-edpm-deployment-openstack-edpm-ipam\" with ErrImagePull: \"parsing image configuration: Get \\\"https://cdn01.quay.io/quayio-production-s3/sha256/70/707c3b9a8ea6ae2dd3165a057598d2caaf3bd7c561244a499f303590b4ddfe38?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIATAAF2YHTGR23ZTE6%2F20251125%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20251125T153121Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=77b89f403023e0817ff7229293d788802195b89192730b5bd8f4a1c36bc7b743®ion=us-east-1&namespace=openstack-k8s-operators&username=openshift-release-dev+ocm_access_1b89217552bc42d1be3fb06a1aed001a&repo_name=openstack-ansibleee-runner&akamai_signature=exp=1764085581~hmac=17d1891a6768683978b6c5282eadc5af65f7af5e1c7e4c1a36a13d4b5edab66c\\\": remote error: tls: internal error\"" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dtz44" podUID="6ab72e48-ad31-4614-a3a0-44f0dd9762a9" Nov 25 15:31:23 crc kubenswrapper[4806]: E1125 15:31:23.392694 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"run-os-edpm-deployment-openstack-edpm-ipam\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest\\\"\"" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dtz44" podUID="6ab72e48-ad31-4614-a3a0-44f0dd9762a9" Nov 25 15:31:40 crc kubenswrapper[4806]: I1125 15:31:40.575278 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dtz44" event={"ID":"6ab72e48-ad31-4614-a3a0-44f0dd9762a9","Type":"ContainerStarted","Data":"c88ec63f6f465c8e5fc1f18eb8134bc4e5863de39378b6ddf15275ff7399d6e4"} Nov 25 15:31:40 crc kubenswrapper[4806]: I1125 15:31:40.601501 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dtz44" podStartSLOduration=2.829116971 podStartE2EDuration="20.601480001s" podCreationTimestamp="2025-11-25 15:31:20 +0000 UTC" firstStartedPulling="2025-11-25 15:31:21.428838037 +0000 UTC m=+2314.080980448" lastFinishedPulling="2025-11-25 15:31:39.201201057 +0000 UTC m=+2331.853343478" observedRunningTime="2025-11-25 15:31:40.594526541 +0000 UTC m=+2333.246668962" watchObservedRunningTime="2025-11-25 15:31:40.601480001 +0000 UTC m=+2333.253622412" Nov 25 15:31:48 crc kubenswrapper[4806]: I1125 15:31:48.666103 4806 generic.go:334] "Generic (PLEG): container finished" podID="6ab72e48-ad31-4614-a3a0-44f0dd9762a9" containerID="c88ec63f6f465c8e5fc1f18eb8134bc4e5863de39378b6ddf15275ff7399d6e4" exitCode=0 Nov 25 15:31:48 crc kubenswrapper[4806]: I1125 15:31:48.666874 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dtz44" event={"ID":"6ab72e48-ad31-4614-a3a0-44f0dd9762a9","Type":"ContainerDied","Data":"c88ec63f6f465c8e5fc1f18eb8134bc4e5863de39378b6ddf15275ff7399d6e4"} Nov 25 15:31:50 crc kubenswrapper[4806]: I1125 15:31:50.315825 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dtz44" Nov 25 15:31:50 crc kubenswrapper[4806]: I1125 15:31:50.476281 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6ab72e48-ad31-4614-a3a0-44f0dd9762a9-ssh-key\") pod \"6ab72e48-ad31-4614-a3a0-44f0dd9762a9\" (UID: \"6ab72e48-ad31-4614-a3a0-44f0dd9762a9\") " Nov 25 15:31:50 crc kubenswrapper[4806]: I1125 15:31:50.476371 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ab72e48-ad31-4614-a3a0-44f0dd9762a9-inventory\") pod \"6ab72e48-ad31-4614-a3a0-44f0dd9762a9\" (UID: \"6ab72e48-ad31-4614-a3a0-44f0dd9762a9\") " Nov 25 15:31:50 crc kubenswrapper[4806]: I1125 15:31:50.476480 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffjvd\" (UniqueName: \"kubernetes.io/projected/6ab72e48-ad31-4614-a3a0-44f0dd9762a9-kube-api-access-ffjvd\") pod \"6ab72e48-ad31-4614-a3a0-44f0dd9762a9\" (UID: \"6ab72e48-ad31-4614-a3a0-44f0dd9762a9\") " Nov 25 15:31:50 crc kubenswrapper[4806]: I1125 15:31:50.487425 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ab72e48-ad31-4614-a3a0-44f0dd9762a9-kube-api-access-ffjvd" (OuterVolumeSpecName: "kube-api-access-ffjvd") pod "6ab72e48-ad31-4614-a3a0-44f0dd9762a9" (UID: "6ab72e48-ad31-4614-a3a0-44f0dd9762a9"). InnerVolumeSpecName "kube-api-access-ffjvd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:31:50 crc kubenswrapper[4806]: I1125 15:31:50.509528 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ab72e48-ad31-4614-a3a0-44f0dd9762a9-inventory" (OuterVolumeSpecName: "inventory") pod "6ab72e48-ad31-4614-a3a0-44f0dd9762a9" (UID: "6ab72e48-ad31-4614-a3a0-44f0dd9762a9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:31:50 crc kubenswrapper[4806]: I1125 15:31:50.511339 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ab72e48-ad31-4614-a3a0-44f0dd9762a9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6ab72e48-ad31-4614-a3a0-44f0dd9762a9" (UID: "6ab72e48-ad31-4614-a3a0-44f0dd9762a9"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:31:50 crc kubenswrapper[4806]: I1125 15:31:50.581063 4806 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6ab72e48-ad31-4614-a3a0-44f0dd9762a9-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 15:31:50 crc kubenswrapper[4806]: I1125 15:31:50.581107 4806 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ab72e48-ad31-4614-a3a0-44f0dd9762a9-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 15:31:50 crc kubenswrapper[4806]: I1125 15:31:50.581121 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffjvd\" (UniqueName: \"kubernetes.io/projected/6ab72e48-ad31-4614-a3a0-44f0dd9762a9-kube-api-access-ffjvd\") on node \"crc\" DevicePath \"\"" Nov 25 15:31:50 crc kubenswrapper[4806]: I1125 15:31:50.690386 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dtz44" event={"ID":"6ab72e48-ad31-4614-a3a0-44f0dd9762a9","Type":"ContainerDied","Data":"61ee91c60dac4f596d9056b92ab143f1e38ca492362985abfdb2181d4890d9d9"} Nov 25 15:31:50 crc kubenswrapper[4806]: I1125 15:31:50.690431 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61ee91c60dac4f596d9056b92ab143f1e38ca492362985abfdb2181d4890d9d9" Nov 25 15:31:50 crc kubenswrapper[4806]: I1125 15:31:50.690467 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-dtz44" Nov 25 15:31:50 crc kubenswrapper[4806]: I1125 15:31:50.789976 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f9cmk"] Nov 25 15:31:50 crc kubenswrapper[4806]: E1125 15:31:50.790385 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ab72e48-ad31-4614-a3a0-44f0dd9762a9" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 25 15:31:50 crc kubenswrapper[4806]: I1125 15:31:50.790404 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ab72e48-ad31-4614-a3a0-44f0dd9762a9" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 25 15:31:50 crc kubenswrapper[4806]: I1125 15:31:50.790602 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ab72e48-ad31-4614-a3a0-44f0dd9762a9" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 25 15:31:50 crc kubenswrapper[4806]: I1125 15:31:50.791277 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f9cmk" Nov 25 15:31:50 crc kubenswrapper[4806]: I1125 15:31:50.793450 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 15:31:50 crc kubenswrapper[4806]: I1125 15:31:50.793494 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 15:31:50 crc kubenswrapper[4806]: I1125 15:31:50.794019 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r8q8k" Nov 25 15:31:50 crc kubenswrapper[4806]: I1125 15:31:50.794474 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 15:31:50 crc kubenswrapper[4806]: I1125 15:31:50.822030 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f9cmk"] Nov 25 15:31:50 crc kubenswrapper[4806]: I1125 15:31:50.988254 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75g2l\" (UniqueName: \"kubernetes.io/projected/2f849708-31fc-45af-8eb8-75bd30094be9-kube-api-access-75g2l\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-f9cmk\" (UID: \"2f849708-31fc-45af-8eb8-75bd30094be9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f9cmk" Nov 25 15:31:50 crc kubenswrapper[4806]: I1125 15:31:50.988332 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f849708-31fc-45af-8eb8-75bd30094be9-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-f9cmk\" (UID: \"2f849708-31fc-45af-8eb8-75bd30094be9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f9cmk" Nov 25 15:31:50 crc kubenswrapper[4806]: I1125 15:31:50.988444 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f849708-31fc-45af-8eb8-75bd30094be9-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-f9cmk\" (UID: \"2f849708-31fc-45af-8eb8-75bd30094be9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f9cmk" Nov 25 15:31:51 crc kubenswrapper[4806]: I1125 15:31:51.090496 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f849708-31fc-45af-8eb8-75bd30094be9-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-f9cmk\" (UID: \"2f849708-31fc-45af-8eb8-75bd30094be9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f9cmk" Nov 25 15:31:51 crc kubenswrapper[4806]: I1125 15:31:51.091472 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75g2l\" (UniqueName: \"kubernetes.io/projected/2f849708-31fc-45af-8eb8-75bd30094be9-kube-api-access-75g2l\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-f9cmk\" (UID: \"2f849708-31fc-45af-8eb8-75bd30094be9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f9cmk" Nov 25 15:31:51 crc kubenswrapper[4806]: I1125 15:31:51.091546 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f849708-31fc-45af-8eb8-75bd30094be9-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-f9cmk\" (UID: 
\"2f849708-31fc-45af-8eb8-75bd30094be9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f9cmk" Nov 25 15:31:51 crc kubenswrapper[4806]: I1125 15:31:51.094210 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f849708-31fc-45af-8eb8-75bd30094be9-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-f9cmk\" (UID: \"2f849708-31fc-45af-8eb8-75bd30094be9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f9cmk" Nov 25 15:31:51 crc kubenswrapper[4806]: I1125 15:31:51.094742 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f849708-31fc-45af-8eb8-75bd30094be9-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-f9cmk\" (UID: \"2f849708-31fc-45af-8eb8-75bd30094be9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f9cmk" Nov 25 15:31:51 crc kubenswrapper[4806]: I1125 15:31:51.152821 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75g2l\" (UniqueName: \"kubernetes.io/projected/2f849708-31fc-45af-8eb8-75bd30094be9-kube-api-access-75g2l\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-f9cmk\" (UID: \"2f849708-31fc-45af-8eb8-75bd30094be9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f9cmk" Nov 25 15:31:51 crc kubenswrapper[4806]: I1125 15:31:51.409736 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f9cmk" Nov 25 15:31:52 crc kubenswrapper[4806]: I1125 15:31:52.058157 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f9cmk"] Nov 25 15:31:52 crc kubenswrapper[4806]: I1125 15:31:52.711479 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f9cmk" event={"ID":"2f849708-31fc-45af-8eb8-75bd30094be9","Type":"ContainerStarted","Data":"39102b5a7b1ebca27881508f33e56ce84bcf51d99e5fe2c389c74a476c46008b"} Nov 25 15:31:54 crc kubenswrapper[4806]: I1125 15:31:54.731602 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f9cmk" event={"ID":"2f849708-31fc-45af-8eb8-75bd30094be9","Type":"ContainerStarted","Data":"99ed5e0bf050cbb3844ec48cbe9afc3f22f542bd454642ff5eff1fe7477a3488"} Nov 25 15:31:54 crc kubenswrapper[4806]: I1125 15:31:54.751438 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f9cmk" podStartSLOduration=3.320535331 podStartE2EDuration="4.751415701s" podCreationTimestamp="2025-11-25 15:31:50 +0000 UTC" firstStartedPulling="2025-11-25 15:31:52.056951315 +0000 UTC m=+2344.709093726" lastFinishedPulling="2025-11-25 15:31:53.487831665 +0000 UTC m=+2346.139974096" observedRunningTime="2025-11-25 15:31:54.74858333 +0000 UTC m=+2347.400725761" watchObservedRunningTime="2025-11-25 15:31:54.751415701 +0000 UTC m=+2347.403558122" Nov 25 15:32:03 crc kubenswrapper[4806]: I1125 15:32:03.847240 4806 generic.go:334] "Generic (PLEG): container finished" podID="2f849708-31fc-45af-8eb8-75bd30094be9" containerID="99ed5e0bf050cbb3844ec48cbe9afc3f22f542bd454642ff5eff1fe7477a3488" exitCode=0 Nov 25 15:32:03 crc kubenswrapper[4806]: I1125 15:32:03.847429 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f9cmk" 
event={"ID":"2f849708-31fc-45af-8eb8-75bd30094be9","Type":"ContainerDied","Data":"99ed5e0bf050cbb3844ec48cbe9afc3f22f542bd454642ff5eff1fe7477a3488"} Nov 25 15:32:05 crc kubenswrapper[4806]: I1125 15:32:05.402921 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f9cmk" Nov 25 15:32:05 crc kubenswrapper[4806]: I1125 15:32:05.506408 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75g2l\" (UniqueName: \"kubernetes.io/projected/2f849708-31fc-45af-8eb8-75bd30094be9-kube-api-access-75g2l\") pod \"2f849708-31fc-45af-8eb8-75bd30094be9\" (UID: \"2f849708-31fc-45af-8eb8-75bd30094be9\") " Nov 25 15:32:05 crc kubenswrapper[4806]: I1125 15:32:05.506664 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f849708-31fc-45af-8eb8-75bd30094be9-ssh-key\") pod \"2f849708-31fc-45af-8eb8-75bd30094be9\" (UID: \"2f849708-31fc-45af-8eb8-75bd30094be9\") " Nov 25 15:32:05 crc kubenswrapper[4806]: I1125 15:32:05.506697 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f849708-31fc-45af-8eb8-75bd30094be9-inventory\") pod \"2f849708-31fc-45af-8eb8-75bd30094be9\" (UID: \"2f849708-31fc-45af-8eb8-75bd30094be9\") " Nov 25 15:32:05 crc kubenswrapper[4806]: I1125 15:32:05.513700 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f849708-31fc-45af-8eb8-75bd30094be9-kube-api-access-75g2l" (OuterVolumeSpecName: "kube-api-access-75g2l") pod "2f849708-31fc-45af-8eb8-75bd30094be9" (UID: "2f849708-31fc-45af-8eb8-75bd30094be9"). InnerVolumeSpecName "kube-api-access-75g2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:32:05 crc kubenswrapper[4806]: I1125 15:32:05.544636 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f849708-31fc-45af-8eb8-75bd30094be9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2f849708-31fc-45af-8eb8-75bd30094be9" (UID: "2f849708-31fc-45af-8eb8-75bd30094be9"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:32:05 crc kubenswrapper[4806]: I1125 15:32:05.571806 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f849708-31fc-45af-8eb8-75bd30094be9-inventory" (OuterVolumeSpecName: "inventory") pod "2f849708-31fc-45af-8eb8-75bd30094be9" (UID: "2f849708-31fc-45af-8eb8-75bd30094be9"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:32:05 crc kubenswrapper[4806]: I1125 15:32:05.609968 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75g2l\" (UniqueName: \"kubernetes.io/projected/2f849708-31fc-45af-8eb8-75bd30094be9-kube-api-access-75g2l\") on node \"crc\" DevicePath \"\"" Nov 25 15:32:05 crc kubenswrapper[4806]: I1125 15:32:05.610650 4806 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f849708-31fc-45af-8eb8-75bd30094be9-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 15:32:05 crc kubenswrapper[4806]: I1125 15:32:05.610705 4806 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f849708-31fc-45af-8eb8-75bd30094be9-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 15:32:05 crc kubenswrapper[4806]: I1125 15:32:05.868901 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f9cmk" event={"ID":"2f849708-31fc-45af-8eb8-75bd30094be9","Type":"ContainerDied","Data":"39102b5a7b1ebca27881508f33e56ce84bcf51d99e5fe2c389c74a476c46008b"} Nov 25 15:32:05 crc kubenswrapper[4806]: I1125 15:32:05.868945 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39102b5a7b1ebca27881508f33e56ce84bcf51d99e5fe2c389c74a476c46008b" Nov 25 15:32:05 crc kubenswrapper[4806]: I1125 15:32:05.868957 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-f9cmk" Nov 25 15:32:05 crc kubenswrapper[4806]: I1125 15:32:05.984403 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm"] Nov 25 15:32:05 crc kubenswrapper[4806]: E1125 15:32:05.984890 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f849708-31fc-45af-8eb8-75bd30094be9" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 25 15:32:05 crc kubenswrapper[4806]: I1125 15:32:05.984908 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f849708-31fc-45af-8eb8-75bd30094be9" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 25 15:32:05 crc kubenswrapper[4806]: I1125 15:32:05.985188 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f849708-31fc-45af-8eb8-75bd30094be9" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 25 15:32:05 crc kubenswrapper[4806]: I1125 15:32:05.986200 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" Nov 25 15:32:05 crc kubenswrapper[4806]: I1125 15:32:05.993331 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Nov 25 15:32:05 crc kubenswrapper[4806]: I1125 15:32:05.993357 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Nov 25 15:32:05 crc kubenswrapper[4806]: I1125 15:32:05.993836 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r8q8k" Nov 25 15:32:05 crc kubenswrapper[4806]: I1125 15:32:05.996704 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 15:32:05 crc kubenswrapper[4806]: I1125 15:32:05.997692 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 15:32:05 crc kubenswrapper[4806]: I1125 15:32:05.998076 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Nov 25 15:32:05 crc kubenswrapper[4806]: I1125 15:32:05.998358 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 15:32:05 crc kubenswrapper[4806]: I1125 15:32:05.998402 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Nov 25 15:32:06 crc kubenswrapper[4806]: I1125 15:32:06.011164 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm"] Nov 25 15:32:06 crc kubenswrapper[4806]: I1125 15:32:06.119049 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l96zm\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" Nov 25 15:32:06 crc kubenswrapper[4806]: I1125 15:32:06.119101 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l96zm\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" Nov 25 15:32:06 crc kubenswrapper[4806]: I1125 15:32:06.119135 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l96zm\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" Nov 25 15:32:06 crc kubenswrapper[4806]: I1125 15:32:06.119522 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l96zm\" 
(UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" Nov 25 15:32:06 crc kubenswrapper[4806]: I1125 15:32:06.119569 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l96zm\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" Nov 25 15:32:06 crc kubenswrapper[4806]: I1125 15:32:06.119619 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l96zm\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" Nov 25 15:32:06 crc kubenswrapper[4806]: I1125 15:32:06.119640 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l96zm\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" Nov 25 15:32:06 crc kubenswrapper[4806]: I1125 15:32:06.119694 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfd8q\" (UniqueName: \"kubernetes.io/projected/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-kube-api-access-tfd8q\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l96zm\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" Nov 25 15:32:06 crc kubenswrapper[4806]: I1125 15:32:06.119714 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l96zm\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" Nov 25 15:32:06 crc kubenswrapper[4806]: I1125 15:32:06.119728 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l96zm\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" Nov 25 15:32:06 crc kubenswrapper[4806]: I1125 15:32:06.119793 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l96zm\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" 
Nov 25 15:32:06 crc kubenswrapper[4806]: I1125 15:32:06.119817 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l96zm\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" Nov 25 15:32:06 crc kubenswrapper[4806]: I1125 15:32:06.119849 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l96zm\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" Nov 25 15:32:06 crc kubenswrapper[4806]: I1125 15:32:06.119924 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l96zm\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" Nov 25 15:32:06 crc kubenswrapper[4806]: I1125 15:32:06.221949 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfd8q\" (UniqueName: \"kubernetes.io/projected/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-kube-api-access-tfd8q\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l96zm\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" Nov 25 15:32:06 crc kubenswrapper[4806]: I1125 15:32:06.222028 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l96zm\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" Nov 25 15:32:06 crc kubenswrapper[4806]: I1125 15:32:06.222077 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l96zm\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" Nov 25 15:32:06 crc kubenswrapper[4806]: I1125 15:32:06.222260 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l96zm\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" Nov 25 15:32:06 crc kubenswrapper[4806]: I1125 15:32:06.222373 4806 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l96zm\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" Nov 25 15:32:06 crc kubenswrapper[4806]: I1125 15:32:06.222469 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l96zm\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" Nov 25 15:32:06 crc kubenswrapper[4806]: I1125 15:32:06.222622 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l96zm\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" Nov 25 15:32:06 crc kubenswrapper[4806]: I1125 15:32:06.223589 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l96zm\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" Nov 25 15:32:06 crc kubenswrapper[4806]: I1125 15:32:06.223678 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l96zm\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" Nov 25 15:32:06 crc kubenswrapper[4806]: I1125 15:32:06.223705 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l96zm\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" Nov 25 15:32:06 crc kubenswrapper[4806]: I1125 15:32:06.223752 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l96zm\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" Nov 25 15:32:06 crc kubenswrapper[4806]: I1125 15:32:06.223820 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l96zm\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") 
" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" Nov 25 15:32:06 crc kubenswrapper[4806]: I1125 15:32:06.224230 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l96zm\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" Nov 25 15:32:06 crc kubenswrapper[4806]: I1125 15:32:06.224281 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l96zm\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" Nov 25 15:32:06 crc kubenswrapper[4806]: I1125 15:32:06.228974 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l96zm\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" Nov 25 15:32:06 crc kubenswrapper[4806]: I1125 15:32:06.229478 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l96zm\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" Nov 25 15:32:06 crc kubenswrapper[4806]: I1125 15:32:06.229542 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l96zm\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" Nov 25 15:32:06 crc kubenswrapper[4806]: I1125 15:32:06.230037 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l96zm\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" Nov 25 15:32:06 crc kubenswrapper[4806]: I1125 15:32:06.230333 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l96zm\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" Nov 25 15:32:06 crc kubenswrapper[4806]: I1125 15:32:06.230718 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-neutron-metadata-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-l96zm\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" Nov 25 15:32:06 crc kubenswrapper[4806]: I1125 15:32:06.230813 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l96zm\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" Nov 25 15:32:06 crc kubenswrapper[4806]: I1125 15:32:06.231018 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l96zm\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" Nov 25 15:32:06 crc kubenswrapper[4806]: I1125 15:32:06.231691 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l96zm\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" Nov 25 15:32:06 crc kubenswrapper[4806]: I1125 15:32:06.232544 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l96zm\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" Nov 25 15:32:06 crc kubenswrapper[4806]: I1125 15:32:06.232878 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l96zm\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" Nov 25 15:32:06 crc kubenswrapper[4806]: I1125 15:32:06.233015 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l96zm\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" Nov 25 15:32:06 crc kubenswrapper[4806]: I1125 15:32:06.234061 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l96zm\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" Nov 25 15:32:06 crc kubenswrapper[4806]: I1125 
15:32:06.242358 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfd8q\" (UniqueName: \"kubernetes.io/projected/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-kube-api-access-tfd8q\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-l96zm\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" Nov 25 15:32:06 crc kubenswrapper[4806]: I1125 15:32:06.308271 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" Nov 25 15:32:06 crc kubenswrapper[4806]: I1125 15:32:06.901822 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm"] Nov 25 15:32:06 crc kubenswrapper[4806]: W1125 15:32:06.911621 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4de18d0_1ee6_4e6e_a3c5_5a44b4ee8a0b.slice/crio-5f0f0d36eab553a2f32239170c42f400570fa136937b33942b2aae9acb9e6ec0 WatchSource:0}: Error finding container 5f0f0d36eab553a2f32239170c42f400570fa136937b33942b2aae9acb9e6ec0: Status 404 returned error can't find the container with id 5f0f0d36eab553a2f32239170c42f400570fa136937b33942b2aae9acb9e6ec0 Nov 25 15:32:06 crc kubenswrapper[4806]: I1125 15:32:06.913786 4806 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 15:32:07 crc kubenswrapper[4806]: I1125 15:32:07.895268 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" event={"ID":"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b","Type":"ContainerStarted","Data":"27755a39d915afb1dfdaf6300404679e039d7c7826e558baf488c6d9239bf5fc"} Nov 25 15:32:07 crc kubenswrapper[4806]: I1125 15:32:07.895883 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" event={"ID":"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b","Type":"ContainerStarted","Data":"5f0f0d36eab553a2f32239170c42f400570fa136937b33942b2aae9acb9e6ec0"} Nov 25 15:32:07 crc kubenswrapper[4806]: I1125 15:32:07.929659 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" podStartSLOduration=2.475346763 podStartE2EDuration="2.92963218s" podCreationTimestamp="2025-11-25 15:32:05 +0000 UTC" firstStartedPulling="2025-11-25 15:32:06.913514504 +0000 UTC m=+2359.565656915" lastFinishedPulling="2025-11-25 15:32:07.367799911 +0000 UTC m=+2360.019942332" observedRunningTime="2025-11-25 15:32:07.92192964 +0000 UTC m=+2360.574072061" watchObservedRunningTime="2025-11-25 15:32:07.92963218 +0000 UTC m=+2360.581774601" Nov 25 15:32:18 crc kubenswrapper[4806]: I1125 15:32:18.935104 4806 patch_prober.go:28] interesting pod/machine-config-daemon-kclf8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:32:18 crc kubenswrapper[4806]: I1125 15:32:18.935644 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Nov 25 15:32:48 crc kubenswrapper[4806]: I1125 15:32:48.157767 4806 generic.go:334] "Generic (PLEG): container finished" podID="d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b" containerID="27755a39d915afb1dfdaf6300404679e039d7c7826e558baf488c6d9239bf5fc" exitCode=0 Nov 25 15:32:48 crc kubenswrapper[4806]: I1125 15:32:48.157864 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" event={"ID":"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b","Type":"ContainerDied","Data":"27755a39d915afb1dfdaf6300404679e039d7c7826e558baf488c6d9239bf5fc"} Nov 25 15:32:48 crc kubenswrapper[4806]: I1125 15:32:48.934994 4806 patch_prober.go:28] interesting pod/machine-config-daemon-kclf8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:32:48 crc kubenswrapper[4806]: I1125 15:32:48.935465 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:32:49 crc kubenswrapper[4806]: I1125 15:32:49.697585 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" Nov 25 15:32:49 crc kubenswrapper[4806]: I1125 15:32:49.781117 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " Nov 25 15:32:49 crc kubenswrapper[4806]: I1125 15:32:49.781176 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " Nov 25 15:32:49 crc kubenswrapper[4806]: I1125 15:32:49.781255 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfd8q\" (UniqueName: \"kubernetes.io/projected/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-kube-api-access-tfd8q\") pod \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " Nov 25 15:32:49 crc kubenswrapper[4806]: I1125 15:32:49.781293 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-ovn-combined-ca-bundle\") pod \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " Nov 25 15:32:49 crc kubenswrapper[4806]: I1125 15:32:49.781372 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-telemetry-combined-ca-bundle\") pod \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\" (UID: 
\"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " Nov 25 15:32:49 crc kubenswrapper[4806]: I1125 15:32:49.781408 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " Nov 25 15:32:49 crc kubenswrapper[4806]: I1125 15:32:49.781534 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-openstack-edpm-ipam-ovn-default-certs-0\") pod \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " Nov 25 15:32:49 crc kubenswrapper[4806]: I1125 15:32:49.781576 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-bootstrap-combined-ca-bundle\") pod \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " Nov 25 15:32:49 crc kubenswrapper[4806]: I1125 15:32:49.781610 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-libvirt-combined-ca-bundle\") pod \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " Nov 25 15:32:49 crc kubenswrapper[4806]: I1125 15:32:49.781641 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-neutron-metadata-combined-ca-bundle\") pod \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " Nov 25 15:32:49 crc kubenswrapper[4806]: I1125 15:32:49.781675 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-inventory\") pod \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " Nov 25 15:32:49 crc kubenswrapper[4806]: I1125 15:32:49.781721 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-ssh-key\") pod \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " Nov 25 15:32:49 crc kubenswrapper[4806]: I1125 15:32:49.781826 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-nova-combined-ca-bundle\") pod \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " Nov 25 15:32:49 crc kubenswrapper[4806]: I1125 15:32:49.782792 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-repo-setup-combined-ca-bundle\") pod \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\" (UID: \"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b\") " Nov 25 15:32:49 crc kubenswrapper[4806]: I1125 15:32:49.789390 4806 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b" (UID: "d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:32:49 crc kubenswrapper[4806]: I1125 15:32:49.789416 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b" (UID: "d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:32:49 crc kubenswrapper[4806]: I1125 15:32:49.789753 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b" (UID: "d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:32:49 crc kubenswrapper[4806]: I1125 15:32:49.790525 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-kube-api-access-tfd8q" (OuterVolumeSpecName: "kube-api-access-tfd8q") pod "d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b" (UID: "d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b"). InnerVolumeSpecName "kube-api-access-tfd8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:32:49 crc kubenswrapper[4806]: I1125 15:32:49.791018 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b" (UID: "d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:32:49 crc kubenswrapper[4806]: I1125 15:32:49.791895 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b" (UID: "d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:32:49 crc kubenswrapper[4806]: I1125 15:32:49.794195 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b" (UID: "d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:32:49 crc kubenswrapper[4806]: I1125 15:32:49.796976 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b" (UID: "d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:32:49 crc kubenswrapper[4806]: I1125 15:32:49.797068 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b" (UID: "d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:32:49 crc kubenswrapper[4806]: I1125 15:32:49.809778 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b" (UID: "d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:32:49 crc kubenswrapper[4806]: I1125 15:32:49.809807 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b" (UID: "d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:32:49 crc kubenswrapper[4806]: I1125 15:32:49.809863 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b" (UID: "d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:32:49 crc kubenswrapper[4806]: I1125 15:32:49.823635 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b" (UID: "d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:32:49 crc kubenswrapper[4806]: I1125 15:32:49.826102 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-inventory" (OuterVolumeSpecName: "inventory") pod "d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b" (UID: "d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:32:49 crc kubenswrapper[4806]: I1125 15:32:49.887728 4806 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 15:32:49 crc kubenswrapper[4806]: I1125 15:32:49.887771 4806 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:32:49 crc kubenswrapper[4806]: I1125 15:32:49.887788 4806 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:32:49 crc kubenswrapper[4806]: I1125 15:32:49.887803 4806 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:32:49 crc kubenswrapper[4806]: I1125 15:32:49.887820 4806 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:32:49 crc kubenswrapper[4806]: I1125 15:32:49.887834 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfd8q\" (UniqueName: \"kubernetes.io/projected/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-kube-api-access-tfd8q\") on node \"crc\" DevicePath \"\"" Nov 25 15:32:49 crc kubenswrapper[4806]: I1125 15:32:49.887847 4806 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:32:49 crc kubenswrapper[4806]: I1125 15:32:49.887858 4806 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:32:49 crc kubenswrapper[4806]: I1125 15:32:49.887870 4806 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:32:49 crc kubenswrapper[4806]: I1125 15:32:49.887888 4806 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:32:49 crc kubenswrapper[4806]: I1125 15:32:49.887900 4806 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:32:49 crc kubenswrapper[4806]: I1125 15:32:49.887912 4806 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:32:49 crc kubenswrapper[4806]: I1125 15:32:49.887927 4806 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:32:49 crc kubenswrapper[4806]: I1125 15:32:49.887939 4806 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 15:32:50 crc kubenswrapper[4806]: I1125 15:32:50.177787 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" event={"ID":"d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b","Type":"ContainerDied","Data":"5f0f0d36eab553a2f32239170c42f400570fa136937b33942b2aae9acb9e6ec0"} Nov 25 15:32:50 crc kubenswrapper[4806]: I1125 15:32:50.177825 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f0f0d36eab553a2f32239170c42f400570fa136937b33942b2aae9acb9e6ec0" Nov 25 15:32:50 crc kubenswrapper[4806]: I1125 15:32:50.177880 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-l96zm" Nov 25 15:32:50 crc kubenswrapper[4806]: I1125 15:32:50.309409 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjxhn"] Nov 25 15:32:50 crc kubenswrapper[4806]: E1125 15:32:50.310011 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 25 15:32:50 crc kubenswrapper[4806]: I1125 15:32:50.310044 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 25 15:32:50 crc kubenswrapper[4806]: I1125 15:32:50.310407 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 25 15:32:50 crc kubenswrapper[4806]: I1125 15:32:50.311411 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjxhn" Nov 25 15:32:50 crc kubenswrapper[4806]: I1125 15:32:50.314944 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 15:32:50 crc kubenswrapper[4806]: I1125 15:32:50.315032 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r8q8k" Nov 25 15:32:50 crc kubenswrapper[4806]: I1125 15:32:50.315174 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Nov 25 15:32:50 crc kubenswrapper[4806]: I1125 15:32:50.315476 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 15:32:50 crc kubenswrapper[4806]: I1125 15:32:50.317381 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 15:32:50 crc kubenswrapper[4806]: I1125 15:32:50.324814 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjxhn"] Nov 25 15:32:50 crc kubenswrapper[4806]: I1125 15:32:50.401509 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/69414d23-6d19-459c-8930-73ad33dd73e5-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjxhn\" (UID: \"69414d23-6d19-459c-8930-73ad33dd73e5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjxhn" Nov 25 15:32:50 crc kubenswrapper[4806]: I1125 15:32:50.401853 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/69414d23-6d19-459c-8930-73ad33dd73e5-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjxhn\" (UID: \"69414d23-6d19-459c-8930-73ad33dd73e5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjxhn" Nov 25 15:32:50 crc kubenswrapper[4806]: I1125 15:32:50.401895 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/69414d23-6d19-459c-8930-73ad33dd73e5-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjxhn\" (UID: \"69414d23-6d19-459c-8930-73ad33dd73e5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjxhn" Nov 25 15:32:50 crc kubenswrapper[4806]: I1125 15:32:50.401922 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw6p6\" (UniqueName: \"kubernetes.io/projected/69414d23-6d19-459c-8930-73ad33dd73e5-kube-api-access-nw6p6\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjxhn\" (UID: \"69414d23-6d19-459c-8930-73ad33dd73e5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjxhn" Nov 25 15:32:50 crc kubenswrapper[4806]: I1125 15:32:50.401946 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69414d23-6d19-459c-8930-73ad33dd73e5-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjxhn\" (UID: \"69414d23-6d19-459c-8930-73ad33dd73e5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjxhn" Nov 25 15:32:50 crc kubenswrapper[4806]: I1125 15:32:50.504188 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/69414d23-6d19-459c-8930-73ad33dd73e5-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjxhn\" (UID: \"69414d23-6d19-459c-8930-73ad33dd73e5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjxhn" Nov 25 15:32:50 crc kubenswrapper[4806]: I1125 15:32:50.504352 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/69414d23-6d19-459c-8930-73ad33dd73e5-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjxhn\" (UID: \"69414d23-6d19-459c-8930-73ad33dd73e5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjxhn" Nov 25 15:32:50 crc kubenswrapper[4806]: I1125 15:32:50.504424 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/69414d23-6d19-459c-8930-73ad33dd73e5-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjxhn\" (UID: \"69414d23-6d19-459c-8930-73ad33dd73e5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjxhn" Nov 25 15:32:50 crc kubenswrapper[4806]: I1125 15:32:50.504476 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nw6p6\" (UniqueName: \"kubernetes.io/projected/69414d23-6d19-459c-8930-73ad33dd73e5-kube-api-access-nw6p6\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjxhn\" (UID: \"69414d23-6d19-459c-8930-73ad33dd73e5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjxhn" Nov 25 15:32:50 crc kubenswrapper[4806]: I1125 15:32:50.504514 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69414d23-6d19-459c-8930-73ad33dd73e5-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjxhn\" (UID: \"69414d23-6d19-459c-8930-73ad33dd73e5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjxhn" Nov 25 15:32:50 crc kubenswrapper[4806]: I1125 15:32:50.506303 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/69414d23-6d19-459c-8930-73ad33dd73e5-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjxhn\" (UID: \"69414d23-6d19-459c-8930-73ad33dd73e5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjxhn" Nov 25 15:32:50 crc kubenswrapper[4806]: I1125 15:32:50.508887 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/69414d23-6d19-459c-8930-73ad33dd73e5-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjxhn\" (UID: \"69414d23-6d19-459c-8930-73ad33dd73e5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjxhn" Nov 25 15:32:50 crc kubenswrapper[4806]: I1125 15:32:50.509055 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69414d23-6d19-459c-8930-73ad33dd73e5-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjxhn\" (UID: \"69414d23-6d19-459c-8930-73ad33dd73e5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjxhn" Nov 25 15:32:50 crc kubenswrapper[4806]: I1125 15:32:50.509754 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/69414d23-6d19-459c-8930-73ad33dd73e5-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjxhn\" (UID: \"69414d23-6d19-459c-8930-73ad33dd73e5\") 
" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjxhn" Nov 25 15:32:50 crc kubenswrapper[4806]: I1125 15:32:50.533622 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nw6p6\" (UniqueName: \"kubernetes.io/projected/69414d23-6d19-459c-8930-73ad33dd73e5-kube-api-access-nw6p6\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-xjxhn\" (UID: \"69414d23-6d19-459c-8930-73ad33dd73e5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjxhn" Nov 25 15:32:50 crc kubenswrapper[4806]: I1125 15:32:50.635662 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjxhn" Nov 25 15:32:51 crc kubenswrapper[4806]: I1125 15:32:51.233271 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjxhn"] Nov 25 15:32:51 crc kubenswrapper[4806]: W1125 15:32:51.238000 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69414d23_6d19_459c_8930_73ad33dd73e5.slice/crio-5d9bfc5df6e4e55cd72427c2294845b8fa26ae7cdb59975d487f9397c10c6850 WatchSource:0}: Error finding container 5d9bfc5df6e4e55cd72427c2294845b8fa26ae7cdb59975d487f9397c10c6850: Status 404 returned error can't find the container with id 5d9bfc5df6e4e55cd72427c2294845b8fa26ae7cdb59975d487f9397c10c6850 Nov 25 15:32:52 crc kubenswrapper[4806]: I1125 15:32:52.204750 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjxhn" event={"ID":"69414d23-6d19-459c-8930-73ad33dd73e5","Type":"ContainerStarted","Data":"391e8665574caa147b8f1f18e5e1ea0ae2e64351d6230d4d466cc33accb5dfba"} Nov 25 15:32:52 crc kubenswrapper[4806]: I1125 15:32:52.204825 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjxhn" event={"ID":"69414d23-6d19-459c-8930-73ad33dd73e5","Type":"ContainerStarted","Data":"5d9bfc5df6e4e55cd72427c2294845b8fa26ae7cdb59975d487f9397c10c6850"} Nov 25 15:32:52 crc kubenswrapper[4806]: I1125 15:32:52.228694 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjxhn" podStartSLOduration=1.776965971 podStartE2EDuration="2.228670784s" podCreationTimestamp="2025-11-25 15:32:50 +0000 UTC" firstStartedPulling="2025-11-25 15:32:51.241915569 +0000 UTC m=+2403.894057990" lastFinishedPulling="2025-11-25 15:32:51.693620392 +0000 UTC m=+2404.345762803" observedRunningTime="2025-11-25 15:32:52.223756583 +0000 UTC m=+2404.875899014" watchObservedRunningTime="2025-11-25 15:32:52.228670784 +0000 UTC m=+2404.880813205" Nov 25 15:32:52 crc kubenswrapper[4806]: E1125 15:32:52.654251 4806 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4de18d0_1ee6_4e6e_a3c5_5a44b4ee8a0b.slice/crio-27755a39d915afb1dfdaf6300404679e039d7c7826e558baf488c6d9239bf5fc.scope\": RecentStats: unable to find data in memory cache]" Nov 25 15:33:02 crc kubenswrapper[4806]: E1125 15:33:02.925899 4806 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4de18d0_1ee6_4e6e_a3c5_5a44b4ee8a0b.slice/crio-27755a39d915afb1dfdaf6300404679e039d7c7826e558baf488c6d9239bf5fc.scope\": RecentStats: unable to find data in memory cache]" 
Nov 25 15:33:13 crc kubenswrapper[4806]: E1125 15:33:13.193219 4806 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4de18d0_1ee6_4e6e_a3c5_5a44b4ee8a0b.slice/crio-27755a39d915afb1dfdaf6300404679e039d7c7826e558baf488c6d9239bf5fc.scope\": RecentStats: unable to find data in memory cache]" Nov 25 15:33:18 crc kubenswrapper[4806]: I1125 15:33:18.934676 4806 patch_prober.go:28] interesting pod/machine-config-daemon-kclf8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:33:18 crc kubenswrapper[4806]: I1125 15:33:18.935398 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:33:18 crc kubenswrapper[4806]: I1125 15:33:18.935490 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" Nov 25 15:33:18 crc kubenswrapper[4806]: I1125 15:33:18.936620 4806 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"20ed65ea27bdbc3843bf7c80ddc4dc5177e737e42cad142718c0a7ddba113d44"} pod="openshift-machine-config-operator/machine-config-daemon-kclf8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 15:33:18 crc kubenswrapper[4806]: I1125 15:33:18.936722 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" containerID="cri-o://20ed65ea27bdbc3843bf7c80ddc4dc5177e737e42cad142718c0a7ddba113d44" gracePeriod=600 Nov 25 15:33:19 crc kubenswrapper[4806]: E1125 15:33:19.087047 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:33:19 crc kubenswrapper[4806]: I1125 15:33:19.516904 4806 generic.go:334] "Generic (PLEG): container finished" podID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerID="20ed65ea27bdbc3843bf7c80ddc4dc5177e737e42cad142718c0a7ddba113d44" exitCode=0 Nov 25 15:33:19 crc kubenswrapper[4806]: I1125 15:33:19.516962 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" event={"ID":"39baff20-1e9a-48b1-8872-155c5ad5931d","Type":"ContainerDied","Data":"20ed65ea27bdbc3843bf7c80ddc4dc5177e737e42cad142718c0a7ddba113d44"} Nov 25 15:33:19 crc kubenswrapper[4806]: I1125 15:33:19.517262 4806 scope.go:117] "RemoveContainer" containerID="1315d833b7ecfd3e5832ff41afdffceaf3dbae9c2727fcd8a0fb442fcbda555a" Nov 25 15:33:19 crc kubenswrapper[4806]: I1125 15:33:19.518144 4806 scope.go:117] 
"RemoveContainer" containerID="20ed65ea27bdbc3843bf7c80ddc4dc5177e737e42cad142718c0a7ddba113d44" Nov 25 15:33:19 crc kubenswrapper[4806]: E1125 15:33:19.518398 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:33:23 crc kubenswrapper[4806]: E1125 15:33:23.466447 4806 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4de18d0_1ee6_4e6e_a3c5_5a44b4ee8a0b.slice/crio-27755a39d915afb1dfdaf6300404679e039d7c7826e558baf488c6d9239bf5fc.scope\": RecentStats: unable to find data in memory cache]" Nov 25 15:33:31 crc kubenswrapper[4806]: I1125 15:33:31.090271 4806 scope.go:117] "RemoveContainer" containerID="20ed65ea27bdbc3843bf7c80ddc4dc5177e737e42cad142718c0a7ddba113d44" Nov 25 15:33:31 crc kubenswrapper[4806]: E1125 15:33:31.091491 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:33:33 crc kubenswrapper[4806]: E1125 15:33:33.751955 4806 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4de18d0_1ee6_4e6e_a3c5_5a44b4ee8a0b.slice/crio-27755a39d915afb1dfdaf6300404679e039d7c7826e558baf488c6d9239bf5fc.scope\": RecentStats: unable to find data in memory cache]" Nov 25 15:33:43 crc kubenswrapper[4806]: I1125 15:33:43.090206 4806 scope.go:117] "RemoveContainer" containerID="20ed65ea27bdbc3843bf7c80ddc4dc5177e737e42cad142718c0a7ddba113d44" Nov 25 15:33:43 crc kubenswrapper[4806]: E1125 15:33:43.090850 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:33:44 crc kubenswrapper[4806]: E1125 15:33:44.102568 4806 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4de18d0_1ee6_4e6e_a3c5_5a44b4ee8a0b.slice/crio-27755a39d915afb1dfdaf6300404679e039d7c7826e558baf488c6d9239bf5fc.scope\": RecentStats: unable to find data in memory cache]" Nov 25 15:33:54 crc kubenswrapper[4806]: I1125 15:33:54.093633 4806 scope.go:117] "RemoveContainer" containerID="20ed65ea27bdbc3843bf7c80ddc4dc5177e737e42cad142718c0a7ddba113d44" Nov 25 15:33:54 crc kubenswrapper[4806]: E1125 15:33:54.094966 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:33:57 crc kubenswrapper[4806]: I1125 15:33:57.000495 4806 generic.go:334] "Generic (PLEG): container finished" podID="69414d23-6d19-459c-8930-73ad33dd73e5" containerID="391e8665574caa147b8f1f18e5e1ea0ae2e64351d6230d4d466cc33accb5dfba" exitCode=0 Nov 25 15:33:57 crc kubenswrapper[4806]: I1125 15:33:57.000568 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjxhn" event={"ID":"69414d23-6d19-459c-8930-73ad33dd73e5","Type":"ContainerDied","Data":"391e8665574caa147b8f1f18e5e1ea0ae2e64351d6230d4d466cc33accb5dfba"} Nov 25 15:33:58 crc kubenswrapper[4806]: I1125 15:33:58.543295 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjxhn" Nov 25 15:33:58 crc kubenswrapper[4806]: I1125 15:33:58.636374 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/69414d23-6d19-459c-8930-73ad33dd73e5-ovncontroller-config-0\") pod \"69414d23-6d19-459c-8930-73ad33dd73e5\" (UID: \"69414d23-6d19-459c-8930-73ad33dd73e5\") " Nov 25 15:33:58 crc kubenswrapper[4806]: I1125 15:33:58.636738 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/69414d23-6d19-459c-8930-73ad33dd73e5-inventory\") pod \"69414d23-6d19-459c-8930-73ad33dd73e5\" (UID: \"69414d23-6d19-459c-8930-73ad33dd73e5\") " Nov 25 15:33:58 crc kubenswrapper[4806]: I1125 15:33:58.636799 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/69414d23-6d19-459c-8930-73ad33dd73e5-ssh-key\") pod \"69414d23-6d19-459c-8930-73ad33dd73e5\" (UID: \"69414d23-6d19-459c-8930-73ad33dd73e5\") " Nov 25 15:33:58 crc kubenswrapper[4806]: I1125 15:33:58.636851 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nw6p6\" (UniqueName: \"kubernetes.io/projected/69414d23-6d19-459c-8930-73ad33dd73e5-kube-api-access-nw6p6\") pod \"69414d23-6d19-459c-8930-73ad33dd73e5\" (UID: \"69414d23-6d19-459c-8930-73ad33dd73e5\") " Nov 25 15:33:58 crc kubenswrapper[4806]: I1125 15:33:58.636923 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69414d23-6d19-459c-8930-73ad33dd73e5-ovn-combined-ca-bundle\") pod \"69414d23-6d19-459c-8930-73ad33dd73e5\" (UID: \"69414d23-6d19-459c-8930-73ad33dd73e5\") " Nov 25 15:33:58 crc kubenswrapper[4806]: I1125 15:33:58.642813 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69414d23-6d19-459c-8930-73ad33dd73e5-kube-api-access-nw6p6" (OuterVolumeSpecName: "kube-api-access-nw6p6") pod "69414d23-6d19-459c-8930-73ad33dd73e5" (UID: "69414d23-6d19-459c-8930-73ad33dd73e5"). InnerVolumeSpecName "kube-api-access-nw6p6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:33:58 crc kubenswrapper[4806]: I1125 15:33:58.643456 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69414d23-6d19-459c-8930-73ad33dd73e5-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "69414d23-6d19-459c-8930-73ad33dd73e5" (UID: "69414d23-6d19-459c-8930-73ad33dd73e5"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:33:58 crc kubenswrapper[4806]: I1125 15:33:58.676582 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69414d23-6d19-459c-8930-73ad33dd73e5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "69414d23-6d19-459c-8930-73ad33dd73e5" (UID: "69414d23-6d19-459c-8930-73ad33dd73e5"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:33:58 crc kubenswrapper[4806]: I1125 15:33:58.682620 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69414d23-6d19-459c-8930-73ad33dd73e5-inventory" (OuterVolumeSpecName: "inventory") pod "69414d23-6d19-459c-8930-73ad33dd73e5" (UID: "69414d23-6d19-459c-8930-73ad33dd73e5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:33:58 crc kubenswrapper[4806]: I1125 15:33:58.689480 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69414d23-6d19-459c-8930-73ad33dd73e5-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "69414d23-6d19-459c-8930-73ad33dd73e5" (UID: "69414d23-6d19-459c-8930-73ad33dd73e5"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:33:58 crc kubenswrapper[4806]: I1125 15:33:58.740108 4806 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/69414d23-6d19-459c-8930-73ad33dd73e5-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:33:58 crc kubenswrapper[4806]: I1125 15:33:58.740168 4806 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/69414d23-6d19-459c-8930-73ad33dd73e5-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 15:33:58 crc kubenswrapper[4806]: I1125 15:33:58.740184 4806 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/69414d23-6d19-459c-8930-73ad33dd73e5-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 15:33:58 crc kubenswrapper[4806]: I1125 15:33:58.740203 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nw6p6\" (UniqueName: \"kubernetes.io/projected/69414d23-6d19-459c-8930-73ad33dd73e5-kube-api-access-nw6p6\") on node \"crc\" DevicePath \"\"" Nov 25 15:33:58 crc kubenswrapper[4806]: I1125 15:33:58.740227 4806 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69414d23-6d19-459c-8930-73ad33dd73e5-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:33:59 crc kubenswrapper[4806]: I1125 15:33:59.072085 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjxhn" event={"ID":"69414d23-6d19-459c-8930-73ad33dd73e5","Type":"ContainerDied","Data":"5d9bfc5df6e4e55cd72427c2294845b8fa26ae7cdb59975d487f9397c10c6850"} Nov 25 15:33:59 crc kubenswrapper[4806]: I1125 
Nov 25 15:33:59 crc kubenswrapper[4806]: I1125 15:33:59.072128 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d9bfc5df6e4e55cd72427c2294845b8fa26ae7cdb59975d487f9397c10c6850"
Nov 25 15:33:59 crc kubenswrapper[4806]: I1125 15:33:59.072140 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-xjxhn"
Nov 25 15:33:59 crc kubenswrapper[4806]: I1125 15:33:59.195847 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s"]
Nov 25 15:33:59 crc kubenswrapper[4806]: E1125 15:33:59.196653 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69414d23-6d19-459c-8930-73ad33dd73e5" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Nov 25 15:33:59 crc kubenswrapper[4806]: I1125 15:33:59.196757 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="69414d23-6d19-459c-8930-73ad33dd73e5" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Nov 25 15:33:59 crc kubenswrapper[4806]: I1125 15:33:59.197151 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="69414d23-6d19-459c-8930-73ad33dd73e5" containerName="ovn-edpm-deployment-openstack-edpm-ipam"
Nov 25 15:33:59 crc kubenswrapper[4806]: I1125 15:33:59.198268 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s"
Nov 25 15:33:59 crc kubenswrapper[4806]: I1125 15:33:59.201095 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config"
Nov 25 15:33:59 crc kubenswrapper[4806]: I1125 15:33:59.201395 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config"
Nov 25 15:33:59 crc kubenswrapper[4806]: I1125 15:33:59.202360 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 25 15:33:59 crc kubenswrapper[4806]: I1125 15:33:59.202754 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 25 15:33:59 crc kubenswrapper[4806]: I1125 15:33:59.202846 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 25 15:33:59 crc kubenswrapper[4806]: I1125 15:33:59.203273 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r8q8k"
Nov 25 15:33:59 crc kubenswrapper[4806]: I1125 15:33:59.209961 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s"]
Nov 25 15:33:59 crc kubenswrapper[4806]: I1125 15:33:59.361434 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5b01cee4-68ad-4117-9841-8dea2142524a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s\" (UID: \"5b01cee4-68ad-4117-9841-8dea2142524a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s"
Nov 25 15:33:59 crc kubenswrapper[4806]: I1125 15:33:59.361509 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5b01cee4-68ad-4117-9841-8dea2142524a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s\" (UID: \"5b01cee4-68ad-4117-9841-8dea2142524a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s"
Nov 25 15:33:59 crc kubenswrapper[4806]: I1125 15:33:59.361543 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5b01cee4-68ad-4117-9841-8dea2142524a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s\" (UID: \"5b01cee4-68ad-4117-9841-8dea2142524a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s"
Nov 25 15:33:59 crc kubenswrapper[4806]: I1125 15:33:59.361599 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b01cee4-68ad-4117-9841-8dea2142524a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s\" (UID: \"5b01cee4-68ad-4117-9841-8dea2142524a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s"
Nov 25 15:33:59 crc kubenswrapper[4806]: I1125 15:33:59.361626 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwhf4\" (UniqueName: \"kubernetes.io/projected/5b01cee4-68ad-4117-9841-8dea2142524a-kube-api-access-kwhf4\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s\" (UID: \"5b01cee4-68ad-4117-9841-8dea2142524a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s"
Nov 25 15:33:59 crc kubenswrapper[4806]: I1125 15:33:59.361666 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5b01cee4-68ad-4117-9841-8dea2142524a-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s\" (UID: \"5b01cee4-68ad-4117-9841-8dea2142524a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s"
Nov 25 15:33:59 crc kubenswrapper[4806]: I1125 15:33:59.463610 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5b01cee4-68ad-4117-9841-8dea2142524a-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s\" (UID: \"5b01cee4-68ad-4117-9841-8dea2142524a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s"
Nov 25 15:33:59 crc kubenswrapper[4806]: I1125 15:33:59.464165 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5b01cee4-68ad-4117-9841-8dea2142524a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s\" (UID: \"5b01cee4-68ad-4117-9841-8dea2142524a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s"
Nov 25 15:33:59 crc kubenswrapper[4806]: I1125 15:33:59.464357 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5b01cee4-68ad-4117-9841-8dea2142524a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s\" (UID: \"5b01cee4-68ad-4117-9841-8dea2142524a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s"
Nov 25 15:33:59 crc kubenswrapper[4806]: I1125 15:33:59.464499 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5b01cee4-68ad-4117-9841-8dea2142524a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s\" (UID: \"5b01cee4-68ad-4117-9841-8dea2142524a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s"
Nov 25 15:33:59 crc kubenswrapper[4806]: I1125 15:33:59.464632 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b01cee4-68ad-4117-9841-8dea2142524a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s\" (UID: \"5b01cee4-68ad-4117-9841-8dea2142524a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s"
Nov 25 15:33:59 crc kubenswrapper[4806]: I1125 15:33:59.464757 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwhf4\" (UniqueName: \"kubernetes.io/projected/5b01cee4-68ad-4117-9841-8dea2142524a-kube-api-access-kwhf4\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s\" (UID: \"5b01cee4-68ad-4117-9841-8dea2142524a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s"
Nov 25 15:33:59 crc kubenswrapper[4806]: I1125 15:33:59.470932 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5b01cee4-68ad-4117-9841-8dea2142524a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s\" (UID: \"5b01cee4-68ad-4117-9841-8dea2142524a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s"
Nov 25 15:33:59 crc kubenswrapper[4806]: I1125 15:33:59.470962 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5b01cee4-68ad-4117-9841-8dea2142524a-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s\" (UID: \"5b01cee4-68ad-4117-9841-8dea2142524a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s"
Nov 25 15:33:59 crc kubenswrapper[4806]: I1125 15:33:59.471175 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b01cee4-68ad-4117-9841-8dea2142524a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s\" (UID: \"5b01cee4-68ad-4117-9841-8dea2142524a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s"
Nov 25 15:33:59 crc kubenswrapper[4806]: I1125 15:33:59.472283 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5b01cee4-68ad-4117-9841-8dea2142524a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s\" (UID: \"5b01cee4-68ad-4117-9841-8dea2142524a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s"
Nov 25 15:33:59 crc kubenswrapper[4806]: I1125 15:33:59.473748 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5b01cee4-68ad-4117-9841-8dea2142524a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s\" (UID: \"5b01cee4-68ad-4117-9841-8dea2142524a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s"
Nov 25 15:33:59 crc kubenswrapper[4806]: I1125 15:33:59.493800 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwhf4\" (UniqueName: \"kubernetes.io/projected/5b01cee4-68ad-4117-9841-8dea2142524a-kube-api-access-kwhf4\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s\" (UID: \"5b01cee4-68ad-4117-9841-8dea2142524a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s"
Nov 25 15:33:59 crc kubenswrapper[4806]: I1125 15:33:59.518885 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s"
Nov 25 15:34:00 crc kubenswrapper[4806]: I1125 15:34:00.115725 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s"]
Nov 25 15:34:01 crc kubenswrapper[4806]: I1125 15:34:01.126243 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s" event={"ID":"5b01cee4-68ad-4117-9841-8dea2142524a","Type":"ContainerStarted","Data":"394c481dde71bfa116994b27ae1a4c8055cbd8ca371e501c67ac79a74f363e30"}
Nov 25 15:34:01 crc kubenswrapper[4806]: I1125 15:34:01.126513 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s" event={"ID":"5b01cee4-68ad-4117-9841-8dea2142524a","Type":"ContainerStarted","Data":"111667474ec620bcdd4193f57f8a5823e2a767283c33e4f14275a529095fa8a4"}
Nov 25 15:34:01 crc kubenswrapper[4806]: I1125 15:34:01.165960 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s" podStartSLOduration=1.567369969 podStartE2EDuration="2.165928892s" podCreationTimestamp="2025-11-25 15:33:59 +0000 UTC" firstStartedPulling="2025-11-25 15:34:00.111407045 +0000 UTC m=+2472.763549456" lastFinishedPulling="2025-11-25 15:34:00.709965978 +0000 UTC m=+2473.362108379" observedRunningTime="2025-11-25 15:34:01.151606317 +0000 UTC m=+2473.803748768" watchObservedRunningTime="2025-11-25 15:34:01.165928892 +0000 UTC m=+2473.818071343"
Nov 25 15:34:06 crc kubenswrapper[4806]: I1125 15:34:06.089655 4806 scope.go:117] "RemoveContainer" containerID="20ed65ea27bdbc3843bf7c80ddc4dc5177e737e42cad142718c0a7ddba113d44"
Nov 25 15:34:06 crc kubenswrapper[4806]: E1125 15:34:06.090431 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d"
Nov 25 15:34:20 crc kubenswrapper[4806]: I1125 15:34:20.091726 4806 scope.go:117] "RemoveContainer" containerID="20ed65ea27bdbc3843bf7c80ddc4dc5177e737e42cad142718c0a7ddba113d44"
Nov 25 15:34:20 crc kubenswrapper[4806]: E1125 15:34:20.092703 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d"
Nov 25 15:34:33 crc kubenswrapper[4806]: I1125 15:34:33.089808 4806 scope.go:117] "RemoveContainer" containerID="20ed65ea27bdbc3843bf7c80ddc4dc5177e737e42cad142718c0a7ddba113d44"
Nov 25 15:34:33 crc kubenswrapper[4806]: E1125 15:34:33.090838 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d"
Nov 25 15:34:45 crc kubenswrapper[4806]: I1125 15:34:45.090139 4806 scope.go:117] "RemoveContainer" containerID="20ed65ea27bdbc3843bf7c80ddc4dc5177e737e42cad142718c0a7ddba113d44"
Nov 25 15:34:45 crc kubenswrapper[4806]: E1125 15:34:45.091255 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d"
Nov 25 15:34:51 crc kubenswrapper[4806]: I1125 15:34:51.666952 4806 generic.go:334] "Generic (PLEG): container finished" podID="5b01cee4-68ad-4117-9841-8dea2142524a" containerID="394c481dde71bfa116994b27ae1a4c8055cbd8ca371e501c67ac79a74f363e30" exitCode=0
Nov 25 15:34:51 crc kubenswrapper[4806]: I1125 15:34:51.667205 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s" event={"ID":"5b01cee4-68ad-4117-9841-8dea2142524a","Type":"ContainerDied","Data":"394c481dde71bfa116994b27ae1a4c8055cbd8ca371e501c67ac79a74f363e30"}
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s" Nov 25 15:34:53 crc kubenswrapper[4806]: I1125 15:34:53.413720 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5b01cee4-68ad-4117-9841-8dea2142524a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"5b01cee4-68ad-4117-9841-8dea2142524a\" (UID: \"5b01cee4-68ad-4117-9841-8dea2142524a\") " Nov 25 15:34:53 crc kubenswrapper[4806]: I1125 15:34:53.414007 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b01cee4-68ad-4117-9841-8dea2142524a-neutron-metadata-combined-ca-bundle\") pod \"5b01cee4-68ad-4117-9841-8dea2142524a\" (UID: \"5b01cee4-68ad-4117-9841-8dea2142524a\") " Nov 25 15:34:53 crc kubenswrapper[4806]: I1125 15:34:53.414145 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5b01cee4-68ad-4117-9841-8dea2142524a-ssh-key\") pod \"5b01cee4-68ad-4117-9841-8dea2142524a\" (UID: \"5b01cee4-68ad-4117-9841-8dea2142524a\") " Nov 25 15:34:53 crc kubenswrapper[4806]: I1125 15:34:53.414205 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwhf4\" (UniqueName: \"kubernetes.io/projected/5b01cee4-68ad-4117-9841-8dea2142524a-kube-api-access-kwhf4\") pod \"5b01cee4-68ad-4117-9841-8dea2142524a\" (UID: \"5b01cee4-68ad-4117-9841-8dea2142524a\") " Nov 25 15:34:53 crc kubenswrapper[4806]: I1125 15:34:53.414323 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5b01cee4-68ad-4117-9841-8dea2142524a-nova-metadata-neutron-config-0\") pod \"5b01cee4-68ad-4117-9841-8dea2142524a\" (UID: \"5b01cee4-68ad-4117-9841-8dea2142524a\") " Nov 25 15:34:53 crc kubenswrapper[4806]: I1125 15:34:53.414402 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5b01cee4-68ad-4117-9841-8dea2142524a-inventory\") pod \"5b01cee4-68ad-4117-9841-8dea2142524a\" (UID: \"5b01cee4-68ad-4117-9841-8dea2142524a\") " Nov 25 15:34:53 crc kubenswrapper[4806]: I1125 15:34:53.420502 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b01cee4-68ad-4117-9841-8dea2142524a-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "5b01cee4-68ad-4117-9841-8dea2142524a" (UID: "5b01cee4-68ad-4117-9841-8dea2142524a"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:34:53 crc kubenswrapper[4806]: I1125 15:34:53.421259 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b01cee4-68ad-4117-9841-8dea2142524a-kube-api-access-kwhf4" (OuterVolumeSpecName: "kube-api-access-kwhf4") pod "5b01cee4-68ad-4117-9841-8dea2142524a" (UID: "5b01cee4-68ad-4117-9841-8dea2142524a"). InnerVolumeSpecName "kube-api-access-kwhf4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:34:53 crc kubenswrapper[4806]: I1125 15:34:53.443359 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b01cee4-68ad-4117-9841-8dea2142524a-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "5b01cee4-68ad-4117-9841-8dea2142524a" (UID: "5b01cee4-68ad-4117-9841-8dea2142524a"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:34:53 crc kubenswrapper[4806]: I1125 15:34:53.445734 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b01cee4-68ad-4117-9841-8dea2142524a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5b01cee4-68ad-4117-9841-8dea2142524a" (UID: "5b01cee4-68ad-4117-9841-8dea2142524a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:34:53 crc kubenswrapper[4806]: I1125 15:34:53.446217 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b01cee4-68ad-4117-9841-8dea2142524a-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "5b01cee4-68ad-4117-9841-8dea2142524a" (UID: "5b01cee4-68ad-4117-9841-8dea2142524a"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:34:53 crc kubenswrapper[4806]: I1125 15:34:53.460065 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b01cee4-68ad-4117-9841-8dea2142524a-inventory" (OuterVolumeSpecName: "inventory") pod "5b01cee4-68ad-4117-9841-8dea2142524a" (UID: "5b01cee4-68ad-4117-9841-8dea2142524a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:34:53 crc kubenswrapper[4806]: I1125 15:34:53.517176 4806 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5b01cee4-68ad-4117-9841-8dea2142524a-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 15:34:53 crc kubenswrapper[4806]: I1125 15:34:53.517234 4806 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5b01cee4-68ad-4117-9841-8dea2142524a-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:34:53 crc kubenswrapper[4806]: I1125 15:34:53.517253 4806 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b01cee4-68ad-4117-9841-8dea2142524a-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:34:53 crc kubenswrapper[4806]: I1125 15:34:53.517268 4806 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5b01cee4-68ad-4117-9841-8dea2142524a-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 15:34:53 crc kubenswrapper[4806]: I1125 15:34:53.517278 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwhf4\" (UniqueName: \"kubernetes.io/projected/5b01cee4-68ad-4117-9841-8dea2142524a-kube-api-access-kwhf4\") on node \"crc\" DevicePath \"\"" Nov 25 15:34:53 crc kubenswrapper[4806]: I1125 15:34:53.517287 4806 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5b01cee4-68ad-4117-9841-8dea2142524a-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:34:53 crc kubenswrapper[4806]: I1125 15:34:53.693031 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s" event={"ID":"5b01cee4-68ad-4117-9841-8dea2142524a","Type":"ContainerDied","Data":"111667474ec620bcdd4193f57f8a5823e2a767283c33e4f14275a529095fa8a4"} Nov 25 15:34:53 crc kubenswrapper[4806]: I1125 15:34:53.693072 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s" Nov 25 15:34:53 crc kubenswrapper[4806]: I1125 15:34:53.693093 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="111667474ec620bcdd4193f57f8a5823e2a767283c33e4f14275a529095fa8a4" Nov 25 15:34:53 crc kubenswrapper[4806]: I1125 15:34:53.787210 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gdntk"] Nov 25 15:34:53 crc kubenswrapper[4806]: E1125 15:34:53.787704 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b01cee4-68ad-4117-9841-8dea2142524a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 25 15:34:53 crc kubenswrapper[4806]: I1125 15:34:53.787725 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b01cee4-68ad-4117-9841-8dea2142524a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 25 15:34:53 crc kubenswrapper[4806]: I1125 15:34:53.787936 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b01cee4-68ad-4117-9841-8dea2142524a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 25 15:34:53 crc kubenswrapper[4806]: I1125 15:34:53.788709 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gdntk" Nov 25 15:34:53 crc kubenswrapper[4806]: I1125 15:34:53.790700 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Nov 25 15:34:53 crc kubenswrapper[4806]: I1125 15:34:53.796630 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 15:34:53 crc kubenswrapper[4806]: I1125 15:34:53.796741 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 15:34:53 crc kubenswrapper[4806]: I1125 15:34:53.797052 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 15:34:53 crc kubenswrapper[4806]: I1125 15:34:53.797188 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r8q8k" Nov 25 15:34:53 crc kubenswrapper[4806]: I1125 15:34:53.806930 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gdntk"] Nov 25 15:34:53 crc kubenswrapper[4806]: I1125 15:34:53.929287 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/63e0c8ca-cbfc-476a-b68a-00b39c2a7a47-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gdntk\" (UID: \"63e0c8ca-cbfc-476a-b68a-00b39c2a7a47\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gdntk" Nov 25 15:34:53 crc kubenswrapper[4806]: I1125 15:34:53.929495 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ld58\" (UniqueName: \"kubernetes.io/projected/63e0c8ca-cbfc-476a-b68a-00b39c2a7a47-kube-api-access-5ld58\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gdntk\" (UID: \"63e0c8ca-cbfc-476a-b68a-00b39c2a7a47\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gdntk" Nov 25 15:34:53 crc kubenswrapper[4806]: I1125 15:34:53.929589 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/63e0c8ca-cbfc-476a-b68a-00b39c2a7a47-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gdntk\" (UID: \"63e0c8ca-cbfc-476a-b68a-00b39c2a7a47\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gdntk" Nov 25 15:34:53 crc kubenswrapper[4806]: I1125 15:34:53.929628 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63e0c8ca-cbfc-476a-b68a-00b39c2a7a47-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gdntk\" (UID: \"63e0c8ca-cbfc-476a-b68a-00b39c2a7a47\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gdntk" Nov 25 15:34:53 crc kubenswrapper[4806]: I1125 15:34:53.929687 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/63e0c8ca-cbfc-476a-b68a-00b39c2a7a47-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gdntk\" (UID: \"63e0c8ca-cbfc-476a-b68a-00b39c2a7a47\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gdntk" Nov 25 15:34:54 crc kubenswrapper[4806]: I1125 15:34:54.031937 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-5ld58\" (UniqueName: \"kubernetes.io/projected/63e0c8ca-cbfc-476a-b68a-00b39c2a7a47-kube-api-access-5ld58\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gdntk\" (UID: \"63e0c8ca-cbfc-476a-b68a-00b39c2a7a47\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gdntk" Nov 25 15:34:54 crc kubenswrapper[4806]: I1125 15:34:54.032055 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/63e0c8ca-cbfc-476a-b68a-00b39c2a7a47-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gdntk\" (UID: \"63e0c8ca-cbfc-476a-b68a-00b39c2a7a47\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gdntk" Nov 25 15:34:54 crc kubenswrapper[4806]: I1125 15:34:54.032084 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63e0c8ca-cbfc-476a-b68a-00b39c2a7a47-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gdntk\" (UID: \"63e0c8ca-cbfc-476a-b68a-00b39c2a7a47\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gdntk" Nov 25 15:34:54 crc kubenswrapper[4806]: I1125 15:34:54.032116 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/63e0c8ca-cbfc-476a-b68a-00b39c2a7a47-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gdntk\" (UID: \"63e0c8ca-cbfc-476a-b68a-00b39c2a7a47\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gdntk" Nov 25 15:34:54 crc kubenswrapper[4806]: I1125 15:34:54.032172 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/63e0c8ca-cbfc-476a-b68a-00b39c2a7a47-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gdntk\" (UID: \"63e0c8ca-cbfc-476a-b68a-00b39c2a7a47\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gdntk" Nov 25 15:34:54 crc kubenswrapper[4806]: I1125 15:34:54.040274 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/63e0c8ca-cbfc-476a-b68a-00b39c2a7a47-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gdntk\" (UID: \"63e0c8ca-cbfc-476a-b68a-00b39c2a7a47\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gdntk" Nov 25 15:34:54 crc kubenswrapper[4806]: I1125 15:34:54.040332 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/63e0c8ca-cbfc-476a-b68a-00b39c2a7a47-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gdntk\" (UID: \"63e0c8ca-cbfc-476a-b68a-00b39c2a7a47\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gdntk" Nov 25 15:34:54 crc kubenswrapper[4806]: I1125 15:34:54.040461 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63e0c8ca-cbfc-476a-b68a-00b39c2a7a47-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gdntk\" (UID: \"63e0c8ca-cbfc-476a-b68a-00b39c2a7a47\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gdntk" Nov 25 15:34:54 crc kubenswrapper[4806]: I1125 15:34:54.040684 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/63e0c8ca-cbfc-476a-b68a-00b39c2a7a47-ssh-key\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-gdntk\" (UID: \"63e0c8ca-cbfc-476a-b68a-00b39c2a7a47\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gdntk" Nov 25 15:34:54 crc kubenswrapper[4806]: I1125 15:34:54.065075 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ld58\" (UniqueName: \"kubernetes.io/projected/63e0c8ca-cbfc-476a-b68a-00b39c2a7a47-kube-api-access-5ld58\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gdntk\" (UID: \"63e0c8ca-cbfc-476a-b68a-00b39c2a7a47\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gdntk" Nov 25 15:34:54 crc kubenswrapper[4806]: I1125 15:34:54.105960 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gdntk" Nov 25 15:34:54 crc kubenswrapper[4806]: I1125 15:34:54.664131 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gdntk"] Nov 25 15:34:54 crc kubenswrapper[4806]: I1125 15:34:54.703850 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gdntk" event={"ID":"63e0c8ca-cbfc-476a-b68a-00b39c2a7a47","Type":"ContainerStarted","Data":"6fc94b7b707bce54638d8234c7beb4f6ab4461a69ced85776aac700abc11c65f"} Nov 25 15:34:55 crc kubenswrapper[4806]: I1125 15:34:55.715402 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gdntk" event={"ID":"63e0c8ca-cbfc-476a-b68a-00b39c2a7a47","Type":"ContainerStarted","Data":"a9d159ea82231c71fff72927fd93e13fcad890c4265aec7045e0e49164dae3cc"} Nov 25 15:34:55 crc kubenswrapper[4806]: I1125 15:34:55.736882 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gdntk" podStartSLOduration=2.225957436 podStartE2EDuration="2.736857153s" podCreationTimestamp="2025-11-25 15:34:53 +0000 UTC" firstStartedPulling="2025-11-25 15:34:54.670189915 +0000 UTC m=+2527.322332326" lastFinishedPulling="2025-11-25 15:34:55.181089632 +0000 UTC m=+2527.833232043" observedRunningTime="2025-11-25 15:34:55.729586182 +0000 UTC m=+2528.381728613" watchObservedRunningTime="2025-11-25 15:34:55.736857153 +0000 UTC m=+2528.388999554" Nov 25 15:34:57 crc kubenswrapper[4806]: I1125 15:34:57.089851 4806 scope.go:117] "RemoveContainer" containerID="20ed65ea27bdbc3843bf7c80ddc4dc5177e737e42cad142718c0a7ddba113d44" Nov 25 15:34:57 crc kubenswrapper[4806]: E1125 15:34:57.090405 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:35:08 crc kubenswrapper[4806]: I1125 15:35:08.100902 4806 scope.go:117] "RemoveContainer" containerID="20ed65ea27bdbc3843bf7c80ddc4dc5177e737e42cad142718c0a7ddba113d44" Nov 25 15:35:08 crc kubenswrapper[4806]: E1125 15:35:08.101724 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:35:20 crc kubenswrapper[4806]: I1125 15:35:20.089163 4806 scope.go:117] "RemoveContainer" containerID="20ed65ea27bdbc3843bf7c80ddc4dc5177e737e42cad142718c0a7ddba113d44" Nov 25 15:35:20 crc kubenswrapper[4806]: E1125 15:35:20.089895 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:35:32 crc kubenswrapper[4806]: I1125 15:35:32.089521 4806 scope.go:117] "RemoveContainer" containerID="20ed65ea27bdbc3843bf7c80ddc4dc5177e737e42cad142718c0a7ddba113d44" Nov 25 15:35:32 crc kubenswrapper[4806]: E1125 15:35:32.090205 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:35:45 crc kubenswrapper[4806]: I1125 15:35:45.090188 4806 scope.go:117] "RemoveContainer" containerID="20ed65ea27bdbc3843bf7c80ddc4dc5177e737e42cad142718c0a7ddba113d44" Nov 25 15:35:45 crc kubenswrapper[4806]: E1125 15:35:45.090843 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:35:57 crc kubenswrapper[4806]: I1125 15:35:57.088999 4806 scope.go:117] "RemoveContainer" containerID="20ed65ea27bdbc3843bf7c80ddc4dc5177e737e42cad142718c0a7ddba113d44" Nov 25 15:35:57 crc kubenswrapper[4806]: E1125 15:35:57.089809 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:36:11 crc kubenswrapper[4806]: I1125 15:36:11.089886 4806 scope.go:117] "RemoveContainer" containerID="20ed65ea27bdbc3843bf7c80ddc4dc5177e737e42cad142718c0a7ddba113d44" Nov 25 15:36:11 crc kubenswrapper[4806]: E1125 15:36:11.090627 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:36:24 crc kubenswrapper[4806]: I1125 15:36:24.090685 4806 
scope.go:117] "RemoveContainer" containerID="20ed65ea27bdbc3843bf7c80ddc4dc5177e737e42cad142718c0a7ddba113d44" Nov 25 15:36:24 crc kubenswrapper[4806]: E1125 15:36:24.091355 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:36:37 crc kubenswrapper[4806]: I1125 15:36:37.089029 4806 scope.go:117] "RemoveContainer" containerID="20ed65ea27bdbc3843bf7c80ddc4dc5177e737e42cad142718c0a7ddba113d44" Nov 25 15:36:37 crc kubenswrapper[4806]: E1125 15:36:37.090551 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:36:48 crc kubenswrapper[4806]: I1125 15:36:48.096143 4806 scope.go:117] "RemoveContainer" containerID="20ed65ea27bdbc3843bf7c80ddc4dc5177e737e42cad142718c0a7ddba113d44" Nov 25 15:36:48 crc kubenswrapper[4806]: E1125 15:36:48.096855 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:37:01 crc kubenswrapper[4806]: I1125 15:37:01.090282 4806 scope.go:117] "RemoveContainer" containerID="20ed65ea27bdbc3843bf7c80ddc4dc5177e737e42cad142718c0a7ddba113d44" Nov 25 15:37:01 crc kubenswrapper[4806]: E1125 15:37:01.091251 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:37:14 crc kubenswrapper[4806]: I1125 15:37:14.089739 4806 scope.go:117] "RemoveContainer" containerID="20ed65ea27bdbc3843bf7c80ddc4dc5177e737e42cad142718c0a7ddba113d44" Nov 25 15:37:14 crc kubenswrapper[4806]: E1125 15:37:14.090446 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:37:27 crc kubenswrapper[4806]: I1125 15:37:27.090231 4806 scope.go:117] "RemoveContainer" containerID="20ed65ea27bdbc3843bf7c80ddc4dc5177e737e42cad142718c0a7ddba113d44" Nov 25 15:37:27 crc kubenswrapper[4806]: E1125 15:37:27.091020 4806 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:37:40 crc kubenswrapper[4806]: I1125 15:37:40.089888 4806 scope.go:117] "RemoveContainer" containerID="20ed65ea27bdbc3843bf7c80ddc4dc5177e737e42cad142718c0a7ddba113d44" Nov 25 15:37:40 crc kubenswrapper[4806]: E1125 15:37:40.091243 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:37:51 crc kubenswrapper[4806]: I1125 15:37:51.089103 4806 scope.go:117] "RemoveContainer" containerID="20ed65ea27bdbc3843bf7c80ddc4dc5177e737e42cad142718c0a7ddba113d44" Nov 25 15:37:51 crc kubenswrapper[4806]: E1125 15:37:51.090013 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:38:02 crc kubenswrapper[4806]: I1125 15:38:02.089424 4806 scope.go:117] "RemoveContainer" containerID="20ed65ea27bdbc3843bf7c80ddc4dc5177e737e42cad142718c0a7ddba113d44" Nov 25 15:38:02 crc kubenswrapper[4806]: E1125 15:38:02.090175 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:38:17 crc kubenswrapper[4806]: I1125 15:38:17.090632 4806 scope.go:117] "RemoveContainer" containerID="20ed65ea27bdbc3843bf7c80ddc4dc5177e737e42cad142718c0a7ddba113d44" Nov 25 15:38:17 crc kubenswrapper[4806]: E1125 15:38:17.091518 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:38:28 crc kubenswrapper[4806]: I1125 15:38:28.935532 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l7fwt"] Nov 25 15:38:28 crc kubenswrapper[4806]: I1125 15:38:28.940023 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l7fwt" Nov 25 15:38:28 crc kubenswrapper[4806]: I1125 15:38:28.958198 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l7fwt"] Nov 25 15:38:29 crc kubenswrapper[4806]: I1125 15:38:29.030276 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7864d60-0f2e-497f-a0b1-3bbf33e0471e-utilities\") pod \"redhat-operators-l7fwt\" (UID: \"d7864d60-0f2e-497f-a0b1-3bbf33e0471e\") " pod="openshift-marketplace/redhat-operators-l7fwt" Nov 25 15:38:29 crc kubenswrapper[4806]: I1125 15:38:29.030385 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7864d60-0f2e-497f-a0b1-3bbf33e0471e-catalog-content\") pod \"redhat-operators-l7fwt\" (UID: \"d7864d60-0f2e-497f-a0b1-3bbf33e0471e\") " pod="openshift-marketplace/redhat-operators-l7fwt" Nov 25 15:38:29 crc kubenswrapper[4806]: I1125 15:38:29.030412 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d27ff\" (UniqueName: \"kubernetes.io/projected/d7864d60-0f2e-497f-a0b1-3bbf33e0471e-kube-api-access-d27ff\") pod \"redhat-operators-l7fwt\" (UID: \"d7864d60-0f2e-497f-a0b1-3bbf33e0471e\") " pod="openshift-marketplace/redhat-operators-l7fwt" Nov 25 15:38:29 crc kubenswrapper[4806]: I1125 15:38:29.089131 4806 scope.go:117] "RemoveContainer" containerID="20ed65ea27bdbc3843bf7c80ddc4dc5177e737e42cad142718c0a7ddba113d44" Nov 25 15:38:29 crc kubenswrapper[4806]: I1125 15:38:29.132908 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7864d60-0f2e-497f-a0b1-3bbf33e0471e-catalog-content\") pod \"redhat-operators-l7fwt\" (UID: \"d7864d60-0f2e-497f-a0b1-3bbf33e0471e\") " pod="openshift-marketplace/redhat-operators-l7fwt" Nov 25 15:38:29 crc kubenswrapper[4806]: I1125 15:38:29.132992 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d27ff\" (UniqueName: \"kubernetes.io/projected/d7864d60-0f2e-497f-a0b1-3bbf33e0471e-kube-api-access-d27ff\") pod \"redhat-operators-l7fwt\" (UID: \"d7864d60-0f2e-497f-a0b1-3bbf33e0471e\") " pod="openshift-marketplace/redhat-operators-l7fwt" Nov 25 15:38:29 crc kubenswrapper[4806]: I1125 15:38:29.133282 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7864d60-0f2e-497f-a0b1-3bbf33e0471e-utilities\") pod \"redhat-operators-l7fwt\" (UID: \"d7864d60-0f2e-497f-a0b1-3bbf33e0471e\") " pod="openshift-marketplace/redhat-operators-l7fwt" Nov 25 15:38:29 crc kubenswrapper[4806]: I1125 15:38:29.134252 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7864d60-0f2e-497f-a0b1-3bbf33e0471e-utilities\") pod \"redhat-operators-l7fwt\" (UID: \"d7864d60-0f2e-497f-a0b1-3bbf33e0471e\") " pod="openshift-marketplace/redhat-operators-l7fwt" Nov 25 15:38:29 crc kubenswrapper[4806]: I1125 15:38:29.135292 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7864d60-0f2e-497f-a0b1-3bbf33e0471e-catalog-content\") pod \"redhat-operators-l7fwt\" (UID: \"d7864d60-0f2e-497f-a0b1-3bbf33e0471e\") " 
pod="openshift-marketplace/redhat-operators-l7fwt" Nov 25 15:38:29 crc kubenswrapper[4806]: I1125 15:38:29.160396 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d27ff\" (UniqueName: \"kubernetes.io/projected/d7864d60-0f2e-497f-a0b1-3bbf33e0471e-kube-api-access-d27ff\") pod \"redhat-operators-l7fwt\" (UID: \"d7864d60-0f2e-497f-a0b1-3bbf33e0471e\") " pod="openshift-marketplace/redhat-operators-l7fwt" Nov 25 15:38:29 crc kubenswrapper[4806]: I1125 15:38:29.265112 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l7fwt" Nov 25 15:38:29 crc kubenswrapper[4806]: I1125 15:38:29.790356 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l7fwt"] Nov 25 15:38:29 crc kubenswrapper[4806]: W1125 15:38:29.793246 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7864d60_0f2e_497f_a0b1_3bbf33e0471e.slice/crio-7c4a4706b8c1fa5c2b1197c4e578d24491d26094c8342859a3e39a1040a319be WatchSource:0}: Error finding container 7c4a4706b8c1fa5c2b1197c4e578d24491d26094c8342859a3e39a1040a319be: Status 404 returned error can't find the container with id 7c4a4706b8c1fa5c2b1197c4e578d24491d26094c8342859a3e39a1040a319be Nov 25 15:38:30 crc kubenswrapper[4806]: I1125 15:38:30.491865 4806 generic.go:334] "Generic (PLEG): container finished" podID="d7864d60-0f2e-497f-a0b1-3bbf33e0471e" containerID="533d3b87573c7cf45638827fa590cd3ea3e1fb9036ca5eb4661b17a1fd207f87" exitCode=0 Nov 25 15:38:30 crc kubenswrapper[4806]: I1125 15:38:30.491912 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7fwt" event={"ID":"d7864d60-0f2e-497f-a0b1-3bbf33e0471e","Type":"ContainerDied","Data":"533d3b87573c7cf45638827fa590cd3ea3e1fb9036ca5eb4661b17a1fd207f87"} Nov 25 15:38:30 crc kubenswrapper[4806]: I1125 15:38:30.492453 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7fwt" event={"ID":"d7864d60-0f2e-497f-a0b1-3bbf33e0471e","Type":"ContainerStarted","Data":"7c4a4706b8c1fa5c2b1197c4e578d24491d26094c8342859a3e39a1040a319be"} Nov 25 15:38:30 crc kubenswrapper[4806]: I1125 15:38:30.494687 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" event={"ID":"39baff20-1e9a-48b1-8872-155c5ad5931d","Type":"ContainerStarted","Data":"8e92935482a5f92e9ebc3fbbdbdc44dc56af2d1072c382ebac551c11833e7734"} Nov 25 15:38:30 crc kubenswrapper[4806]: I1125 15:38:30.495064 4806 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 15:38:32 crc kubenswrapper[4806]: I1125 15:38:32.516741 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7fwt" event={"ID":"d7864d60-0f2e-497f-a0b1-3bbf33e0471e","Type":"ContainerStarted","Data":"4105986578ececa51613ee92c279fd11230da804f611167c27cebd243d899588"} Nov 25 15:38:34 crc kubenswrapper[4806]: I1125 15:38:34.552083 4806 generic.go:334] "Generic (PLEG): container finished" podID="d7864d60-0f2e-497f-a0b1-3bbf33e0471e" containerID="4105986578ececa51613ee92c279fd11230da804f611167c27cebd243d899588" exitCode=0 Nov 25 15:38:34 crc kubenswrapper[4806]: I1125 15:38:34.552149 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7fwt" 
event={"ID":"d7864d60-0f2e-497f-a0b1-3bbf33e0471e","Type":"ContainerDied","Data":"4105986578ececa51613ee92c279fd11230da804f611167c27cebd243d899588"} Nov 25 15:38:35 crc kubenswrapper[4806]: I1125 15:38:35.565462 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7fwt" event={"ID":"d7864d60-0f2e-497f-a0b1-3bbf33e0471e","Type":"ContainerStarted","Data":"aeaca0a3f421e822614863a6a42e5ae66dff8f6e9c8d8a42db2fb0254ee3c326"} Nov 25 15:38:35 crc kubenswrapper[4806]: I1125 15:38:35.586833 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l7fwt" podStartSLOduration=2.939811338 podStartE2EDuration="7.586812284s" podCreationTimestamp="2025-11-25 15:38:28 +0000 UTC" firstStartedPulling="2025-11-25 15:38:30.494860714 +0000 UTC m=+2743.147003125" lastFinishedPulling="2025-11-25 15:38:35.14186166 +0000 UTC m=+2747.794004071" observedRunningTime="2025-11-25 15:38:35.583379294 +0000 UTC m=+2748.235521705" watchObservedRunningTime="2025-11-25 15:38:35.586812284 +0000 UTC m=+2748.238954695" Nov 25 15:38:39 crc kubenswrapper[4806]: I1125 15:38:39.265487 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l7fwt" Nov 25 15:38:39 crc kubenswrapper[4806]: I1125 15:38:39.266047 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l7fwt" Nov 25 15:38:40 crc kubenswrapper[4806]: I1125 15:38:40.328930 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l7fwt" podUID="d7864d60-0f2e-497f-a0b1-3bbf33e0471e" containerName="registry-server" probeResult="failure" output=< Nov 25 15:38:40 crc kubenswrapper[4806]: timeout: failed to connect service ":50051" within 1s Nov 25 15:38:40 crc kubenswrapper[4806]: > Nov 25 15:38:49 crc kubenswrapper[4806]: I1125 15:38:49.329548 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l7fwt" Nov 25 15:38:49 crc kubenswrapper[4806]: I1125 15:38:49.387935 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l7fwt" Nov 25 15:38:49 crc kubenswrapper[4806]: I1125 15:38:49.568662 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l7fwt"] Nov 25 15:38:50 crc kubenswrapper[4806]: I1125 15:38:50.740017 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-l7fwt" podUID="d7864d60-0f2e-497f-a0b1-3bbf33e0471e" containerName="registry-server" containerID="cri-o://aeaca0a3f421e822614863a6a42e5ae66dff8f6e9c8d8a42db2fb0254ee3c326" gracePeriod=2 Nov 25 15:38:51 crc kubenswrapper[4806]: I1125 15:38:51.752019 4806 generic.go:334] "Generic (PLEG): container finished" podID="d7864d60-0f2e-497f-a0b1-3bbf33e0471e" containerID="aeaca0a3f421e822614863a6a42e5ae66dff8f6e9c8d8a42db2fb0254ee3c326" exitCode=0 Nov 25 15:38:51 crc kubenswrapper[4806]: I1125 15:38:51.752565 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7fwt" event={"ID":"d7864d60-0f2e-497f-a0b1-3bbf33e0471e","Type":"ContainerDied","Data":"aeaca0a3f421e822614863a6a42e5ae66dff8f6e9c8d8a42db2fb0254ee3c326"} Nov 25 15:38:51 crc kubenswrapper[4806]: I1125 15:38:51.946030 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l7fwt" Nov 25 15:38:52 crc kubenswrapper[4806]: I1125 15:38:52.060735 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7864d60-0f2e-497f-a0b1-3bbf33e0471e-catalog-content\") pod \"d7864d60-0f2e-497f-a0b1-3bbf33e0471e\" (UID: \"d7864d60-0f2e-497f-a0b1-3bbf33e0471e\") " Nov 25 15:38:52 crc kubenswrapper[4806]: I1125 15:38:52.060795 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7864d60-0f2e-497f-a0b1-3bbf33e0471e-utilities\") pod \"d7864d60-0f2e-497f-a0b1-3bbf33e0471e\" (UID: \"d7864d60-0f2e-497f-a0b1-3bbf33e0471e\") " Nov 25 15:38:52 crc kubenswrapper[4806]: I1125 15:38:52.060978 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d27ff\" (UniqueName: \"kubernetes.io/projected/d7864d60-0f2e-497f-a0b1-3bbf33e0471e-kube-api-access-d27ff\") pod \"d7864d60-0f2e-497f-a0b1-3bbf33e0471e\" (UID: \"d7864d60-0f2e-497f-a0b1-3bbf33e0471e\") " Nov 25 15:38:52 crc kubenswrapper[4806]: I1125 15:38:52.061707 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7864d60-0f2e-497f-a0b1-3bbf33e0471e-utilities" (OuterVolumeSpecName: "utilities") pod "d7864d60-0f2e-497f-a0b1-3bbf33e0471e" (UID: "d7864d60-0f2e-497f-a0b1-3bbf33e0471e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:38:52 crc kubenswrapper[4806]: I1125 15:38:52.066964 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7864d60-0f2e-497f-a0b1-3bbf33e0471e-kube-api-access-d27ff" (OuterVolumeSpecName: "kube-api-access-d27ff") pod "d7864d60-0f2e-497f-a0b1-3bbf33e0471e" (UID: "d7864d60-0f2e-497f-a0b1-3bbf33e0471e"). InnerVolumeSpecName "kube-api-access-d27ff". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:38:52 crc kubenswrapper[4806]: I1125 15:38:52.163192 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7864d60-0f2e-497f-a0b1-3bbf33e0471e-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 15:38:52 crc kubenswrapper[4806]: I1125 15:38:52.163238 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d27ff\" (UniqueName: \"kubernetes.io/projected/d7864d60-0f2e-497f-a0b1-3bbf33e0471e-kube-api-access-d27ff\") on node \"crc\" DevicePath \"\"" Nov 25 15:38:52 crc kubenswrapper[4806]: I1125 15:38:52.164921 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7864d60-0f2e-497f-a0b1-3bbf33e0471e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d7864d60-0f2e-497f-a0b1-3bbf33e0471e" (UID: "d7864d60-0f2e-497f-a0b1-3bbf33e0471e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:38:52 crc kubenswrapper[4806]: I1125 15:38:52.265262 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7864d60-0f2e-497f-a0b1-3bbf33e0471e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 15:38:52 crc kubenswrapper[4806]: I1125 15:38:52.763198 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l7fwt" event={"ID":"d7864d60-0f2e-497f-a0b1-3bbf33e0471e","Type":"ContainerDied","Data":"7c4a4706b8c1fa5c2b1197c4e578d24491d26094c8342859a3e39a1040a319be"} Nov 25 15:38:52 crc kubenswrapper[4806]: I1125 15:38:52.764695 4806 scope.go:117] "RemoveContainer" containerID="aeaca0a3f421e822614863a6a42e5ae66dff8f6e9c8d8a42db2fb0254ee3c326" Nov 25 15:38:52 crc kubenswrapper[4806]: I1125 15:38:52.764620 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l7fwt" Nov 25 15:38:52 crc kubenswrapper[4806]: I1125 15:38:52.787966 4806 scope.go:117] "RemoveContainer" containerID="4105986578ececa51613ee92c279fd11230da804f611167c27cebd243d899588" Nov 25 15:38:52 crc kubenswrapper[4806]: I1125 15:38:52.814699 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l7fwt"] Nov 25 15:38:52 crc kubenswrapper[4806]: I1125 15:38:52.821790 4806 scope.go:117] "RemoveContainer" containerID="533d3b87573c7cf45638827fa590cd3ea3e1fb9036ca5eb4661b17a1fd207f87" Nov 25 15:38:52 crc kubenswrapper[4806]: I1125 15:38:52.829057 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l7fwt"] Nov 25 15:38:54 crc kubenswrapper[4806]: I1125 15:38:54.100072 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7864d60-0f2e-497f-a0b1-3bbf33e0471e" path="/var/lib/kubelet/pods/d7864d60-0f2e-497f-a0b1-3bbf33e0471e/volumes" Nov 25 15:39:16 crc kubenswrapper[4806]: I1125 15:39:16.016481 4806 generic.go:334] "Generic (PLEG): container finished" podID="63e0c8ca-cbfc-476a-b68a-00b39c2a7a47" containerID="a9d159ea82231c71fff72927fd93e13fcad890c4265aec7045e0e49164dae3cc" exitCode=0 Nov 25 15:39:16 crc kubenswrapper[4806]: I1125 15:39:16.017036 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gdntk" event={"ID":"63e0c8ca-cbfc-476a-b68a-00b39c2a7a47","Type":"ContainerDied","Data":"a9d159ea82231c71fff72927fd93e13fcad890c4265aec7045e0e49164dae3cc"} Nov 25 15:39:17 crc kubenswrapper[4806]: I1125 15:39:17.636722 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gdntk" Nov 25 15:39:17 crc kubenswrapper[4806]: I1125 15:39:17.780851 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63e0c8ca-cbfc-476a-b68a-00b39c2a7a47-libvirt-combined-ca-bundle\") pod \"63e0c8ca-cbfc-476a-b68a-00b39c2a7a47\" (UID: \"63e0c8ca-cbfc-476a-b68a-00b39c2a7a47\") " Nov 25 15:39:17 crc kubenswrapper[4806]: I1125 15:39:17.780960 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/63e0c8ca-cbfc-476a-b68a-00b39c2a7a47-ssh-key\") pod \"63e0c8ca-cbfc-476a-b68a-00b39c2a7a47\" (UID: \"63e0c8ca-cbfc-476a-b68a-00b39c2a7a47\") " Nov 25 15:39:17 crc kubenswrapper[4806]: I1125 15:39:17.781019 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/63e0c8ca-cbfc-476a-b68a-00b39c2a7a47-inventory\") pod \"63e0c8ca-cbfc-476a-b68a-00b39c2a7a47\" (UID: \"63e0c8ca-cbfc-476a-b68a-00b39c2a7a47\") " Nov 25 15:39:17 crc kubenswrapper[4806]: I1125 15:39:17.781100 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ld58\" (UniqueName: \"kubernetes.io/projected/63e0c8ca-cbfc-476a-b68a-00b39c2a7a47-kube-api-access-5ld58\") pod \"63e0c8ca-cbfc-476a-b68a-00b39c2a7a47\" (UID: \"63e0c8ca-cbfc-476a-b68a-00b39c2a7a47\") " Nov 25 15:39:17 crc kubenswrapper[4806]: I1125 15:39:17.781237 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/63e0c8ca-cbfc-476a-b68a-00b39c2a7a47-libvirt-secret-0\") pod \"63e0c8ca-cbfc-476a-b68a-00b39c2a7a47\" (UID: \"63e0c8ca-cbfc-476a-b68a-00b39c2a7a47\") " Nov 25 15:39:17 crc kubenswrapper[4806]: I1125 15:39:17.789070 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63e0c8ca-cbfc-476a-b68a-00b39c2a7a47-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "63e0c8ca-cbfc-476a-b68a-00b39c2a7a47" (UID: "63e0c8ca-cbfc-476a-b68a-00b39c2a7a47"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:39:17 crc kubenswrapper[4806]: I1125 15:39:17.792715 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63e0c8ca-cbfc-476a-b68a-00b39c2a7a47-kube-api-access-5ld58" (OuterVolumeSpecName: "kube-api-access-5ld58") pod "63e0c8ca-cbfc-476a-b68a-00b39c2a7a47" (UID: "63e0c8ca-cbfc-476a-b68a-00b39c2a7a47"). InnerVolumeSpecName "kube-api-access-5ld58". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:39:17 crc kubenswrapper[4806]: I1125 15:39:17.810628 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63e0c8ca-cbfc-476a-b68a-00b39c2a7a47-inventory" (OuterVolumeSpecName: "inventory") pod "63e0c8ca-cbfc-476a-b68a-00b39c2a7a47" (UID: "63e0c8ca-cbfc-476a-b68a-00b39c2a7a47"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:39:17 crc kubenswrapper[4806]: I1125 15:39:17.812740 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63e0c8ca-cbfc-476a-b68a-00b39c2a7a47-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "63e0c8ca-cbfc-476a-b68a-00b39c2a7a47" (UID: "63e0c8ca-cbfc-476a-b68a-00b39c2a7a47"). 
InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:39:17 crc kubenswrapper[4806]: I1125 15:39:17.813033 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63e0c8ca-cbfc-476a-b68a-00b39c2a7a47-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "63e0c8ca-cbfc-476a-b68a-00b39c2a7a47" (UID: "63e0c8ca-cbfc-476a-b68a-00b39c2a7a47"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:39:17 crc kubenswrapper[4806]: I1125 15:39:17.884273 4806 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/63e0c8ca-cbfc-476a-b68a-00b39c2a7a47-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:39:17 crc kubenswrapper[4806]: I1125 15:39:17.884342 4806 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63e0c8ca-cbfc-476a-b68a-00b39c2a7a47-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:39:17 crc kubenswrapper[4806]: I1125 15:39:17.884356 4806 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/63e0c8ca-cbfc-476a-b68a-00b39c2a7a47-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 15:39:17 crc kubenswrapper[4806]: I1125 15:39:17.884364 4806 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/63e0c8ca-cbfc-476a-b68a-00b39c2a7a47-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 15:39:17 crc kubenswrapper[4806]: I1125 15:39:17.884374 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ld58\" (UniqueName: \"kubernetes.io/projected/63e0c8ca-cbfc-476a-b68a-00b39c2a7a47-kube-api-access-5ld58\") on node \"crc\" DevicePath \"\"" Nov 25 15:39:18 crc kubenswrapper[4806]: I1125 15:39:18.035216 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gdntk" event={"ID":"63e0c8ca-cbfc-476a-b68a-00b39c2a7a47","Type":"ContainerDied","Data":"6fc94b7b707bce54638d8234c7beb4f6ab4461a69ced85776aac700abc11c65f"} Nov 25 15:39:18 crc kubenswrapper[4806]: I1125 15:39:18.035254 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fc94b7b707bce54638d8234c7beb4f6ab4461a69ced85776aac700abc11c65f" Nov 25 15:39:18 crc kubenswrapper[4806]: I1125 15:39:18.035336 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gdntk" Nov 25 15:39:18 crc kubenswrapper[4806]: I1125 15:39:18.126584 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-qvk7r"] Nov 25 15:39:18 crc kubenswrapper[4806]: E1125 15:39:18.127034 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7864d60-0f2e-497f-a0b1-3bbf33e0471e" containerName="extract-utilities" Nov 25 15:39:18 crc kubenswrapper[4806]: I1125 15:39:18.127052 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7864d60-0f2e-497f-a0b1-3bbf33e0471e" containerName="extract-utilities" Nov 25 15:39:18 crc kubenswrapper[4806]: E1125 15:39:18.127063 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63e0c8ca-cbfc-476a-b68a-00b39c2a7a47" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 25 15:39:18 crc kubenswrapper[4806]: I1125 15:39:18.127072 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="63e0c8ca-cbfc-476a-b68a-00b39c2a7a47" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 25 15:39:18 crc kubenswrapper[4806]: E1125 15:39:18.127103 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7864d60-0f2e-497f-a0b1-3bbf33e0471e" containerName="registry-server" Nov 25 15:39:18 crc kubenswrapper[4806]: I1125 15:39:18.127110 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7864d60-0f2e-497f-a0b1-3bbf33e0471e" containerName="registry-server" Nov 25 15:39:18 crc kubenswrapper[4806]: E1125 15:39:18.127121 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7864d60-0f2e-497f-a0b1-3bbf33e0471e" containerName="extract-content" Nov 25 15:39:18 crc kubenswrapper[4806]: I1125 15:39:18.127127 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7864d60-0f2e-497f-a0b1-3bbf33e0471e" containerName="extract-content" Nov 25 15:39:18 crc kubenswrapper[4806]: I1125 15:39:18.127371 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7864d60-0f2e-497f-a0b1-3bbf33e0471e" containerName="registry-server" Nov 25 15:39:18 crc kubenswrapper[4806]: I1125 15:39:18.127389 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="63e0c8ca-cbfc-476a-b68a-00b39c2a7a47" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 25 15:39:18 crc kubenswrapper[4806]: I1125 15:39:18.128166 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qvk7r" Nov 25 15:39:18 crc kubenswrapper[4806]: I1125 15:39:18.130430 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Nov 25 15:39:18 crc kubenswrapper[4806]: I1125 15:39:18.130768 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r8q8k" Nov 25 15:39:18 crc kubenswrapper[4806]: I1125 15:39:18.130851 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Nov 25 15:39:18 crc kubenswrapper[4806]: I1125 15:39:18.131379 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 15:39:18 crc kubenswrapper[4806]: I1125 15:39:18.132641 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 15:39:18 crc kubenswrapper[4806]: I1125 15:39:18.134128 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Nov 25 15:39:18 crc kubenswrapper[4806]: I1125 15:39:18.138453 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 15:39:18 crc kubenswrapper[4806]: I1125 15:39:18.148051 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-qvk7r"] Nov 25 15:39:18 crc kubenswrapper[4806]: I1125 15:39:18.190340 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dc945807-33cb-4f78-9fed-c65adc25aeef-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qvk7r\" (UID: \"dc945807-33cb-4f78-9fed-c65adc25aeef\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qvk7r" Nov 25 15:39:18 crc kubenswrapper[4806]: I1125 15:39:18.190393 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/dc945807-33cb-4f78-9fed-c65adc25aeef-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qvk7r\" (UID: \"dc945807-33cb-4f78-9fed-c65adc25aeef\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qvk7r" Nov 25 15:39:18 crc kubenswrapper[4806]: I1125 15:39:18.190495 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/dc945807-33cb-4f78-9fed-c65adc25aeef-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qvk7r\" (UID: \"dc945807-33cb-4f78-9fed-c65adc25aeef\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qvk7r" Nov 25 15:39:18 crc kubenswrapper[4806]: I1125 15:39:18.190784 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j2l2\" (UniqueName: \"kubernetes.io/projected/dc945807-33cb-4f78-9fed-c65adc25aeef-kube-api-access-4j2l2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qvk7r\" (UID: \"dc945807-33cb-4f78-9fed-c65adc25aeef\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qvk7r" Nov 25 15:39:18 crc kubenswrapper[4806]: I1125 15:39:18.190930 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc945807-33cb-4f78-9fed-c65adc25aeef-inventory\") 
pod \"nova-edpm-deployment-openstack-edpm-ipam-qvk7r\" (UID: \"dc945807-33cb-4f78-9fed-c65adc25aeef\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qvk7r" Nov 25 15:39:18 crc kubenswrapper[4806]: I1125 15:39:18.191086 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/dc945807-33cb-4f78-9fed-c65adc25aeef-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qvk7r\" (UID: \"dc945807-33cb-4f78-9fed-c65adc25aeef\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qvk7r" Nov 25 15:39:18 crc kubenswrapper[4806]: I1125 15:39:18.191142 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/dc945807-33cb-4f78-9fed-c65adc25aeef-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qvk7r\" (UID: \"dc945807-33cb-4f78-9fed-c65adc25aeef\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qvk7r" Nov 25 15:39:18 crc kubenswrapper[4806]: I1125 15:39:18.191173 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc945807-33cb-4f78-9fed-c65adc25aeef-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qvk7r\" (UID: \"dc945807-33cb-4f78-9fed-c65adc25aeef\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qvk7r" Nov 25 15:39:18 crc kubenswrapper[4806]: I1125 15:39:18.191219 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/dc945807-33cb-4f78-9fed-c65adc25aeef-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qvk7r\" (UID: \"dc945807-33cb-4f78-9fed-c65adc25aeef\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qvk7r" Nov 25 15:39:18 crc kubenswrapper[4806]: I1125 15:39:18.293125 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dc945807-33cb-4f78-9fed-c65adc25aeef-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qvk7r\" (UID: \"dc945807-33cb-4f78-9fed-c65adc25aeef\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qvk7r" Nov 25 15:39:18 crc kubenswrapper[4806]: I1125 15:39:18.293169 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/dc945807-33cb-4f78-9fed-c65adc25aeef-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qvk7r\" (UID: \"dc945807-33cb-4f78-9fed-c65adc25aeef\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qvk7r" Nov 25 15:39:18 crc kubenswrapper[4806]: I1125 15:39:18.293272 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/dc945807-33cb-4f78-9fed-c65adc25aeef-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qvk7r\" (UID: \"dc945807-33cb-4f78-9fed-c65adc25aeef\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qvk7r" Nov 25 15:39:18 crc kubenswrapper[4806]: I1125 15:39:18.293374 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4j2l2\" (UniqueName: 
\"kubernetes.io/projected/dc945807-33cb-4f78-9fed-c65adc25aeef-kube-api-access-4j2l2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qvk7r\" (UID: \"dc945807-33cb-4f78-9fed-c65adc25aeef\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qvk7r" Nov 25 15:39:18 crc kubenswrapper[4806]: I1125 15:39:18.293413 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc945807-33cb-4f78-9fed-c65adc25aeef-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qvk7r\" (UID: \"dc945807-33cb-4f78-9fed-c65adc25aeef\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qvk7r" Nov 25 15:39:18 crc kubenswrapper[4806]: I1125 15:39:18.293466 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/dc945807-33cb-4f78-9fed-c65adc25aeef-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qvk7r\" (UID: \"dc945807-33cb-4f78-9fed-c65adc25aeef\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qvk7r" Nov 25 15:39:18 crc kubenswrapper[4806]: I1125 15:39:18.293495 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/dc945807-33cb-4f78-9fed-c65adc25aeef-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qvk7r\" (UID: \"dc945807-33cb-4f78-9fed-c65adc25aeef\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qvk7r" Nov 25 15:39:18 crc kubenswrapper[4806]: I1125 15:39:18.293521 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc945807-33cb-4f78-9fed-c65adc25aeef-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qvk7r\" (UID: \"dc945807-33cb-4f78-9fed-c65adc25aeef\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qvk7r" Nov 25 15:39:18 crc kubenswrapper[4806]: I1125 15:39:18.293549 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/dc945807-33cb-4f78-9fed-c65adc25aeef-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qvk7r\" (UID: \"dc945807-33cb-4f78-9fed-c65adc25aeef\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qvk7r" Nov 25 15:39:18 crc kubenswrapper[4806]: I1125 15:39:18.294137 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/dc945807-33cb-4f78-9fed-c65adc25aeef-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qvk7r\" (UID: \"dc945807-33cb-4f78-9fed-c65adc25aeef\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qvk7r" Nov 25 15:39:18 crc kubenswrapper[4806]: I1125 15:39:18.297307 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/dc945807-33cb-4f78-9fed-c65adc25aeef-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qvk7r\" (UID: \"dc945807-33cb-4f78-9fed-c65adc25aeef\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qvk7r" Nov 25 15:39:18 crc kubenswrapper[4806]: I1125 15:39:18.297878 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc945807-33cb-4f78-9fed-c65adc25aeef-inventory\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-qvk7r\" (UID: \"dc945807-33cb-4f78-9fed-c65adc25aeef\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qvk7r" Nov 25 15:39:18 crc kubenswrapper[4806]: I1125 15:39:18.298115 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/dc945807-33cb-4f78-9fed-c65adc25aeef-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qvk7r\" (UID: \"dc945807-33cb-4f78-9fed-c65adc25aeef\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qvk7r" Nov 25 15:39:18 crc kubenswrapper[4806]: I1125 15:39:18.298617 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/dc945807-33cb-4f78-9fed-c65adc25aeef-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qvk7r\" (UID: \"dc945807-33cb-4f78-9fed-c65adc25aeef\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qvk7r" Nov 25 15:39:18 crc kubenswrapper[4806]: I1125 15:39:18.298802 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dc945807-33cb-4f78-9fed-c65adc25aeef-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qvk7r\" (UID: \"dc945807-33cb-4f78-9fed-c65adc25aeef\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qvk7r" Nov 25 15:39:18 crc kubenswrapper[4806]: I1125 15:39:18.298957 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/dc945807-33cb-4f78-9fed-c65adc25aeef-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qvk7r\" (UID: \"dc945807-33cb-4f78-9fed-c65adc25aeef\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qvk7r" Nov 25 15:39:18 crc kubenswrapper[4806]: I1125 15:39:18.304022 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc945807-33cb-4f78-9fed-c65adc25aeef-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qvk7r\" (UID: \"dc945807-33cb-4f78-9fed-c65adc25aeef\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qvk7r" Nov 25 15:39:18 crc kubenswrapper[4806]: I1125 15:39:18.310580 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j2l2\" (UniqueName: \"kubernetes.io/projected/dc945807-33cb-4f78-9fed-c65adc25aeef-kube-api-access-4j2l2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-qvk7r\" (UID: \"dc945807-33cb-4f78-9fed-c65adc25aeef\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qvk7r" Nov 25 15:39:18 crc kubenswrapper[4806]: I1125 15:39:18.445553 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qvk7r" Nov 25 15:39:19 crc kubenswrapper[4806]: I1125 15:39:19.095336 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-qvk7r"] Nov 25 15:39:20 crc kubenswrapper[4806]: I1125 15:39:20.052299 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qvk7r" event={"ID":"dc945807-33cb-4f78-9fed-c65adc25aeef","Type":"ContainerStarted","Data":"ee2336e1e855224f491c34e0a3b53fcec5cc7e62be285c607bd0630272ab7493"} Nov 25 15:39:21 crc kubenswrapper[4806]: I1125 15:39:21.064541 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qvk7r" event={"ID":"dc945807-33cb-4f78-9fed-c65adc25aeef","Type":"ContainerStarted","Data":"36867b1046e28d098b3eb1b95304ba190d1e143e58d8061ed61427d430114761"} Nov 25 15:39:21 crc kubenswrapper[4806]: I1125 15:39:21.083914 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qvk7r" podStartSLOduration=1.865257266 podStartE2EDuration="3.083885225s" podCreationTimestamp="2025-11-25 15:39:18 +0000 UTC" firstStartedPulling="2025-11-25 15:39:19.071246571 +0000 UTC m=+2791.723388982" lastFinishedPulling="2025-11-25 15:39:20.28987453 +0000 UTC m=+2792.942016941" observedRunningTime="2025-11-25 15:39:21.079776226 +0000 UTC m=+2793.731918677" watchObservedRunningTime="2025-11-25 15:39:21.083885225 +0000 UTC m=+2793.736027646" Nov 25 15:40:48 crc kubenswrapper[4806]: I1125 15:40:48.078917 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w2g9c"] Nov 25 15:40:48 crc kubenswrapper[4806]: I1125 15:40:48.085386 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w2g9c" Nov 25 15:40:48 crc kubenswrapper[4806]: I1125 15:40:48.148376 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w2g9c"] Nov 25 15:40:48 crc kubenswrapper[4806]: I1125 15:40:48.178052 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0090cf81-c728-41b1-ac33-0b7e871ba582-catalog-content\") pod \"certified-operators-w2g9c\" (UID: \"0090cf81-c728-41b1-ac33-0b7e871ba582\") " pod="openshift-marketplace/certified-operators-w2g9c" Nov 25 15:40:48 crc kubenswrapper[4806]: I1125 15:40:48.178241 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6s77\" (UniqueName: \"kubernetes.io/projected/0090cf81-c728-41b1-ac33-0b7e871ba582-kube-api-access-p6s77\") pod \"certified-operators-w2g9c\" (UID: \"0090cf81-c728-41b1-ac33-0b7e871ba582\") " pod="openshift-marketplace/certified-operators-w2g9c" Nov 25 15:40:48 crc kubenswrapper[4806]: I1125 15:40:48.178339 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0090cf81-c728-41b1-ac33-0b7e871ba582-utilities\") pod \"certified-operators-w2g9c\" (UID: \"0090cf81-c728-41b1-ac33-0b7e871ba582\") " pod="openshift-marketplace/certified-operators-w2g9c" Nov 25 15:40:48 crc kubenswrapper[4806]: I1125 15:40:48.280055 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0090cf81-c728-41b1-ac33-0b7e871ba582-utilities\") pod \"certified-operators-w2g9c\" (UID: \"0090cf81-c728-41b1-ac33-0b7e871ba582\") " pod="openshift-marketplace/certified-operators-w2g9c" Nov 25 15:40:48 crc kubenswrapper[4806]: I1125 15:40:48.280192 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0090cf81-c728-41b1-ac33-0b7e871ba582-catalog-content\") pod \"certified-operators-w2g9c\" (UID: \"0090cf81-c728-41b1-ac33-0b7e871ba582\") " pod="openshift-marketplace/certified-operators-w2g9c" Nov 25 15:40:48 crc kubenswrapper[4806]: I1125 15:40:48.280271 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6s77\" (UniqueName: \"kubernetes.io/projected/0090cf81-c728-41b1-ac33-0b7e871ba582-kube-api-access-p6s77\") pod \"certified-operators-w2g9c\" (UID: \"0090cf81-c728-41b1-ac33-0b7e871ba582\") " pod="openshift-marketplace/certified-operators-w2g9c" Nov 25 15:40:48 crc kubenswrapper[4806]: I1125 15:40:48.280620 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0090cf81-c728-41b1-ac33-0b7e871ba582-catalog-content\") pod \"certified-operators-w2g9c\" (UID: \"0090cf81-c728-41b1-ac33-0b7e871ba582\") " pod="openshift-marketplace/certified-operators-w2g9c" Nov 25 15:40:48 crc kubenswrapper[4806]: I1125 15:40:48.280667 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0090cf81-c728-41b1-ac33-0b7e871ba582-utilities\") pod \"certified-operators-w2g9c\" (UID: \"0090cf81-c728-41b1-ac33-0b7e871ba582\") " pod="openshift-marketplace/certified-operators-w2g9c" Nov 25 15:40:48 crc kubenswrapper[4806]: I1125 15:40:48.305215 4806 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-p6s77\" (UniqueName: \"kubernetes.io/projected/0090cf81-c728-41b1-ac33-0b7e871ba582-kube-api-access-p6s77\") pod \"certified-operators-w2g9c\" (UID: \"0090cf81-c728-41b1-ac33-0b7e871ba582\") " pod="openshift-marketplace/certified-operators-w2g9c" Nov 25 15:40:48 crc kubenswrapper[4806]: I1125 15:40:48.439331 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w2g9c" Nov 25 15:40:48 crc kubenswrapper[4806]: I1125 15:40:48.935111 4806 patch_prober.go:28] interesting pod/machine-config-daemon-kclf8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:40:48 crc kubenswrapper[4806]: I1125 15:40:48.935614 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:40:48 crc kubenswrapper[4806]: I1125 15:40:48.996810 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w2g9c"] Nov 25 15:40:49 crc kubenswrapper[4806]: I1125 15:40:49.087160 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w2g9c" event={"ID":"0090cf81-c728-41b1-ac33-0b7e871ba582","Type":"ContainerStarted","Data":"092b48008f9953a2becab1812431ceaa74f638bf66249a5811820dd0db110574"} Nov 25 15:40:50 crc kubenswrapper[4806]: I1125 15:40:50.101624 4806 generic.go:334] "Generic (PLEG): container finished" podID="0090cf81-c728-41b1-ac33-0b7e871ba582" containerID="c559d88af4250bcfb89540d412482adb8f3cfe75c8c6493190546a783ddd8c7e" exitCode=0 Nov 25 15:40:50 crc kubenswrapper[4806]: I1125 15:40:50.104733 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w2g9c" event={"ID":"0090cf81-c728-41b1-ac33-0b7e871ba582","Type":"ContainerDied","Data":"c559d88af4250bcfb89540d412482adb8f3cfe75c8c6493190546a783ddd8c7e"} Nov 25 15:40:51 crc kubenswrapper[4806]: I1125 15:40:51.111692 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w2g9c" event={"ID":"0090cf81-c728-41b1-ac33-0b7e871ba582","Type":"ContainerStarted","Data":"3a635965d91ad427c6a1d82526c0cff88eeda5d20b5cd25ed41fcbb203b9747b"} Nov 25 15:40:53 crc kubenswrapper[4806]: I1125 15:40:53.160820 4806 generic.go:334] "Generic (PLEG): container finished" podID="0090cf81-c728-41b1-ac33-0b7e871ba582" containerID="3a635965d91ad427c6a1d82526c0cff88eeda5d20b5cd25ed41fcbb203b9747b" exitCode=0 Nov 25 15:40:53 crc kubenswrapper[4806]: I1125 15:40:53.160973 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w2g9c" event={"ID":"0090cf81-c728-41b1-ac33-0b7e871ba582","Type":"ContainerDied","Data":"3a635965d91ad427c6a1d82526c0cff88eeda5d20b5cd25ed41fcbb203b9747b"} Nov 25 15:40:55 crc kubenswrapper[4806]: I1125 15:40:55.186093 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w2g9c" event={"ID":"0090cf81-c728-41b1-ac33-0b7e871ba582","Type":"ContainerStarted","Data":"45557a2e92d8ced199685bc1bbd50023bca8bbb1a50f738897c3ff8810d6f8dd"} Nov 25 
15:40:55 crc kubenswrapper[4806]: I1125 15:40:55.210151 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w2g9c" podStartSLOduration=3.102886024 podStartE2EDuration="7.210130023s" podCreationTimestamp="2025-11-25 15:40:48 +0000 UTC" firstStartedPulling="2025-11-25 15:40:50.103713104 +0000 UTC m=+2882.755855525" lastFinishedPulling="2025-11-25 15:40:54.210957103 +0000 UTC m=+2886.863099524" observedRunningTime="2025-11-25 15:40:55.201282266 +0000 UTC m=+2887.853424697" watchObservedRunningTime="2025-11-25 15:40:55.210130023 +0000 UTC m=+2887.862272434" Nov 25 15:40:57 crc kubenswrapper[4806]: I1125 15:40:57.001987 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tfx9b"] Nov 25 15:40:57 crc kubenswrapper[4806]: I1125 15:40:57.007346 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tfx9b" Nov 25 15:40:57 crc kubenswrapper[4806]: I1125 15:40:57.017122 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tfx9b"] Nov 25 15:40:57 crc kubenswrapper[4806]: I1125 15:40:57.187256 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9z2d\" (UniqueName: \"kubernetes.io/projected/925c2715-cba4-479e-96d2-3de09a9bd1c9-kube-api-access-p9z2d\") pod \"redhat-marketplace-tfx9b\" (UID: \"925c2715-cba4-479e-96d2-3de09a9bd1c9\") " pod="openshift-marketplace/redhat-marketplace-tfx9b" Nov 25 15:40:57 crc kubenswrapper[4806]: I1125 15:40:57.187698 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/925c2715-cba4-479e-96d2-3de09a9bd1c9-catalog-content\") pod \"redhat-marketplace-tfx9b\" (UID: \"925c2715-cba4-479e-96d2-3de09a9bd1c9\") " pod="openshift-marketplace/redhat-marketplace-tfx9b" Nov 25 15:40:57 crc kubenswrapper[4806]: I1125 15:40:57.187834 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/925c2715-cba4-479e-96d2-3de09a9bd1c9-utilities\") pod \"redhat-marketplace-tfx9b\" (UID: \"925c2715-cba4-479e-96d2-3de09a9bd1c9\") " pod="openshift-marketplace/redhat-marketplace-tfx9b" Nov 25 15:40:57 crc kubenswrapper[4806]: I1125 15:40:57.289859 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9z2d\" (UniqueName: \"kubernetes.io/projected/925c2715-cba4-479e-96d2-3de09a9bd1c9-kube-api-access-p9z2d\") pod \"redhat-marketplace-tfx9b\" (UID: \"925c2715-cba4-479e-96d2-3de09a9bd1c9\") " pod="openshift-marketplace/redhat-marketplace-tfx9b" Nov 25 15:40:57 crc kubenswrapper[4806]: I1125 15:40:57.291150 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/925c2715-cba4-479e-96d2-3de09a9bd1c9-catalog-content\") pod \"redhat-marketplace-tfx9b\" (UID: \"925c2715-cba4-479e-96d2-3de09a9bd1c9\") " pod="openshift-marketplace/redhat-marketplace-tfx9b" Nov 25 15:40:57 crc kubenswrapper[4806]: I1125 15:40:57.291249 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/925c2715-cba4-479e-96d2-3de09a9bd1c9-utilities\") pod \"redhat-marketplace-tfx9b\" (UID: \"925c2715-cba4-479e-96d2-3de09a9bd1c9\") " 
pod="openshift-marketplace/redhat-marketplace-tfx9b" Nov 25 15:40:57 crc kubenswrapper[4806]: I1125 15:40:57.291816 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/925c2715-cba4-479e-96d2-3de09a9bd1c9-catalog-content\") pod \"redhat-marketplace-tfx9b\" (UID: \"925c2715-cba4-479e-96d2-3de09a9bd1c9\") " pod="openshift-marketplace/redhat-marketplace-tfx9b" Nov 25 15:40:57 crc kubenswrapper[4806]: I1125 15:40:57.291867 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/925c2715-cba4-479e-96d2-3de09a9bd1c9-utilities\") pod \"redhat-marketplace-tfx9b\" (UID: \"925c2715-cba4-479e-96d2-3de09a9bd1c9\") " pod="openshift-marketplace/redhat-marketplace-tfx9b" Nov 25 15:40:57 crc kubenswrapper[4806]: I1125 15:40:57.314940 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9z2d\" (UniqueName: \"kubernetes.io/projected/925c2715-cba4-479e-96d2-3de09a9bd1c9-kube-api-access-p9z2d\") pod \"redhat-marketplace-tfx9b\" (UID: \"925c2715-cba4-479e-96d2-3de09a9bd1c9\") " pod="openshift-marketplace/redhat-marketplace-tfx9b" Nov 25 15:40:57 crc kubenswrapper[4806]: I1125 15:40:57.341806 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tfx9b" Nov 25 15:40:57 crc kubenswrapper[4806]: I1125 15:40:57.834576 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tfx9b"] Nov 25 15:40:57 crc kubenswrapper[4806]: W1125 15:40:57.841611 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod925c2715_cba4_479e_96d2_3de09a9bd1c9.slice/crio-7e8423e9cf599e0c92c56f6707bef114f455f30e09d571f6216334ad1b66a7db WatchSource:0}: Error finding container 7e8423e9cf599e0c92c56f6707bef114f455f30e09d571f6216334ad1b66a7db: Status 404 returned error can't find the container with id 7e8423e9cf599e0c92c56f6707bef114f455f30e09d571f6216334ad1b66a7db Nov 25 15:40:58 crc kubenswrapper[4806]: I1125 15:40:58.213101 4806 generic.go:334] "Generic (PLEG): container finished" podID="925c2715-cba4-479e-96d2-3de09a9bd1c9" containerID="e2797259f490ce174d3afe615d167cbace0d920cb536ef029ad6aba37e5cdbf0" exitCode=0 Nov 25 15:40:58 crc kubenswrapper[4806]: I1125 15:40:58.213248 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tfx9b" event={"ID":"925c2715-cba4-479e-96d2-3de09a9bd1c9","Type":"ContainerDied","Data":"e2797259f490ce174d3afe615d167cbace0d920cb536ef029ad6aba37e5cdbf0"} Nov 25 15:40:58 crc kubenswrapper[4806]: I1125 15:40:58.213415 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tfx9b" event={"ID":"925c2715-cba4-479e-96d2-3de09a9bd1c9","Type":"ContainerStarted","Data":"7e8423e9cf599e0c92c56f6707bef114f455f30e09d571f6216334ad1b66a7db"} Nov 25 15:40:58 crc kubenswrapper[4806]: I1125 15:40:58.439791 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w2g9c" Nov 25 15:40:58 crc kubenswrapper[4806]: I1125 15:40:58.439845 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w2g9c" Nov 25 15:40:58 crc kubenswrapper[4806]: I1125 15:40:58.502881 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-w2g9c" Nov 25 15:40:59 crc kubenswrapper[4806]: I1125 15:40:59.291669 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w2g9c" Nov 25 15:41:00 crc kubenswrapper[4806]: I1125 15:41:00.238543 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tfx9b" event={"ID":"925c2715-cba4-479e-96d2-3de09a9bd1c9","Type":"ContainerStarted","Data":"0f72f0e2a62fdc5bafed85bbb3b4dd8deeb16d2768a0820c2eb65ef443426347"} Nov 25 15:41:00 crc kubenswrapper[4806]: I1125 15:41:00.757108 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w2g9c"] Nov 25 15:41:01 crc kubenswrapper[4806]: I1125 15:41:01.249925 4806 generic.go:334] "Generic (PLEG): container finished" podID="925c2715-cba4-479e-96d2-3de09a9bd1c9" containerID="0f72f0e2a62fdc5bafed85bbb3b4dd8deeb16d2768a0820c2eb65ef443426347" exitCode=0 Nov 25 15:41:01 crc kubenswrapper[4806]: I1125 15:41:01.250033 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tfx9b" event={"ID":"925c2715-cba4-479e-96d2-3de09a9bd1c9","Type":"ContainerDied","Data":"0f72f0e2a62fdc5bafed85bbb3b4dd8deeb16d2768a0820c2eb65ef443426347"} Nov 25 15:41:01 crc kubenswrapper[4806]: I1125 15:41:01.250125 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w2g9c" podUID="0090cf81-c728-41b1-ac33-0b7e871ba582" containerName="registry-server" containerID="cri-o://45557a2e92d8ced199685bc1bbd50023bca8bbb1a50f738897c3ff8810d6f8dd" gracePeriod=2 Nov 25 15:41:01 crc kubenswrapper[4806]: I1125 15:41:01.884738 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w2g9c" Nov 25 15:41:01 crc kubenswrapper[4806]: I1125 15:41:01.913171 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6s77\" (UniqueName: \"kubernetes.io/projected/0090cf81-c728-41b1-ac33-0b7e871ba582-kube-api-access-p6s77\") pod \"0090cf81-c728-41b1-ac33-0b7e871ba582\" (UID: \"0090cf81-c728-41b1-ac33-0b7e871ba582\") " Nov 25 15:41:01 crc kubenswrapper[4806]: I1125 15:41:01.913224 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0090cf81-c728-41b1-ac33-0b7e871ba582-utilities\") pod \"0090cf81-c728-41b1-ac33-0b7e871ba582\" (UID: \"0090cf81-c728-41b1-ac33-0b7e871ba582\") " Nov 25 15:41:01 crc kubenswrapper[4806]: I1125 15:41:01.913245 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0090cf81-c728-41b1-ac33-0b7e871ba582-catalog-content\") pod \"0090cf81-c728-41b1-ac33-0b7e871ba582\" (UID: \"0090cf81-c728-41b1-ac33-0b7e871ba582\") " Nov 25 15:41:01 crc kubenswrapper[4806]: I1125 15:41:01.914763 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0090cf81-c728-41b1-ac33-0b7e871ba582-utilities" (OuterVolumeSpecName: "utilities") pod "0090cf81-c728-41b1-ac33-0b7e871ba582" (UID: "0090cf81-c728-41b1-ac33-0b7e871ba582"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:41:01 crc kubenswrapper[4806]: I1125 15:41:01.919757 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0090cf81-c728-41b1-ac33-0b7e871ba582-kube-api-access-p6s77" (OuterVolumeSpecName: "kube-api-access-p6s77") pod "0090cf81-c728-41b1-ac33-0b7e871ba582" (UID: "0090cf81-c728-41b1-ac33-0b7e871ba582"). InnerVolumeSpecName "kube-api-access-p6s77". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:41:01 crc kubenswrapper[4806]: I1125 15:41:01.976751 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0090cf81-c728-41b1-ac33-0b7e871ba582-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0090cf81-c728-41b1-ac33-0b7e871ba582" (UID: "0090cf81-c728-41b1-ac33-0b7e871ba582"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:41:02 crc kubenswrapper[4806]: I1125 15:41:02.015958 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6s77\" (UniqueName: \"kubernetes.io/projected/0090cf81-c728-41b1-ac33-0b7e871ba582-kube-api-access-p6s77\") on node \"crc\" DevicePath \"\"" Nov 25 15:41:02 crc kubenswrapper[4806]: I1125 15:41:02.015990 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0090cf81-c728-41b1-ac33-0b7e871ba582-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 15:41:02 crc kubenswrapper[4806]: I1125 15:41:02.016000 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0090cf81-c728-41b1-ac33-0b7e871ba582-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 15:41:02 crc kubenswrapper[4806]: I1125 15:41:02.260959 4806 generic.go:334] "Generic (PLEG): container finished" podID="0090cf81-c728-41b1-ac33-0b7e871ba582" containerID="45557a2e92d8ced199685bc1bbd50023bca8bbb1a50f738897c3ff8810d6f8dd" exitCode=0 Nov 25 15:41:02 crc kubenswrapper[4806]: I1125 15:41:02.261022 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w2g9c" event={"ID":"0090cf81-c728-41b1-ac33-0b7e871ba582","Type":"ContainerDied","Data":"45557a2e92d8ced199685bc1bbd50023bca8bbb1a50f738897c3ff8810d6f8dd"} Nov 25 15:41:02 crc kubenswrapper[4806]: I1125 15:41:02.261050 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w2g9c" event={"ID":"0090cf81-c728-41b1-ac33-0b7e871ba582","Type":"ContainerDied","Data":"092b48008f9953a2becab1812431ceaa74f638bf66249a5811820dd0db110574"} Nov 25 15:41:02 crc kubenswrapper[4806]: I1125 15:41:02.261056 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w2g9c" Nov 25 15:41:02 crc kubenswrapper[4806]: I1125 15:41:02.261069 4806 scope.go:117] "RemoveContainer" containerID="45557a2e92d8ced199685bc1bbd50023bca8bbb1a50f738897c3ff8810d6f8dd" Nov 25 15:41:02 crc kubenswrapper[4806]: I1125 15:41:02.264108 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tfx9b" event={"ID":"925c2715-cba4-479e-96d2-3de09a9bd1c9","Type":"ContainerStarted","Data":"595ea62232cb6c16006f118cfa326183e0eb09241793ee3a784d71de11e1a2a1"} Nov 25 15:41:02 crc kubenswrapper[4806]: I1125 15:41:02.281752 4806 scope.go:117] "RemoveContainer" containerID="3a635965d91ad427c6a1d82526c0cff88eeda5d20b5cd25ed41fcbb203b9747b" Nov 25 15:41:02 crc kubenswrapper[4806]: I1125 15:41:02.286503 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w2g9c"] Nov 25 15:41:02 crc kubenswrapper[4806]: I1125 15:41:02.294403 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w2g9c"] Nov 25 15:41:02 crc kubenswrapper[4806]: I1125 15:41:02.300572 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tfx9b" podStartSLOduration=2.832829858 podStartE2EDuration="6.300545356s" podCreationTimestamp="2025-11-25 15:40:56 +0000 UTC" firstStartedPulling="2025-11-25 15:40:58.215752808 +0000 UTC m=+2890.867895219" lastFinishedPulling="2025-11-25 15:41:01.683468306 +0000 UTC m=+2894.335610717" observedRunningTime="2025-11-25 15:41:02.300052402 +0000 UTC m=+2894.952194813" watchObservedRunningTime="2025-11-25 15:41:02.300545356 +0000 UTC m=+2894.952687777" Nov 25 15:41:02 crc kubenswrapper[4806]: I1125 15:41:02.312730 4806 scope.go:117] "RemoveContainer" containerID="c559d88af4250bcfb89540d412482adb8f3cfe75c8c6493190546a783ddd8c7e" Nov 25 15:41:02 crc kubenswrapper[4806]: I1125 15:41:02.366907 4806 scope.go:117] "RemoveContainer" containerID="45557a2e92d8ced199685bc1bbd50023bca8bbb1a50f738897c3ff8810d6f8dd" Nov 25 15:41:02 crc kubenswrapper[4806]: E1125 15:41:02.367466 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45557a2e92d8ced199685bc1bbd50023bca8bbb1a50f738897c3ff8810d6f8dd\": container with ID starting with 45557a2e92d8ced199685bc1bbd50023bca8bbb1a50f738897c3ff8810d6f8dd not found: ID does not exist" containerID="45557a2e92d8ced199685bc1bbd50023bca8bbb1a50f738897c3ff8810d6f8dd" Nov 25 15:41:02 crc kubenswrapper[4806]: I1125 15:41:02.367534 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45557a2e92d8ced199685bc1bbd50023bca8bbb1a50f738897c3ff8810d6f8dd"} err="failed to get container status \"45557a2e92d8ced199685bc1bbd50023bca8bbb1a50f738897c3ff8810d6f8dd\": rpc error: code = NotFound desc = could not find container \"45557a2e92d8ced199685bc1bbd50023bca8bbb1a50f738897c3ff8810d6f8dd\": container with ID starting with 45557a2e92d8ced199685bc1bbd50023bca8bbb1a50f738897c3ff8810d6f8dd not found: ID does not exist" Nov 25 15:41:02 crc kubenswrapper[4806]: I1125 15:41:02.367569 4806 scope.go:117] "RemoveContainer" containerID="3a635965d91ad427c6a1d82526c0cff88eeda5d20b5cd25ed41fcbb203b9747b" Nov 25 15:41:02 crc kubenswrapper[4806]: E1125 15:41:02.367963 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3a635965d91ad427c6a1d82526c0cff88eeda5d20b5cd25ed41fcbb203b9747b\": container with ID starting with 3a635965d91ad427c6a1d82526c0cff88eeda5d20b5cd25ed41fcbb203b9747b not found: ID does not exist" containerID="3a635965d91ad427c6a1d82526c0cff88eeda5d20b5cd25ed41fcbb203b9747b" Nov 25 15:41:02 crc kubenswrapper[4806]: I1125 15:41:02.367987 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a635965d91ad427c6a1d82526c0cff88eeda5d20b5cd25ed41fcbb203b9747b"} err="failed to get container status \"3a635965d91ad427c6a1d82526c0cff88eeda5d20b5cd25ed41fcbb203b9747b\": rpc error: code = NotFound desc = could not find container \"3a635965d91ad427c6a1d82526c0cff88eeda5d20b5cd25ed41fcbb203b9747b\": container with ID starting with 3a635965d91ad427c6a1d82526c0cff88eeda5d20b5cd25ed41fcbb203b9747b not found: ID does not exist" Nov 25 15:41:02 crc kubenswrapper[4806]: I1125 15:41:02.368001 4806 scope.go:117] "RemoveContainer" containerID="c559d88af4250bcfb89540d412482adb8f3cfe75c8c6493190546a783ddd8c7e" Nov 25 15:41:02 crc kubenswrapper[4806]: E1125 15:41:02.368259 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c559d88af4250bcfb89540d412482adb8f3cfe75c8c6493190546a783ddd8c7e\": container with ID starting with c559d88af4250bcfb89540d412482adb8f3cfe75c8c6493190546a783ddd8c7e not found: ID does not exist" containerID="c559d88af4250bcfb89540d412482adb8f3cfe75c8c6493190546a783ddd8c7e" Nov 25 15:41:02 crc kubenswrapper[4806]: I1125 15:41:02.368281 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c559d88af4250bcfb89540d412482adb8f3cfe75c8c6493190546a783ddd8c7e"} err="failed to get container status \"c559d88af4250bcfb89540d412482adb8f3cfe75c8c6493190546a783ddd8c7e\": rpc error: code = NotFound desc = could not find container \"c559d88af4250bcfb89540d412482adb8f3cfe75c8c6493190546a783ddd8c7e\": container with ID starting with c559d88af4250bcfb89540d412482adb8f3cfe75c8c6493190546a783ddd8c7e not found: ID does not exist" Nov 25 15:41:04 crc kubenswrapper[4806]: I1125 15:41:04.101443 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0090cf81-c728-41b1-ac33-0b7e871ba582" path="/var/lib/kubelet/pods/0090cf81-c728-41b1-ac33-0b7e871ba582/volumes" Nov 25 15:41:07 crc kubenswrapper[4806]: I1125 15:41:07.342294 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tfx9b" Nov 25 15:41:07 crc kubenswrapper[4806]: I1125 15:41:07.342798 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tfx9b" Nov 25 15:41:07 crc kubenswrapper[4806]: I1125 15:41:07.417287 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tfx9b" Nov 25 15:41:08 crc kubenswrapper[4806]: I1125 15:41:08.394486 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tfx9b" Nov 25 15:41:08 crc kubenswrapper[4806]: I1125 15:41:08.461826 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tfx9b"] Nov 25 15:41:10 crc kubenswrapper[4806]: I1125 15:41:10.351436 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tfx9b" podUID="925c2715-cba4-479e-96d2-3de09a9bd1c9" containerName="registry-server" 
containerID="cri-o://595ea62232cb6c16006f118cfa326183e0eb09241793ee3a784d71de11e1a2a1" gracePeriod=2 Nov 25 15:41:10 crc kubenswrapper[4806]: I1125 15:41:10.892886 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tfx9b" Nov 25 15:41:11 crc kubenswrapper[4806]: I1125 15:41:11.095923 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9z2d\" (UniqueName: \"kubernetes.io/projected/925c2715-cba4-479e-96d2-3de09a9bd1c9-kube-api-access-p9z2d\") pod \"925c2715-cba4-479e-96d2-3de09a9bd1c9\" (UID: \"925c2715-cba4-479e-96d2-3de09a9bd1c9\") " Nov 25 15:41:11 crc kubenswrapper[4806]: I1125 15:41:11.096077 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/925c2715-cba4-479e-96d2-3de09a9bd1c9-catalog-content\") pod \"925c2715-cba4-479e-96d2-3de09a9bd1c9\" (UID: \"925c2715-cba4-479e-96d2-3de09a9bd1c9\") " Nov 25 15:41:11 crc kubenswrapper[4806]: I1125 15:41:11.096837 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/925c2715-cba4-479e-96d2-3de09a9bd1c9-utilities\") pod \"925c2715-cba4-479e-96d2-3de09a9bd1c9\" (UID: \"925c2715-cba4-479e-96d2-3de09a9bd1c9\") " Nov 25 15:41:11 crc kubenswrapper[4806]: I1125 15:41:11.097775 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/925c2715-cba4-479e-96d2-3de09a9bd1c9-utilities" (OuterVolumeSpecName: "utilities") pod "925c2715-cba4-479e-96d2-3de09a9bd1c9" (UID: "925c2715-cba4-479e-96d2-3de09a9bd1c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:41:11 crc kubenswrapper[4806]: I1125 15:41:11.106722 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925c2715-cba4-479e-96d2-3de09a9bd1c9-kube-api-access-p9z2d" (OuterVolumeSpecName: "kube-api-access-p9z2d") pod "925c2715-cba4-479e-96d2-3de09a9bd1c9" (UID: "925c2715-cba4-479e-96d2-3de09a9bd1c9"). InnerVolumeSpecName "kube-api-access-p9z2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:41:11 crc kubenswrapper[4806]: I1125 15:41:11.114041 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/925c2715-cba4-479e-96d2-3de09a9bd1c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "925c2715-cba4-479e-96d2-3de09a9bd1c9" (UID: "925c2715-cba4-479e-96d2-3de09a9bd1c9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:41:11 crc kubenswrapper[4806]: I1125 15:41:11.199370 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9z2d\" (UniqueName: \"kubernetes.io/projected/925c2715-cba4-479e-96d2-3de09a9bd1c9-kube-api-access-p9z2d\") on node \"crc\" DevicePath \"\"" Nov 25 15:41:11 crc kubenswrapper[4806]: I1125 15:41:11.199423 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/925c2715-cba4-479e-96d2-3de09a9bd1c9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 15:41:11 crc kubenswrapper[4806]: I1125 15:41:11.199441 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/925c2715-cba4-479e-96d2-3de09a9bd1c9-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 15:41:11 crc kubenswrapper[4806]: I1125 15:41:11.367198 4806 generic.go:334] "Generic (PLEG): container finished" podID="925c2715-cba4-479e-96d2-3de09a9bd1c9" containerID="595ea62232cb6c16006f118cfa326183e0eb09241793ee3a784d71de11e1a2a1" exitCode=0 Nov 25 15:41:11 crc kubenswrapper[4806]: I1125 15:41:11.367248 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tfx9b" event={"ID":"925c2715-cba4-479e-96d2-3de09a9bd1c9","Type":"ContainerDied","Data":"595ea62232cb6c16006f118cfa326183e0eb09241793ee3a784d71de11e1a2a1"} Nov 25 15:41:11 crc kubenswrapper[4806]: I1125 15:41:11.367279 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tfx9b" event={"ID":"925c2715-cba4-479e-96d2-3de09a9bd1c9","Type":"ContainerDied","Data":"7e8423e9cf599e0c92c56f6707bef114f455f30e09d571f6216334ad1b66a7db"} Nov 25 15:41:11 crc kubenswrapper[4806]: I1125 15:41:11.367297 4806 scope.go:117] "RemoveContainer" containerID="595ea62232cb6c16006f118cfa326183e0eb09241793ee3a784d71de11e1a2a1" Nov 25 15:41:11 crc kubenswrapper[4806]: I1125 15:41:11.367495 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tfx9b" Nov 25 15:41:11 crc kubenswrapper[4806]: I1125 15:41:11.396949 4806 scope.go:117] "RemoveContainer" containerID="0f72f0e2a62fdc5bafed85bbb3b4dd8deeb16d2768a0820c2eb65ef443426347" Nov 25 15:41:11 crc kubenswrapper[4806]: I1125 15:41:11.401216 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tfx9b"] Nov 25 15:41:11 crc kubenswrapper[4806]: I1125 15:41:11.420569 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tfx9b"] Nov 25 15:41:11 crc kubenswrapper[4806]: I1125 15:41:11.438254 4806 scope.go:117] "RemoveContainer" containerID="e2797259f490ce174d3afe615d167cbace0d920cb536ef029ad6aba37e5cdbf0" Nov 25 15:41:11 crc kubenswrapper[4806]: I1125 15:41:11.480810 4806 scope.go:117] "RemoveContainer" containerID="595ea62232cb6c16006f118cfa326183e0eb09241793ee3a784d71de11e1a2a1" Nov 25 15:41:11 crc kubenswrapper[4806]: E1125 15:41:11.482102 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"595ea62232cb6c16006f118cfa326183e0eb09241793ee3a784d71de11e1a2a1\": container with ID starting with 595ea62232cb6c16006f118cfa326183e0eb09241793ee3a784d71de11e1a2a1 not found: ID does not exist" containerID="595ea62232cb6c16006f118cfa326183e0eb09241793ee3a784d71de11e1a2a1" Nov 25 15:41:11 crc kubenswrapper[4806]: I1125 15:41:11.482137 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"595ea62232cb6c16006f118cfa326183e0eb09241793ee3a784d71de11e1a2a1"} err="failed to get container status \"595ea62232cb6c16006f118cfa326183e0eb09241793ee3a784d71de11e1a2a1\": rpc error: code = NotFound desc = could not find container \"595ea62232cb6c16006f118cfa326183e0eb09241793ee3a784d71de11e1a2a1\": container with ID starting with 595ea62232cb6c16006f118cfa326183e0eb09241793ee3a784d71de11e1a2a1 not found: ID does not exist" Nov 25 15:41:11 crc kubenswrapper[4806]: I1125 15:41:11.482158 4806 scope.go:117] "RemoveContainer" containerID="0f72f0e2a62fdc5bafed85bbb3b4dd8deeb16d2768a0820c2eb65ef443426347" Nov 25 15:41:11 crc kubenswrapper[4806]: E1125 15:41:11.482655 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f72f0e2a62fdc5bafed85bbb3b4dd8deeb16d2768a0820c2eb65ef443426347\": container with ID starting with 0f72f0e2a62fdc5bafed85bbb3b4dd8deeb16d2768a0820c2eb65ef443426347 not found: ID does not exist" containerID="0f72f0e2a62fdc5bafed85bbb3b4dd8deeb16d2768a0820c2eb65ef443426347" Nov 25 15:41:11 crc kubenswrapper[4806]: I1125 15:41:11.482691 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f72f0e2a62fdc5bafed85bbb3b4dd8deeb16d2768a0820c2eb65ef443426347"} err="failed to get container status \"0f72f0e2a62fdc5bafed85bbb3b4dd8deeb16d2768a0820c2eb65ef443426347\": rpc error: code = NotFound desc = could not find container \"0f72f0e2a62fdc5bafed85bbb3b4dd8deeb16d2768a0820c2eb65ef443426347\": container with ID starting with 0f72f0e2a62fdc5bafed85bbb3b4dd8deeb16d2768a0820c2eb65ef443426347 not found: ID does not exist" Nov 25 15:41:11 crc kubenswrapper[4806]: I1125 15:41:11.482711 4806 scope.go:117] "RemoveContainer" containerID="e2797259f490ce174d3afe615d167cbace0d920cb536ef029ad6aba37e5cdbf0" Nov 25 15:41:11 crc kubenswrapper[4806]: E1125 15:41:11.483060 4806 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e2797259f490ce174d3afe615d167cbace0d920cb536ef029ad6aba37e5cdbf0\": container with ID starting with e2797259f490ce174d3afe615d167cbace0d920cb536ef029ad6aba37e5cdbf0 not found: ID does not exist" containerID="e2797259f490ce174d3afe615d167cbace0d920cb536ef029ad6aba37e5cdbf0" Nov 25 15:41:11 crc kubenswrapper[4806]: I1125 15:41:11.483080 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2797259f490ce174d3afe615d167cbace0d920cb536ef029ad6aba37e5cdbf0"} err="failed to get container status \"e2797259f490ce174d3afe615d167cbace0d920cb536ef029ad6aba37e5cdbf0\": rpc error: code = NotFound desc = could not find container \"e2797259f490ce174d3afe615d167cbace0d920cb536ef029ad6aba37e5cdbf0\": container with ID starting with e2797259f490ce174d3afe615d167cbace0d920cb536ef029ad6aba37e5cdbf0 not found: ID does not exist" Nov 25 15:41:12 crc kubenswrapper[4806]: I1125 15:41:12.101644 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925c2715-cba4-479e-96d2-3de09a9bd1c9" path="/var/lib/kubelet/pods/925c2715-cba4-479e-96d2-3de09a9bd1c9/volumes" Nov 25 15:41:18 crc kubenswrapper[4806]: I1125 15:41:18.936094 4806 patch_prober.go:28] interesting pod/machine-config-daemon-kclf8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:41:18 crc kubenswrapper[4806]: I1125 15:41:18.936722 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:41:48 crc kubenswrapper[4806]: I1125 15:41:48.935005 4806 patch_prober.go:28] interesting pod/machine-config-daemon-kclf8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:41:48 crc kubenswrapper[4806]: I1125 15:41:48.935967 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:41:48 crc kubenswrapper[4806]: I1125 15:41:48.936580 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" Nov 25 15:41:48 crc kubenswrapper[4806]: I1125 15:41:48.939547 4806 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8e92935482a5f92e9ebc3fbbdbdc44dc56af2d1072c382ebac551c11833e7734"} pod="openshift-machine-config-operator/machine-config-daemon-kclf8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 15:41:48 crc kubenswrapper[4806]: I1125 15:41:48.939696 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" 
podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" containerID="cri-o://8e92935482a5f92e9ebc3fbbdbdc44dc56af2d1072c382ebac551c11833e7734" gracePeriod=600 Nov 25 15:41:49 crc kubenswrapper[4806]: I1125 15:41:49.789926 4806 generic.go:334] "Generic (PLEG): container finished" podID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerID="8e92935482a5f92e9ebc3fbbdbdc44dc56af2d1072c382ebac551c11833e7734" exitCode=0 Nov 25 15:41:49 crc kubenswrapper[4806]: I1125 15:41:49.790000 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" event={"ID":"39baff20-1e9a-48b1-8872-155c5ad5931d","Type":"ContainerDied","Data":"8e92935482a5f92e9ebc3fbbdbdc44dc56af2d1072c382ebac551c11833e7734"} Nov 25 15:41:49 crc kubenswrapper[4806]: I1125 15:41:49.790587 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" event={"ID":"39baff20-1e9a-48b1-8872-155c5ad5931d","Type":"ContainerStarted","Data":"c5665d24da59a69058ea2c9b904dc059808ec3dec416e24bf589327eb7f765c5"} Nov 25 15:41:49 crc kubenswrapper[4806]: I1125 15:41:49.790612 4806 scope.go:117] "RemoveContainer" containerID="20ed65ea27bdbc3843bf7c80ddc4dc5177e737e42cad142718c0a7ddba113d44" Nov 25 15:42:03 crc kubenswrapper[4806]: I1125 15:42:03.939530 4806 generic.go:334] "Generic (PLEG): container finished" podID="dc945807-33cb-4f78-9fed-c65adc25aeef" containerID="36867b1046e28d098b3eb1b95304ba190d1e143e58d8061ed61427d430114761" exitCode=0 Nov 25 15:42:03 crc kubenswrapper[4806]: I1125 15:42:03.939767 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qvk7r" event={"ID":"dc945807-33cb-4f78-9fed-c65adc25aeef","Type":"ContainerDied","Data":"36867b1046e28d098b3eb1b95304ba190d1e143e58d8061ed61427d430114761"} Nov 25 15:42:05 crc kubenswrapper[4806]: I1125 15:42:05.533228 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qvk7r" Nov 25 15:42:05 crc kubenswrapper[4806]: I1125 15:42:05.557061 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dc945807-33cb-4f78-9fed-c65adc25aeef-ssh-key\") pod \"dc945807-33cb-4f78-9fed-c65adc25aeef\" (UID: \"dc945807-33cb-4f78-9fed-c65adc25aeef\") " Nov 25 15:42:05 crc kubenswrapper[4806]: I1125 15:42:05.557245 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/dc945807-33cb-4f78-9fed-c65adc25aeef-nova-cell1-compute-config-1\") pod \"dc945807-33cb-4f78-9fed-c65adc25aeef\" (UID: \"dc945807-33cb-4f78-9fed-c65adc25aeef\") " Nov 25 15:42:05 crc kubenswrapper[4806]: I1125 15:42:05.557292 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc945807-33cb-4f78-9fed-c65adc25aeef-nova-combined-ca-bundle\") pod \"dc945807-33cb-4f78-9fed-c65adc25aeef\" (UID: \"dc945807-33cb-4f78-9fed-c65adc25aeef\") " Nov 25 15:42:05 crc kubenswrapper[4806]: I1125 15:42:05.557349 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/dc945807-33cb-4f78-9fed-c65adc25aeef-nova-migration-ssh-key-0\") pod \"dc945807-33cb-4f78-9fed-c65adc25aeef\" (UID: \"dc945807-33cb-4f78-9fed-c65adc25aeef\") " Nov 25 15:42:05 crc kubenswrapper[4806]: I1125 15:42:05.557426 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/dc945807-33cb-4f78-9fed-c65adc25aeef-nova-migration-ssh-key-1\") pod \"dc945807-33cb-4f78-9fed-c65adc25aeef\" (UID: \"dc945807-33cb-4f78-9fed-c65adc25aeef\") " Nov 25 15:42:05 crc kubenswrapper[4806]: I1125 15:42:05.557521 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/dc945807-33cb-4f78-9fed-c65adc25aeef-nova-cell1-compute-config-0\") pod \"dc945807-33cb-4f78-9fed-c65adc25aeef\" (UID: \"dc945807-33cb-4f78-9fed-c65adc25aeef\") " Nov 25 15:42:05 crc kubenswrapper[4806]: I1125 15:42:05.557548 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/dc945807-33cb-4f78-9fed-c65adc25aeef-nova-extra-config-0\") pod \"dc945807-33cb-4f78-9fed-c65adc25aeef\" (UID: \"dc945807-33cb-4f78-9fed-c65adc25aeef\") " Nov 25 15:42:05 crc kubenswrapper[4806]: I1125 15:42:05.557577 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc945807-33cb-4f78-9fed-c65adc25aeef-inventory\") pod \"dc945807-33cb-4f78-9fed-c65adc25aeef\" (UID: \"dc945807-33cb-4f78-9fed-c65adc25aeef\") " Nov 25 15:42:05 crc kubenswrapper[4806]: I1125 15:42:05.557613 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4j2l2\" (UniqueName: \"kubernetes.io/projected/dc945807-33cb-4f78-9fed-c65adc25aeef-kube-api-access-4j2l2\") pod \"dc945807-33cb-4f78-9fed-c65adc25aeef\" (UID: \"dc945807-33cb-4f78-9fed-c65adc25aeef\") " Nov 25 15:42:05 crc kubenswrapper[4806]: I1125 15:42:05.583027 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/dc945807-33cb-4f78-9fed-c65adc25aeef-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "dc945807-33cb-4f78-9fed-c65adc25aeef" (UID: "dc945807-33cb-4f78-9fed-c65adc25aeef"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:42:05 crc kubenswrapper[4806]: I1125 15:42:05.589618 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc945807-33cb-4f78-9fed-c65adc25aeef-kube-api-access-4j2l2" (OuterVolumeSpecName: "kube-api-access-4j2l2") pod "dc945807-33cb-4f78-9fed-c65adc25aeef" (UID: "dc945807-33cb-4f78-9fed-c65adc25aeef"). InnerVolumeSpecName "kube-api-access-4j2l2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:42:05 crc kubenswrapper[4806]: I1125 15:42:05.605745 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc945807-33cb-4f78-9fed-c65adc25aeef-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "dc945807-33cb-4f78-9fed-c65adc25aeef" (UID: "dc945807-33cb-4f78-9fed-c65adc25aeef"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:42:05 crc kubenswrapper[4806]: I1125 15:42:05.614553 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc945807-33cb-4f78-9fed-c65adc25aeef-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "dc945807-33cb-4f78-9fed-c65adc25aeef" (UID: "dc945807-33cb-4f78-9fed-c65adc25aeef"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:42:05 crc kubenswrapper[4806]: I1125 15:42:05.623839 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc945807-33cb-4f78-9fed-c65adc25aeef-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "dc945807-33cb-4f78-9fed-c65adc25aeef" (UID: "dc945807-33cb-4f78-9fed-c65adc25aeef"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:42:05 crc kubenswrapper[4806]: I1125 15:42:05.628602 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc945807-33cb-4f78-9fed-c65adc25aeef-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "dc945807-33cb-4f78-9fed-c65adc25aeef" (UID: "dc945807-33cb-4f78-9fed-c65adc25aeef"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:42:05 crc kubenswrapper[4806]: I1125 15:42:05.630459 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc945807-33cb-4f78-9fed-c65adc25aeef-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "dc945807-33cb-4f78-9fed-c65adc25aeef" (UID: "dc945807-33cb-4f78-9fed-c65adc25aeef"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:42:05 crc kubenswrapper[4806]: I1125 15:42:05.630514 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc945807-33cb-4f78-9fed-c65adc25aeef-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "dc945807-33cb-4f78-9fed-c65adc25aeef" (UID: "dc945807-33cb-4f78-9fed-c65adc25aeef"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:42:05 crc kubenswrapper[4806]: I1125 15:42:05.635434 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc945807-33cb-4f78-9fed-c65adc25aeef-inventory" (OuterVolumeSpecName: "inventory") pod "dc945807-33cb-4f78-9fed-c65adc25aeef" (UID: "dc945807-33cb-4f78-9fed-c65adc25aeef"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:42:05 crc kubenswrapper[4806]: I1125 15:42:05.660407 4806 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/dc945807-33cb-4f78-9fed-c65adc25aeef-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Nov 25 15:42:05 crc kubenswrapper[4806]: I1125 15:42:05.660440 4806 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/dc945807-33cb-4f78-9fed-c65adc25aeef-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:42:05 crc kubenswrapper[4806]: I1125 15:42:05.660452 4806 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/dc945807-33cb-4f78-9fed-c65adc25aeef-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:42:05 crc kubenswrapper[4806]: I1125 15:42:05.660464 4806 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dc945807-33cb-4f78-9fed-c65adc25aeef-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 15:42:05 crc kubenswrapper[4806]: I1125 15:42:05.660473 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4j2l2\" (UniqueName: \"kubernetes.io/projected/dc945807-33cb-4f78-9fed-c65adc25aeef-kube-api-access-4j2l2\") on node \"crc\" DevicePath \"\"" Nov 25 15:42:05 crc kubenswrapper[4806]: I1125 15:42:05.660482 4806 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dc945807-33cb-4f78-9fed-c65adc25aeef-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 15:42:05 crc kubenswrapper[4806]: I1125 15:42:05.660490 4806 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/dc945807-33cb-4f78-9fed-c65adc25aeef-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Nov 25 15:42:05 crc kubenswrapper[4806]: I1125 15:42:05.660497 4806 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc945807-33cb-4f78-9fed-c65adc25aeef-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:42:05 crc kubenswrapper[4806]: I1125 15:42:05.660505 4806 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/dc945807-33cb-4f78-9fed-c65adc25aeef-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:42:05 crc kubenswrapper[4806]: I1125 15:42:05.979911 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qvk7r" event={"ID":"dc945807-33cb-4f78-9fed-c65adc25aeef","Type":"ContainerDied","Data":"ee2336e1e855224f491c34e0a3b53fcec5cc7e62be285c607bd0630272ab7493"} Nov 25 15:42:05 crc kubenswrapper[4806]: I1125 15:42:05.979950 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee2336e1e855224f491c34e0a3b53fcec5cc7e62be285c607bd0630272ab7493" Nov 25 15:42:05 crc 
kubenswrapper[4806]: I1125 15:42:05.979953 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-qvk7r" Nov 25 15:42:06 crc kubenswrapper[4806]: I1125 15:42:06.082179 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4"] Nov 25 15:42:06 crc kubenswrapper[4806]: E1125 15:42:06.082710 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="925c2715-cba4-479e-96d2-3de09a9bd1c9" containerName="extract-utilities" Nov 25 15:42:06 crc kubenswrapper[4806]: I1125 15:42:06.082731 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="925c2715-cba4-479e-96d2-3de09a9bd1c9" containerName="extract-utilities" Nov 25 15:42:06 crc kubenswrapper[4806]: E1125 15:42:06.082746 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="925c2715-cba4-479e-96d2-3de09a9bd1c9" containerName="extract-content" Nov 25 15:42:06 crc kubenswrapper[4806]: I1125 15:42:06.082753 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="925c2715-cba4-479e-96d2-3de09a9bd1c9" containerName="extract-content" Nov 25 15:42:06 crc kubenswrapper[4806]: E1125 15:42:06.082768 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc945807-33cb-4f78-9fed-c65adc25aeef" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 25 15:42:06 crc kubenswrapper[4806]: I1125 15:42:06.082776 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc945807-33cb-4f78-9fed-c65adc25aeef" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 25 15:42:06 crc kubenswrapper[4806]: E1125 15:42:06.082789 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0090cf81-c728-41b1-ac33-0b7e871ba582" containerName="registry-server" Nov 25 15:42:06 crc kubenswrapper[4806]: I1125 15:42:06.082795 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="0090cf81-c728-41b1-ac33-0b7e871ba582" containerName="registry-server" Nov 25 15:42:06 crc kubenswrapper[4806]: E1125 15:42:06.082833 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0090cf81-c728-41b1-ac33-0b7e871ba582" containerName="extract-content" Nov 25 15:42:06 crc kubenswrapper[4806]: I1125 15:42:06.082840 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="0090cf81-c728-41b1-ac33-0b7e871ba582" containerName="extract-content" Nov 25 15:42:06 crc kubenswrapper[4806]: E1125 15:42:06.082903 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0090cf81-c728-41b1-ac33-0b7e871ba582" containerName="extract-utilities" Nov 25 15:42:06 crc kubenswrapper[4806]: I1125 15:42:06.082910 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="0090cf81-c728-41b1-ac33-0b7e871ba582" containerName="extract-utilities" Nov 25 15:42:06 crc kubenswrapper[4806]: E1125 15:42:06.082921 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="925c2715-cba4-479e-96d2-3de09a9bd1c9" containerName="registry-server" Nov 25 15:42:06 crc kubenswrapper[4806]: I1125 15:42:06.082927 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="925c2715-cba4-479e-96d2-3de09a9bd1c9" containerName="registry-server" Nov 25 15:42:06 crc kubenswrapper[4806]: I1125 15:42:06.083218 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc945807-33cb-4f78-9fed-c65adc25aeef" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 25 15:42:06 crc kubenswrapper[4806]: I1125 15:42:06.083231 4806 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="0090cf81-c728-41b1-ac33-0b7e871ba582" containerName="registry-server" Nov 25 15:42:06 crc kubenswrapper[4806]: I1125 15:42:06.083242 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="925c2715-cba4-479e-96d2-3de09a9bd1c9" containerName="registry-server" Nov 25 15:42:06 crc kubenswrapper[4806]: I1125 15:42:06.084101 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4" Nov 25 15:42:06 crc kubenswrapper[4806]: I1125 15:42:06.089361 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-r8q8k" Nov 25 15:42:06 crc kubenswrapper[4806]: I1125 15:42:06.089360 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Nov 25 15:42:06 crc kubenswrapper[4806]: I1125 15:42:06.089490 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 15:42:06 crc kubenswrapper[4806]: I1125 15:42:06.089516 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 15:42:06 crc kubenswrapper[4806]: I1125 15:42:06.089897 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 15:42:06 crc kubenswrapper[4806]: I1125 15:42:06.102820 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4"] Nov 25 15:42:06 crc kubenswrapper[4806]: I1125 15:42:06.170154 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6e3bb0ce-18a1-49d0-aff6-4d45985913a6-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4\" (UID: \"6e3bb0ce-18a1-49d0-aff6-4d45985913a6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4" Nov 25 15:42:06 crc kubenswrapper[4806]: I1125 15:42:06.170238 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6e3bb0ce-18a1-49d0-aff6-4d45985913a6-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4\" (UID: \"6e3bb0ce-18a1-49d0-aff6-4d45985913a6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4" Nov 25 15:42:06 crc kubenswrapper[4806]: I1125 15:42:06.170286 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6e3bb0ce-18a1-49d0-aff6-4d45985913a6-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4\" (UID: \"6e3bb0ce-18a1-49d0-aff6-4d45985913a6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4" Nov 25 15:42:06 crc kubenswrapper[4806]: I1125 15:42:06.170349 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6e3bb0ce-18a1-49d0-aff6-4d45985913a6-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4\" (UID: \"6e3bb0ce-18a1-49d0-aff6-4d45985913a6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4" Nov 25 15:42:06 crc kubenswrapper[4806]: I1125 15:42:06.170486 4806 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv89z\" (UniqueName: \"kubernetes.io/projected/6e3bb0ce-18a1-49d0-aff6-4d45985913a6-kube-api-access-hv89z\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4\" (UID: \"6e3bb0ce-18a1-49d0-aff6-4d45985913a6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4" Nov 25 15:42:06 crc kubenswrapper[4806]: I1125 15:42:06.170554 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e3bb0ce-18a1-49d0-aff6-4d45985913a6-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4\" (UID: \"6e3bb0ce-18a1-49d0-aff6-4d45985913a6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4" Nov 25 15:42:06 crc kubenswrapper[4806]: I1125 15:42:06.170698 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/6e3bb0ce-18a1-49d0-aff6-4d45985913a6-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4\" (UID: \"6e3bb0ce-18a1-49d0-aff6-4d45985913a6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4" Nov 25 15:42:06 crc kubenswrapper[4806]: I1125 15:42:06.272887 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e3bb0ce-18a1-49d0-aff6-4d45985913a6-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4\" (UID: \"6e3bb0ce-18a1-49d0-aff6-4d45985913a6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4" Nov 25 15:42:06 crc kubenswrapper[4806]: I1125 15:42:06.272989 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/6e3bb0ce-18a1-49d0-aff6-4d45985913a6-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4\" (UID: \"6e3bb0ce-18a1-49d0-aff6-4d45985913a6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4" Nov 25 15:42:06 crc kubenswrapper[4806]: I1125 15:42:06.273106 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6e3bb0ce-18a1-49d0-aff6-4d45985913a6-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4\" (UID: \"6e3bb0ce-18a1-49d0-aff6-4d45985913a6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4" Nov 25 15:42:06 crc kubenswrapper[4806]: I1125 15:42:06.273126 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6e3bb0ce-18a1-49d0-aff6-4d45985913a6-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4\" (UID: \"6e3bb0ce-18a1-49d0-aff6-4d45985913a6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4" Nov 25 15:42:06 crc kubenswrapper[4806]: I1125 15:42:06.273151 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6e3bb0ce-18a1-49d0-aff6-4d45985913a6-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4\" (UID: \"6e3bb0ce-18a1-49d0-aff6-4d45985913a6\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4" Nov 25 15:42:06 crc kubenswrapper[4806]: I1125 15:42:06.273174 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6e3bb0ce-18a1-49d0-aff6-4d45985913a6-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4\" (UID: \"6e3bb0ce-18a1-49d0-aff6-4d45985913a6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4" Nov 25 15:42:06 crc kubenswrapper[4806]: I1125 15:42:06.273211 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hv89z\" (UniqueName: \"kubernetes.io/projected/6e3bb0ce-18a1-49d0-aff6-4d45985913a6-kube-api-access-hv89z\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4\" (UID: \"6e3bb0ce-18a1-49d0-aff6-4d45985913a6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4" Nov 25 15:42:06 crc kubenswrapper[4806]: I1125 15:42:06.279238 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/6e3bb0ce-18a1-49d0-aff6-4d45985913a6-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4\" (UID: \"6e3bb0ce-18a1-49d0-aff6-4d45985913a6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4" Nov 25 15:42:06 crc kubenswrapper[4806]: I1125 15:42:06.280720 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e3bb0ce-18a1-49d0-aff6-4d45985913a6-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4\" (UID: \"6e3bb0ce-18a1-49d0-aff6-4d45985913a6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4" Nov 25 15:42:06 crc kubenswrapper[4806]: I1125 15:42:06.280727 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6e3bb0ce-18a1-49d0-aff6-4d45985913a6-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4\" (UID: \"6e3bb0ce-18a1-49d0-aff6-4d45985913a6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4" Nov 25 15:42:06 crc kubenswrapper[4806]: I1125 15:42:06.281609 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6e3bb0ce-18a1-49d0-aff6-4d45985913a6-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4\" (UID: \"6e3bb0ce-18a1-49d0-aff6-4d45985913a6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4" Nov 25 15:42:06 crc kubenswrapper[4806]: I1125 15:42:06.283089 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6e3bb0ce-18a1-49d0-aff6-4d45985913a6-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4\" (UID: \"6e3bb0ce-18a1-49d0-aff6-4d45985913a6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4" Nov 25 15:42:06 crc kubenswrapper[4806]: I1125 15:42:06.286508 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6e3bb0ce-18a1-49d0-aff6-4d45985913a6-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4\" (UID: 
\"6e3bb0ce-18a1-49d0-aff6-4d45985913a6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4" Nov 25 15:42:06 crc kubenswrapper[4806]: I1125 15:42:06.290087 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hv89z\" (UniqueName: \"kubernetes.io/projected/6e3bb0ce-18a1-49d0-aff6-4d45985913a6-kube-api-access-hv89z\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4\" (UID: \"6e3bb0ce-18a1-49d0-aff6-4d45985913a6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4" Nov 25 15:42:06 crc kubenswrapper[4806]: I1125 15:42:06.409521 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4" Nov 25 15:42:07 crc kubenswrapper[4806]: I1125 15:42:07.067650 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4"] Nov 25 15:42:08 crc kubenswrapper[4806]: I1125 15:42:08.008524 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4" event={"ID":"6e3bb0ce-18a1-49d0-aff6-4d45985913a6","Type":"ContainerStarted","Data":"7a1f0ee476294b4cd76b0c88561e1ecad7ae9584312324827869b06eba951792"} Nov 25 15:42:08 crc kubenswrapper[4806]: I1125 15:42:08.009126 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4" event={"ID":"6e3bb0ce-18a1-49d0-aff6-4d45985913a6","Type":"ContainerStarted","Data":"4cb1e158807495c98d09b354ed755174c2e940e968ec4d6210b2ac8512a86be1"} Nov 25 15:42:08 crc kubenswrapper[4806]: I1125 15:42:08.032998 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4" podStartSLOduration=1.57196916 podStartE2EDuration="2.032973989s" podCreationTimestamp="2025-11-25 15:42:06 +0000 UTC" firstStartedPulling="2025-11-25 15:42:07.082553167 +0000 UTC m=+2959.734695578" lastFinishedPulling="2025-11-25 15:42:07.543557986 +0000 UTC m=+2960.195700407" observedRunningTime="2025-11-25 15:42:08.025121364 +0000 UTC m=+2960.677263795" watchObservedRunningTime="2025-11-25 15:42:08.032973989 +0000 UTC m=+2960.685116400" Nov 25 15:44:18 crc kubenswrapper[4806]: I1125 15:44:18.934898 4806 patch_prober.go:28] interesting pod/machine-config-daemon-kclf8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:44:18 crc kubenswrapper[4806]: I1125 15:44:18.935523 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:44:27 crc kubenswrapper[4806]: I1125 15:44:27.936994 4806 generic.go:334] "Generic (PLEG): container finished" podID="6e3bb0ce-18a1-49d0-aff6-4d45985913a6" containerID="7a1f0ee476294b4cd76b0c88561e1ecad7ae9584312324827869b06eba951792" exitCode=0 Nov 25 15:44:27 crc kubenswrapper[4806]: I1125 15:44:27.937158 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4" 
event={"ID":"6e3bb0ce-18a1-49d0-aff6-4d45985913a6","Type":"ContainerDied","Data":"7a1f0ee476294b4cd76b0c88561e1ecad7ae9584312324827869b06eba951792"} Nov 25 15:44:29 crc kubenswrapper[4806]: I1125 15:44:29.504284 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4" Nov 25 15:44:29 crc kubenswrapper[4806]: I1125 15:44:29.689670 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6e3bb0ce-18a1-49d0-aff6-4d45985913a6-ceilometer-compute-config-data-1\") pod \"6e3bb0ce-18a1-49d0-aff6-4d45985913a6\" (UID: \"6e3bb0ce-18a1-49d0-aff6-4d45985913a6\") " Nov 25 15:44:29 crc kubenswrapper[4806]: I1125 15:44:29.689720 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/6e3bb0ce-18a1-49d0-aff6-4d45985913a6-ceilometer-compute-config-data-2\") pod \"6e3bb0ce-18a1-49d0-aff6-4d45985913a6\" (UID: \"6e3bb0ce-18a1-49d0-aff6-4d45985913a6\") " Nov 25 15:44:29 crc kubenswrapper[4806]: I1125 15:44:29.689810 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6e3bb0ce-18a1-49d0-aff6-4d45985913a6-ssh-key\") pod \"6e3bb0ce-18a1-49d0-aff6-4d45985913a6\" (UID: \"6e3bb0ce-18a1-49d0-aff6-4d45985913a6\") " Nov 25 15:44:29 crc kubenswrapper[4806]: I1125 15:44:29.689838 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6e3bb0ce-18a1-49d0-aff6-4d45985913a6-ceilometer-compute-config-data-0\") pod \"6e3bb0ce-18a1-49d0-aff6-4d45985913a6\" (UID: \"6e3bb0ce-18a1-49d0-aff6-4d45985913a6\") " Nov 25 15:44:29 crc kubenswrapper[4806]: I1125 15:44:29.689908 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6e3bb0ce-18a1-49d0-aff6-4d45985913a6-inventory\") pod \"6e3bb0ce-18a1-49d0-aff6-4d45985913a6\" (UID: \"6e3bb0ce-18a1-49d0-aff6-4d45985913a6\") " Nov 25 15:44:29 crc kubenswrapper[4806]: I1125 15:44:29.689944 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e3bb0ce-18a1-49d0-aff6-4d45985913a6-telemetry-combined-ca-bundle\") pod \"6e3bb0ce-18a1-49d0-aff6-4d45985913a6\" (UID: \"6e3bb0ce-18a1-49d0-aff6-4d45985913a6\") " Nov 25 15:44:29 crc kubenswrapper[4806]: I1125 15:44:29.689979 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hv89z\" (UniqueName: \"kubernetes.io/projected/6e3bb0ce-18a1-49d0-aff6-4d45985913a6-kube-api-access-hv89z\") pod \"6e3bb0ce-18a1-49d0-aff6-4d45985913a6\" (UID: \"6e3bb0ce-18a1-49d0-aff6-4d45985913a6\") " Nov 25 15:44:29 crc kubenswrapper[4806]: I1125 15:44:29.703857 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e3bb0ce-18a1-49d0-aff6-4d45985913a6-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "6e3bb0ce-18a1-49d0-aff6-4d45985913a6" (UID: "6e3bb0ce-18a1-49d0-aff6-4d45985913a6"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:44:29 crc kubenswrapper[4806]: I1125 15:44:29.704010 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e3bb0ce-18a1-49d0-aff6-4d45985913a6-kube-api-access-hv89z" (OuterVolumeSpecName: "kube-api-access-hv89z") pod "6e3bb0ce-18a1-49d0-aff6-4d45985913a6" (UID: "6e3bb0ce-18a1-49d0-aff6-4d45985913a6"). InnerVolumeSpecName "kube-api-access-hv89z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:44:29 crc kubenswrapper[4806]: I1125 15:44:29.720576 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e3bb0ce-18a1-49d0-aff6-4d45985913a6-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "6e3bb0ce-18a1-49d0-aff6-4d45985913a6" (UID: "6e3bb0ce-18a1-49d0-aff6-4d45985913a6"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:44:29 crc kubenswrapper[4806]: I1125 15:44:29.720626 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e3bb0ce-18a1-49d0-aff6-4d45985913a6-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6e3bb0ce-18a1-49d0-aff6-4d45985913a6" (UID: "6e3bb0ce-18a1-49d0-aff6-4d45985913a6"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:44:29 crc kubenswrapper[4806]: I1125 15:44:29.726456 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e3bb0ce-18a1-49d0-aff6-4d45985913a6-inventory" (OuterVolumeSpecName: "inventory") pod "6e3bb0ce-18a1-49d0-aff6-4d45985913a6" (UID: "6e3bb0ce-18a1-49d0-aff6-4d45985913a6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:44:29 crc kubenswrapper[4806]: I1125 15:44:29.726825 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e3bb0ce-18a1-49d0-aff6-4d45985913a6-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "6e3bb0ce-18a1-49d0-aff6-4d45985913a6" (UID: "6e3bb0ce-18a1-49d0-aff6-4d45985913a6"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:44:29 crc kubenswrapper[4806]: I1125 15:44:29.732993 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e3bb0ce-18a1-49d0-aff6-4d45985913a6-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "6e3bb0ce-18a1-49d0-aff6-4d45985913a6" (UID: "6e3bb0ce-18a1-49d0-aff6-4d45985913a6"). InnerVolumeSpecName "ceilometer-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:44:29 crc kubenswrapper[4806]: I1125 15:44:29.792929 4806 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6e3bb0ce-18a1-49d0-aff6-4d45985913a6-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 25 15:44:29 crc kubenswrapper[4806]: I1125 15:44:29.792976 4806 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/6e3bb0ce-18a1-49d0-aff6-4d45985913a6-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Nov 25 15:44:29 crc kubenswrapper[4806]: I1125 15:44:29.792991 4806 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6e3bb0ce-18a1-49d0-aff6-4d45985913a6-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 15:44:29 crc kubenswrapper[4806]: I1125 15:44:29.793003 4806 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6e3bb0ce-18a1-49d0-aff6-4d45985913a6-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:44:29 crc kubenswrapper[4806]: I1125 15:44:29.793016 4806 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6e3bb0ce-18a1-49d0-aff6-4d45985913a6-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 15:44:29 crc kubenswrapper[4806]: I1125 15:44:29.793030 4806 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e3bb0ce-18a1-49d0-aff6-4d45985913a6-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:44:29 crc kubenswrapper[4806]: I1125 15:44:29.793071 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hv89z\" (UniqueName: \"kubernetes.io/projected/6e3bb0ce-18a1-49d0-aff6-4d45985913a6-kube-api-access-hv89z\") on node \"crc\" DevicePath \"\"" Nov 25 15:44:29 crc kubenswrapper[4806]: I1125 15:44:29.967746 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4" event={"ID":"6e3bb0ce-18a1-49d0-aff6-4d45985913a6","Type":"ContainerDied","Data":"4cb1e158807495c98d09b354ed755174c2e940e968ec4d6210b2ac8512a86be1"} Nov 25 15:44:29 crc kubenswrapper[4806]: I1125 15:44:29.967784 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cb1e158807495c98d09b354ed755174c2e940e968ec4d6210b2ac8512a86be1" Nov 25 15:44:29 crc kubenswrapper[4806]: I1125 15:44:29.967815 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4" Nov 25 15:44:39 crc kubenswrapper[4806]: I1125 15:44:39.406086 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2dkkk"] Nov 25 15:44:39 crc kubenswrapper[4806]: E1125 15:44:39.407171 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e3bb0ce-18a1-49d0-aff6-4d45985913a6" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 25 15:44:39 crc kubenswrapper[4806]: I1125 15:44:39.407188 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e3bb0ce-18a1-49d0-aff6-4d45985913a6" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 25 15:44:39 crc kubenswrapper[4806]: I1125 15:44:39.407453 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e3bb0ce-18a1-49d0-aff6-4d45985913a6" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 25 15:44:39 crc kubenswrapper[4806]: I1125 15:44:39.409247 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2dkkk" Nov 25 15:44:39 crc kubenswrapper[4806]: I1125 15:44:39.416194 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2dkkk"] Nov 25 15:44:39 crc kubenswrapper[4806]: I1125 15:44:39.510815 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bda53d78-ceb5-42b1-bf12-c685966940b9-catalog-content\") pod \"community-operators-2dkkk\" (UID: \"bda53d78-ceb5-42b1-bf12-c685966940b9\") " pod="openshift-marketplace/community-operators-2dkkk" Nov 25 15:44:39 crc kubenswrapper[4806]: I1125 15:44:39.510927 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72lsp\" (UniqueName: \"kubernetes.io/projected/bda53d78-ceb5-42b1-bf12-c685966940b9-kube-api-access-72lsp\") pod \"community-operators-2dkkk\" (UID: \"bda53d78-ceb5-42b1-bf12-c685966940b9\") " pod="openshift-marketplace/community-operators-2dkkk" Nov 25 15:44:39 crc kubenswrapper[4806]: I1125 15:44:39.511252 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bda53d78-ceb5-42b1-bf12-c685966940b9-utilities\") pod \"community-operators-2dkkk\" (UID: \"bda53d78-ceb5-42b1-bf12-c685966940b9\") " pod="openshift-marketplace/community-operators-2dkkk" Nov 25 15:44:39 crc kubenswrapper[4806]: I1125 15:44:39.613053 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72lsp\" (UniqueName: \"kubernetes.io/projected/bda53d78-ceb5-42b1-bf12-c685966940b9-kube-api-access-72lsp\") pod \"community-operators-2dkkk\" (UID: \"bda53d78-ceb5-42b1-bf12-c685966940b9\") " pod="openshift-marketplace/community-operators-2dkkk" Nov 25 15:44:39 crc kubenswrapper[4806]: I1125 15:44:39.613170 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bda53d78-ceb5-42b1-bf12-c685966940b9-utilities\") pod \"community-operators-2dkkk\" (UID: \"bda53d78-ceb5-42b1-bf12-c685966940b9\") " pod="openshift-marketplace/community-operators-2dkkk" Nov 25 15:44:39 crc kubenswrapper[4806]: I1125 15:44:39.613248 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/bda53d78-ceb5-42b1-bf12-c685966940b9-catalog-content\") pod \"community-operators-2dkkk\" (UID: \"bda53d78-ceb5-42b1-bf12-c685966940b9\") " pod="openshift-marketplace/community-operators-2dkkk" Nov 25 15:44:39 crc kubenswrapper[4806]: I1125 15:44:39.613928 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bda53d78-ceb5-42b1-bf12-c685966940b9-utilities\") pod \"community-operators-2dkkk\" (UID: \"bda53d78-ceb5-42b1-bf12-c685966940b9\") " pod="openshift-marketplace/community-operators-2dkkk" Nov 25 15:44:39 crc kubenswrapper[4806]: I1125 15:44:39.614278 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bda53d78-ceb5-42b1-bf12-c685966940b9-catalog-content\") pod \"community-operators-2dkkk\" (UID: \"bda53d78-ceb5-42b1-bf12-c685966940b9\") " pod="openshift-marketplace/community-operators-2dkkk" Nov 25 15:44:39 crc kubenswrapper[4806]: I1125 15:44:39.638603 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72lsp\" (UniqueName: \"kubernetes.io/projected/bda53d78-ceb5-42b1-bf12-c685966940b9-kube-api-access-72lsp\") pod \"community-operators-2dkkk\" (UID: \"bda53d78-ceb5-42b1-bf12-c685966940b9\") " pod="openshift-marketplace/community-operators-2dkkk" Nov 25 15:44:39 crc kubenswrapper[4806]: I1125 15:44:39.735608 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2dkkk" Nov 25 15:44:40 crc kubenswrapper[4806]: I1125 15:44:40.223636 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2dkkk"] Nov 25 15:44:41 crc kubenswrapper[4806]: I1125 15:44:41.116437 4806 generic.go:334] "Generic (PLEG): container finished" podID="bda53d78-ceb5-42b1-bf12-c685966940b9" containerID="d3539541e6dc16826f620024c07f499aa9284fe476aefa727d1be0aa594b805a" exitCode=0 Nov 25 15:44:41 crc kubenswrapper[4806]: I1125 15:44:41.116515 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2dkkk" event={"ID":"bda53d78-ceb5-42b1-bf12-c685966940b9","Type":"ContainerDied","Data":"d3539541e6dc16826f620024c07f499aa9284fe476aefa727d1be0aa594b805a"} Nov 25 15:44:41 crc kubenswrapper[4806]: I1125 15:44:41.116819 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2dkkk" event={"ID":"bda53d78-ceb5-42b1-bf12-c685966940b9","Type":"ContainerStarted","Data":"47dcd47ebc42018725981b3f4a88d8909fa86b0a537fd5b36edfdc5c2fd4e384"} Nov 25 15:44:41 crc kubenswrapper[4806]: I1125 15:44:41.119140 4806 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 15:44:43 crc kubenswrapper[4806]: I1125 15:44:43.147888 4806 generic.go:334] "Generic (PLEG): container finished" podID="bda53d78-ceb5-42b1-bf12-c685966940b9" containerID="f2da0265f92a0d814c31de692ef4fdf015c291de15ee7329b67e786c606c0dc3" exitCode=0 Nov 25 15:44:43 crc kubenswrapper[4806]: I1125 15:44:43.147992 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2dkkk" event={"ID":"bda53d78-ceb5-42b1-bf12-c685966940b9","Type":"ContainerDied","Data":"f2da0265f92a0d814c31de692ef4fdf015c291de15ee7329b67e786c606c0dc3"} Nov 25 15:44:45 crc kubenswrapper[4806]: I1125 15:44:45.178172 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-2dkkk" event={"ID":"bda53d78-ceb5-42b1-bf12-c685966940b9","Type":"ContainerStarted","Data":"1c78177826840d4b5f8836da2d4a869a6337a2e3ef3c07eda0f9e1bcc73fc9a0"} Nov 25 15:44:45 crc kubenswrapper[4806]: I1125 15:44:45.203177 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2dkkk" podStartSLOduration=2.651390674 podStartE2EDuration="6.203143022s" podCreationTimestamp="2025-11-25 15:44:39 +0000 UTC" firstStartedPulling="2025-11-25 15:44:41.118917168 +0000 UTC m=+3113.771059579" lastFinishedPulling="2025-11-25 15:44:44.670669496 +0000 UTC m=+3117.322811927" observedRunningTime="2025-11-25 15:44:45.19777924 +0000 UTC m=+3117.849921671" watchObservedRunningTime="2025-11-25 15:44:45.203143022 +0000 UTC m=+3117.855285473" Nov 25 15:44:48 crc kubenswrapper[4806]: I1125 15:44:48.936621 4806 patch_prober.go:28] interesting pod/machine-config-daemon-kclf8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:44:48 crc kubenswrapper[4806]: I1125 15:44:48.937159 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:44:49 crc kubenswrapper[4806]: I1125 15:44:49.736025 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2dkkk" Nov 25 15:44:49 crc kubenswrapper[4806]: I1125 15:44:49.736552 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2dkkk" Nov 25 15:44:49 crc kubenswrapper[4806]: I1125 15:44:49.816642 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2dkkk" Nov 25 15:44:50 crc kubenswrapper[4806]: I1125 15:44:50.313064 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2dkkk" Nov 25 15:44:53 crc kubenswrapper[4806]: I1125 15:44:53.197341 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2dkkk"] Nov 25 15:44:53 crc kubenswrapper[4806]: I1125 15:44:53.272831 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2dkkk" podUID="bda53d78-ceb5-42b1-bf12-c685966940b9" containerName="registry-server" containerID="cri-o://1c78177826840d4b5f8836da2d4a869a6337a2e3ef3c07eda0f9e1bcc73fc9a0" gracePeriod=2 Nov 25 15:44:53 crc kubenswrapper[4806]: I1125 15:44:53.941568 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2dkkk" Nov 25 15:44:54 crc kubenswrapper[4806]: I1125 15:44:54.057068 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72lsp\" (UniqueName: \"kubernetes.io/projected/bda53d78-ceb5-42b1-bf12-c685966940b9-kube-api-access-72lsp\") pod \"bda53d78-ceb5-42b1-bf12-c685966940b9\" (UID: \"bda53d78-ceb5-42b1-bf12-c685966940b9\") " Nov 25 15:44:54 crc kubenswrapper[4806]: I1125 15:44:54.057160 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bda53d78-ceb5-42b1-bf12-c685966940b9-utilities\") pod \"bda53d78-ceb5-42b1-bf12-c685966940b9\" (UID: \"bda53d78-ceb5-42b1-bf12-c685966940b9\") " Nov 25 15:44:54 crc kubenswrapper[4806]: I1125 15:44:54.057400 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bda53d78-ceb5-42b1-bf12-c685966940b9-catalog-content\") pod \"bda53d78-ceb5-42b1-bf12-c685966940b9\" (UID: \"bda53d78-ceb5-42b1-bf12-c685966940b9\") " Nov 25 15:44:54 crc kubenswrapper[4806]: I1125 15:44:54.058836 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bda53d78-ceb5-42b1-bf12-c685966940b9-utilities" (OuterVolumeSpecName: "utilities") pod "bda53d78-ceb5-42b1-bf12-c685966940b9" (UID: "bda53d78-ceb5-42b1-bf12-c685966940b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:44:54 crc kubenswrapper[4806]: I1125 15:44:54.064857 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bda53d78-ceb5-42b1-bf12-c685966940b9-kube-api-access-72lsp" (OuterVolumeSpecName: "kube-api-access-72lsp") pod "bda53d78-ceb5-42b1-bf12-c685966940b9" (UID: "bda53d78-ceb5-42b1-bf12-c685966940b9"). InnerVolumeSpecName "kube-api-access-72lsp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:44:54 crc kubenswrapper[4806]: I1125 15:44:54.112166 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bda53d78-ceb5-42b1-bf12-c685966940b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bda53d78-ceb5-42b1-bf12-c685966940b9" (UID: "bda53d78-ceb5-42b1-bf12-c685966940b9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:44:54 crc kubenswrapper[4806]: I1125 15:44:54.161111 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bda53d78-ceb5-42b1-bf12-c685966940b9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 15:44:54 crc kubenswrapper[4806]: I1125 15:44:54.161139 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72lsp\" (UniqueName: \"kubernetes.io/projected/bda53d78-ceb5-42b1-bf12-c685966940b9-kube-api-access-72lsp\") on node \"crc\" DevicePath \"\"" Nov 25 15:44:54 crc kubenswrapper[4806]: I1125 15:44:54.161151 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bda53d78-ceb5-42b1-bf12-c685966940b9-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 15:44:54 crc kubenswrapper[4806]: I1125 15:44:54.285442 4806 generic.go:334] "Generic (PLEG): container finished" podID="bda53d78-ceb5-42b1-bf12-c685966940b9" containerID="1c78177826840d4b5f8836da2d4a869a6337a2e3ef3c07eda0f9e1bcc73fc9a0" exitCode=0 Nov 25 15:44:54 crc kubenswrapper[4806]: I1125 15:44:54.285489 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2dkkk" event={"ID":"bda53d78-ceb5-42b1-bf12-c685966940b9","Type":"ContainerDied","Data":"1c78177826840d4b5f8836da2d4a869a6337a2e3ef3c07eda0f9e1bcc73fc9a0"} Nov 25 15:44:54 crc kubenswrapper[4806]: I1125 15:44:54.285520 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2dkkk" event={"ID":"bda53d78-ceb5-42b1-bf12-c685966940b9","Type":"ContainerDied","Data":"47dcd47ebc42018725981b3f4a88d8909fa86b0a537fd5b36edfdc5c2fd4e384"} Nov 25 15:44:54 crc kubenswrapper[4806]: I1125 15:44:54.285542 4806 scope.go:117] "RemoveContainer" containerID="1c78177826840d4b5f8836da2d4a869a6337a2e3ef3c07eda0f9e1bcc73fc9a0" Nov 25 15:44:54 crc kubenswrapper[4806]: I1125 15:44:54.285549 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2dkkk" Nov 25 15:44:54 crc kubenswrapper[4806]: I1125 15:44:54.311441 4806 scope.go:117] "RemoveContainer" containerID="f2da0265f92a0d814c31de692ef4fdf015c291de15ee7329b67e786c606c0dc3" Nov 25 15:44:54 crc kubenswrapper[4806]: I1125 15:44:54.324838 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2dkkk"] Nov 25 15:44:54 crc kubenswrapper[4806]: I1125 15:44:54.333710 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2dkkk"] Nov 25 15:44:54 crc kubenswrapper[4806]: I1125 15:44:54.348902 4806 scope.go:117] "RemoveContainer" containerID="d3539541e6dc16826f620024c07f499aa9284fe476aefa727d1be0aa594b805a" Nov 25 15:44:54 crc kubenswrapper[4806]: I1125 15:44:54.398153 4806 scope.go:117] "RemoveContainer" containerID="1c78177826840d4b5f8836da2d4a869a6337a2e3ef3c07eda0f9e1bcc73fc9a0" Nov 25 15:44:54 crc kubenswrapper[4806]: E1125 15:44:54.398602 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c78177826840d4b5f8836da2d4a869a6337a2e3ef3c07eda0f9e1bcc73fc9a0\": container with ID starting with 1c78177826840d4b5f8836da2d4a869a6337a2e3ef3c07eda0f9e1bcc73fc9a0 not found: ID does not exist" containerID="1c78177826840d4b5f8836da2d4a869a6337a2e3ef3c07eda0f9e1bcc73fc9a0" Nov 25 15:44:54 crc kubenswrapper[4806]: I1125 15:44:54.398799 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c78177826840d4b5f8836da2d4a869a6337a2e3ef3c07eda0f9e1bcc73fc9a0"} err="failed to get container status \"1c78177826840d4b5f8836da2d4a869a6337a2e3ef3c07eda0f9e1bcc73fc9a0\": rpc error: code = NotFound desc = could not find container \"1c78177826840d4b5f8836da2d4a869a6337a2e3ef3c07eda0f9e1bcc73fc9a0\": container with ID starting with 1c78177826840d4b5f8836da2d4a869a6337a2e3ef3c07eda0f9e1bcc73fc9a0 not found: ID does not exist" Nov 25 15:44:54 crc kubenswrapper[4806]: I1125 15:44:54.398835 4806 scope.go:117] "RemoveContainer" containerID="f2da0265f92a0d814c31de692ef4fdf015c291de15ee7329b67e786c606c0dc3" Nov 25 15:44:54 crc kubenswrapper[4806]: E1125 15:44:54.399190 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2da0265f92a0d814c31de692ef4fdf015c291de15ee7329b67e786c606c0dc3\": container with ID starting with f2da0265f92a0d814c31de692ef4fdf015c291de15ee7329b67e786c606c0dc3 not found: ID does not exist" containerID="f2da0265f92a0d814c31de692ef4fdf015c291de15ee7329b67e786c606c0dc3" Nov 25 15:44:54 crc kubenswrapper[4806]: I1125 15:44:54.399216 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2da0265f92a0d814c31de692ef4fdf015c291de15ee7329b67e786c606c0dc3"} err="failed to get container status \"f2da0265f92a0d814c31de692ef4fdf015c291de15ee7329b67e786c606c0dc3\": rpc error: code = NotFound desc = could not find container \"f2da0265f92a0d814c31de692ef4fdf015c291de15ee7329b67e786c606c0dc3\": container with ID starting with f2da0265f92a0d814c31de692ef4fdf015c291de15ee7329b67e786c606c0dc3 not found: ID does not exist" Nov 25 15:44:54 crc kubenswrapper[4806]: I1125 15:44:54.399229 4806 scope.go:117] "RemoveContainer" containerID="d3539541e6dc16826f620024c07f499aa9284fe476aefa727d1be0aa594b805a" Nov 25 15:44:54 crc kubenswrapper[4806]: E1125 15:44:54.399562 4806 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d3539541e6dc16826f620024c07f499aa9284fe476aefa727d1be0aa594b805a\": container with ID starting with d3539541e6dc16826f620024c07f499aa9284fe476aefa727d1be0aa594b805a not found: ID does not exist" containerID="d3539541e6dc16826f620024c07f499aa9284fe476aefa727d1be0aa594b805a" Nov 25 15:44:54 crc kubenswrapper[4806]: I1125 15:44:54.399625 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3539541e6dc16826f620024c07f499aa9284fe476aefa727d1be0aa594b805a"} err="failed to get container status \"d3539541e6dc16826f620024c07f499aa9284fe476aefa727d1be0aa594b805a\": rpc error: code = NotFound desc = could not find container \"d3539541e6dc16826f620024c07f499aa9284fe476aefa727d1be0aa594b805a\": container with ID starting with d3539541e6dc16826f620024c07f499aa9284fe476aefa727d1be0aa594b805a not found: ID does not exist" Nov 25 15:44:56 crc kubenswrapper[4806]: I1125 15:44:56.102849 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bda53d78-ceb5-42b1-bf12-c685966940b9" path="/var/lib/kubelet/pods/bda53d78-ceb5-42b1-bf12-c685966940b9/volumes" Nov 25 15:45:00 crc kubenswrapper[4806]: I1125 15:45:00.168900 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401425-rkch8"] Nov 25 15:45:00 crc kubenswrapper[4806]: E1125 15:45:00.169898 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bda53d78-ceb5-42b1-bf12-c685966940b9" containerName="registry-server" Nov 25 15:45:00 crc kubenswrapper[4806]: I1125 15:45:00.169917 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="bda53d78-ceb5-42b1-bf12-c685966940b9" containerName="registry-server" Nov 25 15:45:00 crc kubenswrapper[4806]: E1125 15:45:00.169948 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bda53d78-ceb5-42b1-bf12-c685966940b9" containerName="extract-utilities" Nov 25 15:45:00 crc kubenswrapper[4806]: I1125 15:45:00.169958 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="bda53d78-ceb5-42b1-bf12-c685966940b9" containerName="extract-utilities" Nov 25 15:45:00 crc kubenswrapper[4806]: E1125 15:45:00.169979 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bda53d78-ceb5-42b1-bf12-c685966940b9" containerName="extract-content" Nov 25 15:45:00 crc kubenswrapper[4806]: I1125 15:45:00.169986 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="bda53d78-ceb5-42b1-bf12-c685966940b9" containerName="extract-content" Nov 25 15:45:00 crc kubenswrapper[4806]: I1125 15:45:00.170272 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="bda53d78-ceb5-42b1-bf12-c685966940b9" containerName="registry-server" Nov 25 15:45:00 crc kubenswrapper[4806]: I1125 15:45:00.171453 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401425-rkch8" Nov 25 15:45:00 crc kubenswrapper[4806]: I1125 15:45:00.173485 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 15:45:00 crc kubenswrapper[4806]: I1125 15:45:00.174016 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 15:45:00 crc kubenswrapper[4806]: I1125 15:45:00.182812 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401425-rkch8"] Nov 25 15:45:00 crc kubenswrapper[4806]: I1125 15:45:00.302365 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/026f9023-1e20-4c49-b9ca-75aad6f5680d-config-volume\") pod \"collect-profiles-29401425-rkch8\" (UID: \"026f9023-1e20-4c49-b9ca-75aad6f5680d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401425-rkch8" Nov 25 15:45:00 crc kubenswrapper[4806]: I1125 15:45:00.302535 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/026f9023-1e20-4c49-b9ca-75aad6f5680d-secret-volume\") pod \"collect-profiles-29401425-rkch8\" (UID: \"026f9023-1e20-4c49-b9ca-75aad6f5680d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401425-rkch8" Nov 25 15:45:00 crc kubenswrapper[4806]: I1125 15:45:00.302571 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrgbl\" (UniqueName: \"kubernetes.io/projected/026f9023-1e20-4c49-b9ca-75aad6f5680d-kube-api-access-lrgbl\") pod \"collect-profiles-29401425-rkch8\" (UID: \"026f9023-1e20-4c49-b9ca-75aad6f5680d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401425-rkch8" Nov 25 15:45:00 crc kubenswrapper[4806]: I1125 15:45:00.405108 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/026f9023-1e20-4c49-b9ca-75aad6f5680d-config-volume\") pod \"collect-profiles-29401425-rkch8\" (UID: \"026f9023-1e20-4c49-b9ca-75aad6f5680d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401425-rkch8" Nov 25 15:45:00 crc kubenswrapper[4806]: I1125 15:45:00.405358 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/026f9023-1e20-4c49-b9ca-75aad6f5680d-secret-volume\") pod \"collect-profiles-29401425-rkch8\" (UID: \"026f9023-1e20-4c49-b9ca-75aad6f5680d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401425-rkch8" Nov 25 15:45:00 crc kubenswrapper[4806]: I1125 15:45:00.405406 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrgbl\" (UniqueName: \"kubernetes.io/projected/026f9023-1e20-4c49-b9ca-75aad6f5680d-kube-api-access-lrgbl\") pod \"collect-profiles-29401425-rkch8\" (UID: \"026f9023-1e20-4c49-b9ca-75aad6f5680d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401425-rkch8" Nov 25 15:45:00 crc kubenswrapper[4806]: I1125 15:45:00.406240 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/026f9023-1e20-4c49-b9ca-75aad6f5680d-config-volume\") pod 
\"collect-profiles-29401425-rkch8\" (UID: \"026f9023-1e20-4c49-b9ca-75aad6f5680d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401425-rkch8" Nov 25 15:45:00 crc kubenswrapper[4806]: I1125 15:45:00.412892 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/026f9023-1e20-4c49-b9ca-75aad6f5680d-secret-volume\") pod \"collect-profiles-29401425-rkch8\" (UID: \"026f9023-1e20-4c49-b9ca-75aad6f5680d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401425-rkch8" Nov 25 15:45:00 crc kubenswrapper[4806]: I1125 15:45:00.435160 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrgbl\" (UniqueName: \"kubernetes.io/projected/026f9023-1e20-4c49-b9ca-75aad6f5680d-kube-api-access-lrgbl\") pod \"collect-profiles-29401425-rkch8\" (UID: \"026f9023-1e20-4c49-b9ca-75aad6f5680d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401425-rkch8" Nov 25 15:45:00 crc kubenswrapper[4806]: I1125 15:45:00.504815 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401425-rkch8" Nov 25 15:45:00 crc kubenswrapper[4806]: I1125 15:45:00.990168 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401425-rkch8"] Nov 25 15:45:01 crc kubenswrapper[4806]: I1125 15:45:01.356846 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401425-rkch8" event={"ID":"026f9023-1e20-4c49-b9ca-75aad6f5680d","Type":"ContainerStarted","Data":"12b06ce456a5580aef16f25464697f259a2807d64c5bcc385940e7b5ce031b74"} Nov 25 15:45:01 crc kubenswrapper[4806]: I1125 15:45:01.357175 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401425-rkch8" event={"ID":"026f9023-1e20-4c49-b9ca-75aad6f5680d","Type":"ContainerStarted","Data":"8c263931794ee402e67cf0ab42d8c222f58490737b227c69042726d233daad9f"} Nov 25 15:45:01 crc kubenswrapper[4806]: I1125 15:45:01.382141 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29401425-rkch8" podStartSLOduration=1.382119562 podStartE2EDuration="1.382119562s" podCreationTimestamp="2025-11-25 15:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:45:01.372601843 +0000 UTC m=+3134.024744254" watchObservedRunningTime="2025-11-25 15:45:01.382119562 +0000 UTC m=+3134.034261973" Nov 25 15:45:02 crc kubenswrapper[4806]: I1125 15:45:02.372592 4806 generic.go:334] "Generic (PLEG): container finished" podID="026f9023-1e20-4c49-b9ca-75aad6f5680d" containerID="12b06ce456a5580aef16f25464697f259a2807d64c5bcc385940e7b5ce031b74" exitCode=0 Nov 25 15:45:02 crc kubenswrapper[4806]: I1125 15:45:02.372654 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401425-rkch8" event={"ID":"026f9023-1e20-4c49-b9ca-75aad6f5680d","Type":"ContainerDied","Data":"12b06ce456a5580aef16f25464697f259a2807d64c5bcc385940e7b5ce031b74"} Nov 25 15:45:03 crc kubenswrapper[4806]: I1125 15:45:03.781392 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401425-rkch8" Nov 25 15:45:03 crc kubenswrapper[4806]: I1125 15:45:03.878583 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/026f9023-1e20-4c49-b9ca-75aad6f5680d-config-volume\") pod \"026f9023-1e20-4c49-b9ca-75aad6f5680d\" (UID: \"026f9023-1e20-4c49-b9ca-75aad6f5680d\") " Nov 25 15:45:03 crc kubenswrapper[4806]: I1125 15:45:03.878859 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/026f9023-1e20-4c49-b9ca-75aad6f5680d-secret-volume\") pod \"026f9023-1e20-4c49-b9ca-75aad6f5680d\" (UID: \"026f9023-1e20-4c49-b9ca-75aad6f5680d\") " Nov 25 15:45:03 crc kubenswrapper[4806]: I1125 15:45:03.878942 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrgbl\" (UniqueName: \"kubernetes.io/projected/026f9023-1e20-4c49-b9ca-75aad6f5680d-kube-api-access-lrgbl\") pod \"026f9023-1e20-4c49-b9ca-75aad6f5680d\" (UID: \"026f9023-1e20-4c49-b9ca-75aad6f5680d\") " Nov 25 15:45:03 crc kubenswrapper[4806]: I1125 15:45:03.879532 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/026f9023-1e20-4c49-b9ca-75aad6f5680d-config-volume" (OuterVolumeSpecName: "config-volume") pod "026f9023-1e20-4c49-b9ca-75aad6f5680d" (UID: "026f9023-1e20-4c49-b9ca-75aad6f5680d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:45:03 crc kubenswrapper[4806]: I1125 15:45:03.885454 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/026f9023-1e20-4c49-b9ca-75aad6f5680d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "026f9023-1e20-4c49-b9ca-75aad6f5680d" (UID: "026f9023-1e20-4c49-b9ca-75aad6f5680d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:45:03 crc kubenswrapper[4806]: I1125 15:45:03.885633 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/026f9023-1e20-4c49-b9ca-75aad6f5680d-kube-api-access-lrgbl" (OuterVolumeSpecName: "kube-api-access-lrgbl") pod "026f9023-1e20-4c49-b9ca-75aad6f5680d" (UID: "026f9023-1e20-4c49-b9ca-75aad6f5680d"). InnerVolumeSpecName "kube-api-access-lrgbl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:45:03 crc kubenswrapper[4806]: I1125 15:45:03.982019 4806 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/026f9023-1e20-4c49-b9ca-75aad6f5680d-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 15:45:03 crc kubenswrapper[4806]: I1125 15:45:03.982382 4806 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/026f9023-1e20-4c49-b9ca-75aad6f5680d-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 15:45:03 crc kubenswrapper[4806]: I1125 15:45:03.982391 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrgbl\" (UniqueName: \"kubernetes.io/projected/026f9023-1e20-4c49-b9ca-75aad6f5680d-kube-api-access-lrgbl\") on node \"crc\" DevicePath \"\"" Nov 25 15:45:04 crc kubenswrapper[4806]: I1125 15:45:04.397710 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401425-rkch8" event={"ID":"026f9023-1e20-4c49-b9ca-75aad6f5680d","Type":"ContainerDied","Data":"8c263931794ee402e67cf0ab42d8c222f58490737b227c69042726d233daad9f"} Nov 25 15:45:04 crc kubenswrapper[4806]: I1125 15:45:04.397773 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c263931794ee402e67cf0ab42d8c222f58490737b227c69042726d233daad9f" Nov 25 15:45:04 crc kubenswrapper[4806]: I1125 15:45:04.397806 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401425-rkch8" Nov 25 15:45:04 crc kubenswrapper[4806]: I1125 15:45:04.455588 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401380-klm5c"] Nov 25 15:45:04 crc kubenswrapper[4806]: I1125 15:45:04.465420 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401380-klm5c"] Nov 25 15:45:06 crc kubenswrapper[4806]: I1125 15:45:06.102252 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52601663-98d0-43a4-ab46-0f671d08c3bd" path="/var/lib/kubelet/pods/52601663-98d0-43a4-ab46-0f671d08c3bd/volumes" Nov 25 15:45:14 crc kubenswrapper[4806]: E1125 15:45:14.496459 4806 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.234:52494->38.102.83.234:43935: write tcp 38.102.83.234:52494->38.102.83.234:43935: write: connection reset by peer Nov 25 15:45:18 crc kubenswrapper[4806]: I1125 15:45:18.935031 4806 patch_prober.go:28] interesting pod/machine-config-daemon-kclf8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:45:18 crc kubenswrapper[4806]: I1125 15:45:18.936102 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:45:18 crc kubenswrapper[4806]: I1125 15:45:18.936183 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" Nov 25 15:45:18 crc kubenswrapper[4806]: 
I1125 15:45:18.937279 4806 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c5665d24da59a69058ea2c9b904dc059808ec3dec416e24bf589327eb7f765c5"} pod="openshift-machine-config-operator/machine-config-daemon-kclf8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 15:45:18 crc kubenswrapper[4806]: I1125 15:45:18.937399 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" containerID="cri-o://c5665d24da59a69058ea2c9b904dc059808ec3dec416e24bf589327eb7f765c5" gracePeriod=600 Nov 25 15:45:19 crc kubenswrapper[4806]: E1125 15:45:19.057564 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:45:19 crc kubenswrapper[4806]: I1125 15:45:19.556032 4806 generic.go:334] "Generic (PLEG): container finished" podID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerID="c5665d24da59a69058ea2c9b904dc059808ec3dec416e24bf589327eb7f765c5" exitCode=0 Nov 25 15:45:19 crc kubenswrapper[4806]: I1125 15:45:19.556109 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" event={"ID":"39baff20-1e9a-48b1-8872-155c5ad5931d","Type":"ContainerDied","Data":"c5665d24da59a69058ea2c9b904dc059808ec3dec416e24bf589327eb7f765c5"} Nov 25 15:45:19 crc kubenswrapper[4806]: I1125 15:45:19.556404 4806 scope.go:117] "RemoveContainer" containerID="8e92935482a5f92e9ebc3fbbdbdc44dc56af2d1072c382ebac551c11833e7734" Nov 25 15:45:19 crc kubenswrapper[4806]: I1125 15:45:19.557533 4806 scope.go:117] "RemoveContainer" containerID="c5665d24da59a69058ea2c9b904dc059808ec3dec416e24bf589327eb7f765c5" Nov 25 15:45:19 crc kubenswrapper[4806]: E1125 15:45:19.558279 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:45:19 crc kubenswrapper[4806]: I1125 15:45:19.817737 4806 scope.go:117] "RemoveContainer" containerID="25a4014bc9ea1641aba8f9efa644752d9346211d0a7d73595265635a38272cab" Nov 25 15:45:28 crc kubenswrapper[4806]: I1125 15:45:28.162791 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Nov 25 15:45:28 crc kubenswrapper[4806]: E1125 15:45:28.163759 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="026f9023-1e20-4c49-b9ca-75aad6f5680d" containerName="collect-profiles" Nov 25 15:45:28 crc kubenswrapper[4806]: I1125 15:45:28.163776 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="026f9023-1e20-4c49-b9ca-75aad6f5680d" containerName="collect-profiles" Nov 25 15:45:28 crc kubenswrapper[4806]: I1125 15:45:28.164133 4806 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="026f9023-1e20-4c49-b9ca-75aad6f5680d" containerName="collect-profiles" Nov 25 15:45:28 crc kubenswrapper[4806]: I1125 15:45:28.164908 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 25 15:45:28 crc kubenswrapper[4806]: I1125 15:45:28.167586 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Nov 25 15:45:28 crc kubenswrapper[4806]: I1125 15:45:28.168122 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 25 15:45:28 crc kubenswrapper[4806]: I1125 15:45:28.168161 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-66pxf" Nov 25 15:45:28 crc kubenswrapper[4806]: I1125 15:45:28.168208 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Nov 25 15:45:28 crc kubenswrapper[4806]: I1125 15:45:28.187257 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 25 15:45:28 crc kubenswrapper[4806]: I1125 15:45:28.318792 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pg9c\" (UniqueName: \"kubernetes.io/projected/2ac30dde-ccba-4cb3-a2e4-540d47610c83-kube-api-access-8pg9c\") pod \"tempest-tests-tempest\" (UID: \"2ac30dde-ccba-4cb3-a2e4-540d47610c83\") " pod="openstack/tempest-tests-tempest" Nov 25 15:45:28 crc kubenswrapper[4806]: I1125 15:45:28.318856 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2ac30dde-ccba-4cb3-a2e4-540d47610c83-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"2ac30dde-ccba-4cb3-a2e4-540d47610c83\") " pod="openstack/tempest-tests-tempest" Nov 25 15:45:28 crc kubenswrapper[4806]: I1125 15:45:28.319061 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"2ac30dde-ccba-4cb3-a2e4-540d47610c83\") " pod="openstack/tempest-tests-tempest" Nov 25 15:45:28 crc kubenswrapper[4806]: I1125 15:45:28.319358 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2ac30dde-ccba-4cb3-a2e4-540d47610c83-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"2ac30dde-ccba-4cb3-a2e4-540d47610c83\") " pod="openstack/tempest-tests-tempest" Nov 25 15:45:28 crc kubenswrapper[4806]: I1125 15:45:28.319503 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2ac30dde-ccba-4cb3-a2e4-540d47610c83-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"2ac30dde-ccba-4cb3-a2e4-540d47610c83\") " pod="openstack/tempest-tests-tempest" Nov 25 15:45:28 crc kubenswrapper[4806]: I1125 15:45:28.319552 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ac30dde-ccba-4cb3-a2e4-540d47610c83-config-data\") pod \"tempest-tests-tempest\" (UID: \"2ac30dde-ccba-4cb3-a2e4-540d47610c83\") " 
pod="openstack/tempest-tests-tempest" Nov 25 15:45:28 crc kubenswrapper[4806]: I1125 15:45:28.319679 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2ac30dde-ccba-4cb3-a2e4-540d47610c83-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"2ac30dde-ccba-4cb3-a2e4-540d47610c83\") " pod="openstack/tempest-tests-tempest" Nov 25 15:45:28 crc kubenswrapper[4806]: I1125 15:45:28.319750 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2ac30dde-ccba-4cb3-a2e4-540d47610c83-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"2ac30dde-ccba-4cb3-a2e4-540d47610c83\") " pod="openstack/tempest-tests-tempest" Nov 25 15:45:28 crc kubenswrapper[4806]: I1125 15:45:28.319863 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2ac30dde-ccba-4cb3-a2e4-540d47610c83-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"2ac30dde-ccba-4cb3-a2e4-540d47610c83\") " pod="openstack/tempest-tests-tempest" Nov 25 15:45:28 crc kubenswrapper[4806]: I1125 15:45:28.422278 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pg9c\" (UniqueName: \"kubernetes.io/projected/2ac30dde-ccba-4cb3-a2e4-540d47610c83-kube-api-access-8pg9c\") pod \"tempest-tests-tempest\" (UID: \"2ac30dde-ccba-4cb3-a2e4-540d47610c83\") " pod="openstack/tempest-tests-tempest" Nov 25 15:45:28 crc kubenswrapper[4806]: I1125 15:45:28.422463 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2ac30dde-ccba-4cb3-a2e4-540d47610c83-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"2ac30dde-ccba-4cb3-a2e4-540d47610c83\") " pod="openstack/tempest-tests-tempest" Nov 25 15:45:28 crc kubenswrapper[4806]: I1125 15:45:28.422574 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"2ac30dde-ccba-4cb3-a2e4-540d47610c83\") " pod="openstack/tempest-tests-tempest" Nov 25 15:45:28 crc kubenswrapper[4806]: I1125 15:45:28.422719 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2ac30dde-ccba-4cb3-a2e4-540d47610c83-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"2ac30dde-ccba-4cb3-a2e4-540d47610c83\") " pod="openstack/tempest-tests-tempest" Nov 25 15:45:28 crc kubenswrapper[4806]: I1125 15:45:28.422814 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2ac30dde-ccba-4cb3-a2e4-540d47610c83-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"2ac30dde-ccba-4cb3-a2e4-540d47610c83\") " pod="openstack/tempest-tests-tempest" Nov 25 15:45:28 crc kubenswrapper[4806]: I1125 15:45:28.422863 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ac30dde-ccba-4cb3-a2e4-540d47610c83-config-data\") pod \"tempest-tests-tempest\" (UID: \"2ac30dde-ccba-4cb3-a2e4-540d47610c83\") " pod="openstack/tempest-tests-tempest" Nov 25 15:45:28 crc kubenswrapper[4806]: 
I1125 15:45:28.422935 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2ac30dde-ccba-4cb3-a2e4-540d47610c83-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"2ac30dde-ccba-4cb3-a2e4-540d47610c83\") " pod="openstack/tempest-tests-tempest" Nov 25 15:45:28 crc kubenswrapper[4806]: I1125 15:45:28.423004 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2ac30dde-ccba-4cb3-a2e4-540d47610c83-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"2ac30dde-ccba-4cb3-a2e4-540d47610c83\") " pod="openstack/tempest-tests-tempest" Nov 25 15:45:28 crc kubenswrapper[4806]: I1125 15:45:28.423084 4806 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"2ac30dde-ccba-4cb3-a2e4-540d47610c83\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/tempest-tests-tempest" Nov 25 15:45:28 crc kubenswrapper[4806]: I1125 15:45:28.423109 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2ac30dde-ccba-4cb3-a2e4-540d47610c83-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"2ac30dde-ccba-4cb3-a2e4-540d47610c83\") " pod="openstack/tempest-tests-tempest" Nov 25 15:45:28 crc kubenswrapper[4806]: I1125 15:45:28.424953 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2ac30dde-ccba-4cb3-a2e4-540d47610c83-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"2ac30dde-ccba-4cb3-a2e4-540d47610c83\") " pod="openstack/tempest-tests-tempest" Nov 25 15:45:28 crc kubenswrapper[4806]: I1125 15:45:28.425153 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2ac30dde-ccba-4cb3-a2e4-540d47610c83-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"2ac30dde-ccba-4cb3-a2e4-540d47610c83\") " pod="openstack/tempest-tests-tempest" Nov 25 15:45:28 crc kubenswrapper[4806]: I1125 15:45:28.425593 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2ac30dde-ccba-4cb3-a2e4-540d47610c83-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"2ac30dde-ccba-4cb3-a2e4-540d47610c83\") " pod="openstack/tempest-tests-tempest" Nov 25 15:45:28 crc kubenswrapper[4806]: I1125 15:45:28.430427 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ac30dde-ccba-4cb3-a2e4-540d47610c83-config-data\") pod \"tempest-tests-tempest\" (UID: \"2ac30dde-ccba-4cb3-a2e4-540d47610c83\") " pod="openstack/tempest-tests-tempest" Nov 25 15:45:28 crc kubenswrapper[4806]: I1125 15:45:28.432477 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2ac30dde-ccba-4cb3-a2e4-540d47610c83-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"2ac30dde-ccba-4cb3-a2e4-540d47610c83\") " pod="openstack/tempest-tests-tempest" Nov 25 15:45:28 crc kubenswrapper[4806]: I1125 15:45:28.433174 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/2ac30dde-ccba-4cb3-a2e4-540d47610c83-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"2ac30dde-ccba-4cb3-a2e4-540d47610c83\") " pod="openstack/tempest-tests-tempest" Nov 25 15:45:28 crc kubenswrapper[4806]: I1125 15:45:28.436117 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2ac30dde-ccba-4cb3-a2e4-540d47610c83-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"2ac30dde-ccba-4cb3-a2e4-540d47610c83\") " pod="openstack/tempest-tests-tempest" Nov 25 15:45:28 crc kubenswrapper[4806]: I1125 15:45:28.455594 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pg9c\" (UniqueName: \"kubernetes.io/projected/2ac30dde-ccba-4cb3-a2e4-540d47610c83-kube-api-access-8pg9c\") pod \"tempest-tests-tempest\" (UID: \"2ac30dde-ccba-4cb3-a2e4-540d47610c83\") " pod="openstack/tempest-tests-tempest" Nov 25 15:45:28 crc kubenswrapper[4806]: I1125 15:45:28.459163 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"2ac30dde-ccba-4cb3-a2e4-540d47610c83\") " pod="openstack/tempest-tests-tempest" Nov 25 15:45:28 crc kubenswrapper[4806]: I1125 15:45:28.495770 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 25 15:45:28 crc kubenswrapper[4806]: I1125 15:45:28.998201 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 25 15:45:29 crc kubenswrapper[4806]: I1125 15:45:29.688899 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"2ac30dde-ccba-4cb3-a2e4-540d47610c83","Type":"ContainerStarted","Data":"3b41f0867d1fbaebbc3c7b3f624472bec783dc9c7111671bc21a92090211a4f4"} Nov 25 15:45:32 crc kubenswrapper[4806]: I1125 15:45:32.090181 4806 scope.go:117] "RemoveContainer" containerID="c5665d24da59a69058ea2c9b904dc059808ec3dec416e24bf589327eb7f765c5" Nov 25 15:45:32 crc kubenswrapper[4806]: E1125 15:45:32.090743 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:45:44 crc kubenswrapper[4806]: I1125 15:45:44.089208 4806 scope.go:117] "RemoveContainer" containerID="c5665d24da59a69058ea2c9b904dc059808ec3dec416e24bf589327eb7f765c5" Nov 25 15:45:44 crc kubenswrapper[4806]: E1125 15:45:44.090177 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:45:57 crc kubenswrapper[4806]: I1125 15:45:57.089469 4806 scope.go:117] "RemoveContainer" containerID="c5665d24da59a69058ea2c9b904dc059808ec3dec416e24bf589327eb7f765c5" Nov 25 15:45:57 crc kubenswrapper[4806]: E1125 15:45:57.090095 4806 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:46:03 crc kubenswrapper[4806]: E1125 15:46:03.889310 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Nov 25 15:46:03 crc kubenswrapper[4806]: E1125 15:46:03.890184 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8pg9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectRef
erence{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(2ac30dde-ccba-4cb3-a2e4-540d47610c83): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 15:46:03 crc kubenswrapper[4806]: E1125 15:46:03.891482 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="2ac30dde-ccba-4cb3-a2e4-540d47610c83" Nov 25 15:46:04 crc kubenswrapper[4806]: E1125 15:46:04.120351 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="2ac30dde-ccba-4cb3-a2e4-540d47610c83" Nov 25 15:46:10 crc kubenswrapper[4806]: I1125 15:46:10.089740 4806 scope.go:117] "RemoveContainer" containerID="c5665d24da59a69058ea2c9b904dc059808ec3dec416e24bf589327eb7f765c5" Nov 25 15:46:10 crc kubenswrapper[4806]: E1125 15:46:10.090427 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:46:17 crc kubenswrapper[4806]: I1125 15:46:17.549568 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 25 15:46:19 crc kubenswrapper[4806]: I1125 15:46:19.281091 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"2ac30dde-ccba-4cb3-a2e4-540d47610c83","Type":"ContainerStarted","Data":"85386a81b161fff985daa8e811b06728388ba207a6e9a641d3dffbf7bb2036c5"} Nov 25 15:46:19 crc kubenswrapper[4806]: I1125 15:46:19.309291 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.758733893 podStartE2EDuration="52.309256581s" podCreationTimestamp="2025-11-25 15:45:27 +0000 UTC" firstStartedPulling="2025-11-25 15:45:28.996574761 +0000 UTC m=+3161.648717182" lastFinishedPulling="2025-11-25 15:46:17.547097459 +0000 UTC m=+3210.199239870" observedRunningTime="2025-11-25 15:46:19.294280047 +0000 UTC m=+3211.946422458" watchObservedRunningTime="2025-11-25 15:46:19.309256581 +0000 UTC m=+3211.961399002" Nov 25 15:46:21 crc kubenswrapper[4806]: I1125 15:46:21.358052 4806 scope.go:117] "RemoveContainer" containerID="c5665d24da59a69058ea2c9b904dc059808ec3dec416e24bf589327eb7f765c5" Nov 25 15:46:21 crc kubenswrapper[4806]: E1125 15:46:21.358952 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:46:35 crc kubenswrapper[4806]: I1125 15:46:35.090150 4806 scope.go:117] "RemoveContainer" containerID="c5665d24da59a69058ea2c9b904dc059808ec3dec416e24bf589327eb7f765c5" Nov 25 15:46:35 crc kubenswrapper[4806]: E1125 15:46:35.091101 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:46:48 crc kubenswrapper[4806]: I1125 15:46:48.112568 4806 scope.go:117] "RemoveContainer" containerID="c5665d24da59a69058ea2c9b904dc059808ec3dec416e24bf589327eb7f765c5" Nov 25 15:46:48 crc kubenswrapper[4806]: E1125 15:46:48.113735 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:47:03 crc kubenswrapper[4806]: I1125 15:47:03.089048 4806 scope.go:117] "RemoveContainer" containerID="c5665d24da59a69058ea2c9b904dc059808ec3dec416e24bf589327eb7f765c5" Nov 25 15:47:03 crc kubenswrapper[4806]: E1125 15:47:03.089886 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:47:14 crc kubenswrapper[4806]: I1125 15:47:14.096558 4806 scope.go:117] "RemoveContainer" containerID="c5665d24da59a69058ea2c9b904dc059808ec3dec416e24bf589327eb7f765c5" Nov 25 15:47:14 crc kubenswrapper[4806]: E1125 15:47:14.097301 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:47:28 crc kubenswrapper[4806]: I1125 15:47:28.100293 4806 scope.go:117] "RemoveContainer" containerID="c5665d24da59a69058ea2c9b904dc059808ec3dec416e24bf589327eb7f765c5" Nov 25 15:47:28 crc kubenswrapper[4806]: E1125 15:47:28.101206 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:47:40 crc kubenswrapper[4806]: I1125 15:47:40.090178 4806 
scope.go:117] "RemoveContainer" containerID="c5665d24da59a69058ea2c9b904dc059808ec3dec416e24bf589327eb7f765c5" Nov 25 15:47:40 crc kubenswrapper[4806]: E1125 15:47:40.090962 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:47:54 crc kubenswrapper[4806]: I1125 15:47:54.089650 4806 scope.go:117] "RemoveContainer" containerID="c5665d24da59a69058ea2c9b904dc059808ec3dec416e24bf589327eb7f765c5" Nov 25 15:47:54 crc kubenswrapper[4806]: E1125 15:47:54.092445 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:48:05 crc kubenswrapper[4806]: I1125 15:48:05.089430 4806 scope.go:117] "RemoveContainer" containerID="c5665d24da59a69058ea2c9b904dc059808ec3dec416e24bf589327eb7f765c5" Nov 25 15:48:05 crc kubenswrapper[4806]: E1125 15:48:05.090426 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:48:19 crc kubenswrapper[4806]: I1125 15:48:19.089404 4806 scope.go:117] "RemoveContainer" containerID="c5665d24da59a69058ea2c9b904dc059808ec3dec416e24bf589327eb7f765c5" Nov 25 15:48:19 crc kubenswrapper[4806]: E1125 15:48:19.090293 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:48:32 crc kubenswrapper[4806]: I1125 15:48:32.090296 4806 scope.go:117] "RemoveContainer" containerID="c5665d24da59a69058ea2c9b904dc059808ec3dec416e24bf589327eb7f765c5" Nov 25 15:48:32 crc kubenswrapper[4806]: E1125 15:48:32.091036 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:48:47 crc kubenswrapper[4806]: I1125 15:48:47.089721 4806 scope.go:117] "RemoveContainer" containerID="c5665d24da59a69058ea2c9b904dc059808ec3dec416e24bf589327eb7f765c5" Nov 25 15:48:47 crc kubenswrapper[4806]: E1125 15:48:47.091653 4806 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d"
Nov 25 15:49:01 crc kubenswrapper[4806]: I1125 15:49:01.089402 4806 scope.go:117] "RemoveContainer" containerID="c5665d24da59a69058ea2c9b904dc059808ec3dec416e24bf589327eb7f765c5"
Nov 25 15:49:01 crc kubenswrapper[4806]: E1125 15:49:01.090276 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d"
Nov 25 15:49:15 crc kubenswrapper[4806]: I1125 15:49:15.089345 4806 scope.go:117] "RemoveContainer" containerID="c5665d24da59a69058ea2c9b904dc059808ec3dec416e24bf589327eb7f765c5"
Nov 25 15:49:15 crc kubenswrapper[4806]: E1125 15:49:15.089966 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d"
Nov 25 15:49:26 crc kubenswrapper[4806]: I1125 15:49:26.089647 4806 scope.go:117] "RemoveContainer" containerID="c5665d24da59a69058ea2c9b904dc059808ec3dec416e24bf589327eb7f765c5"
Nov 25 15:49:26 crc kubenswrapper[4806]: E1125 15:49:26.090436 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d"
Nov 25 15:49:31 crc kubenswrapper[4806]: I1125 15:49:31.100989 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lhv77"]
Nov 25 15:49:31 crc kubenswrapper[4806]: I1125 15:49:31.104370 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lhv77"
Nov 25 15:49:31 crc kubenswrapper[4806]: I1125 15:49:31.112018 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lhv77"]
Nov 25 15:49:31 crc kubenswrapper[4806]: I1125 15:49:31.282873 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4954136c-d883-4181-9830-44af6ee07e51-utilities\") pod \"redhat-operators-lhv77\" (UID: \"4954136c-d883-4181-9830-44af6ee07e51\") " pod="openshift-marketplace/redhat-operators-lhv77"
Nov 25 15:49:31 crc kubenswrapper[4806]: I1125 15:49:31.283289 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4954136c-d883-4181-9830-44af6ee07e51-catalog-content\") pod \"redhat-operators-lhv77\" (UID: \"4954136c-d883-4181-9830-44af6ee07e51\") " pod="openshift-marketplace/redhat-operators-lhv77"
Nov 25 15:49:31 crc kubenswrapper[4806]: I1125 15:49:31.283348 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwszv\" (UniqueName: \"kubernetes.io/projected/4954136c-d883-4181-9830-44af6ee07e51-kube-api-access-lwszv\") pod \"redhat-operators-lhv77\" (UID: \"4954136c-d883-4181-9830-44af6ee07e51\") " pod="openshift-marketplace/redhat-operators-lhv77"
Nov 25 15:49:31 crc kubenswrapper[4806]: I1125 15:49:31.384922 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4954136c-d883-4181-9830-44af6ee07e51-catalog-content\") pod \"redhat-operators-lhv77\" (UID: \"4954136c-d883-4181-9830-44af6ee07e51\") " pod="openshift-marketplace/redhat-operators-lhv77"
Nov 25 15:49:31 crc kubenswrapper[4806]: I1125 15:49:31.385010 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwszv\" (UniqueName: \"kubernetes.io/projected/4954136c-d883-4181-9830-44af6ee07e51-kube-api-access-lwszv\") pod \"redhat-operators-lhv77\" (UID: \"4954136c-d883-4181-9830-44af6ee07e51\") " pod="openshift-marketplace/redhat-operators-lhv77"
Nov 25 15:49:31 crc kubenswrapper[4806]: I1125 15:49:31.385137 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4954136c-d883-4181-9830-44af6ee07e51-utilities\") pod \"redhat-operators-lhv77\" (UID: \"4954136c-d883-4181-9830-44af6ee07e51\") " pod="openshift-marketplace/redhat-operators-lhv77"
Nov 25 15:49:31 crc kubenswrapper[4806]: I1125 15:49:31.385436 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4954136c-d883-4181-9830-44af6ee07e51-catalog-content\") pod \"redhat-operators-lhv77\" (UID: \"4954136c-d883-4181-9830-44af6ee07e51\") " pod="openshift-marketplace/redhat-operators-lhv77"
Nov 25 15:49:31 crc kubenswrapper[4806]: I1125 15:49:31.385771 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4954136c-d883-4181-9830-44af6ee07e51-utilities\") pod \"redhat-operators-lhv77\" (UID: \"4954136c-d883-4181-9830-44af6ee07e51\") " pod="openshift-marketplace/redhat-operators-lhv77"
Nov 25 15:49:31 crc kubenswrapper[4806]: I1125 15:49:31.405310 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwszv\" (UniqueName: \"kubernetes.io/projected/4954136c-d883-4181-9830-44af6ee07e51-kube-api-access-lwszv\") pod \"redhat-operators-lhv77\" (UID: \"4954136c-d883-4181-9830-44af6ee07e51\") " pod="openshift-marketplace/redhat-operators-lhv77"
Nov 25 15:49:31 crc kubenswrapper[4806]: I1125 15:49:31.440634 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lhv77"
Nov 25 15:49:31 crc kubenswrapper[4806]: I1125 15:49:31.988075 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lhv77"]
Nov 25 15:49:32 crc kubenswrapper[4806]: I1125 15:49:32.361563 4806 generic.go:334] "Generic (PLEG): container finished" podID="4954136c-d883-4181-9830-44af6ee07e51" containerID="c0c1bf2b37c8c320fbdaec97632a85397f1b8c141d80b2d3203cd51f1e2a5a19" exitCode=0
Nov 25 15:49:32 crc kubenswrapper[4806]: I1125 15:49:32.361612 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhv77" event={"ID":"4954136c-d883-4181-9830-44af6ee07e51","Type":"ContainerDied","Data":"c0c1bf2b37c8c320fbdaec97632a85397f1b8c141d80b2d3203cd51f1e2a5a19"}
Nov 25 15:49:32 crc kubenswrapper[4806]: I1125 15:49:32.361641 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhv77" event={"ID":"4954136c-d883-4181-9830-44af6ee07e51","Type":"ContainerStarted","Data":"7e3f1314fe8bde05915cc9d82f4f2e41e58172ef5b911db91d66a9dd6a801d77"}
Nov 25 15:49:34 crc kubenswrapper[4806]: I1125 15:49:34.381500 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhv77" event={"ID":"4954136c-d883-4181-9830-44af6ee07e51","Type":"ContainerStarted","Data":"ff48bae23346197b43055da253475a9d901d492d21a152abde1afdd8b7f41059"}
Nov 25 15:49:38 crc kubenswrapper[4806]: I1125 15:49:38.097334 4806 scope.go:117] "RemoveContainer" containerID="c5665d24da59a69058ea2c9b904dc059808ec3dec416e24bf589327eb7f765c5"
Nov 25 15:49:38 crc kubenswrapper[4806]: E1125 15:49:38.098122 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d"
Nov 25 15:49:42 crc kubenswrapper[4806]: I1125 15:49:42.473168 4806 generic.go:334] "Generic (PLEG): container finished" podID="4954136c-d883-4181-9830-44af6ee07e51" containerID="ff48bae23346197b43055da253475a9d901d492d21a152abde1afdd8b7f41059" exitCode=0
Nov 25 15:49:42 crc kubenswrapper[4806]: I1125 15:49:42.473264 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhv77" event={"ID":"4954136c-d883-4181-9830-44af6ee07e51","Type":"ContainerDied","Data":"ff48bae23346197b43055da253475a9d901d492d21a152abde1afdd8b7f41059"}
Nov 25 15:49:42 crc kubenswrapper[4806]: I1125 15:49:42.476644 4806 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 25 15:49:43 crc kubenswrapper[4806]: I1125 15:49:43.502052 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhv77" event={"ID":"4954136c-d883-4181-9830-44af6ee07e51","Type":"ContainerStarted","Data":"0b2c91de4eba50ec117bcdac211e9dd52a4c06993d120640811d220943764fab"}
Nov 25 15:49:43 crc kubenswrapper[4806]: I1125 15:49:43.527440 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lhv77" podStartSLOduration=1.860875412 podStartE2EDuration="12.527420126s" podCreationTimestamp="2025-11-25 15:49:31 +0000 UTC" firstStartedPulling="2025-11-25 15:49:32.363279735 +0000 UTC m=+3405.015422146" lastFinishedPulling="2025-11-25 15:49:43.029824429 +0000 UTC m=+3415.681966860" observedRunningTime="2025-11-25 15:49:43.520096498 +0000 UTC m=+3416.172238949" watchObservedRunningTime="2025-11-25 15:49:43.527420126 +0000 UTC m=+3416.179562547"
Nov 25 15:49:51 crc kubenswrapper[4806]: I1125 15:49:51.441494 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lhv77"
Nov 25 15:49:51 crc kubenswrapper[4806]: I1125 15:49:51.443063 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lhv77"
Nov 25 15:49:52 crc kubenswrapper[4806]: I1125 15:49:52.089798 4806 scope.go:117] "RemoveContainer" containerID="c5665d24da59a69058ea2c9b904dc059808ec3dec416e24bf589327eb7f765c5"
Nov 25 15:49:52 crc kubenswrapper[4806]: E1125 15:49:52.090166 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d"
Nov 25 15:49:52 crc kubenswrapper[4806]: I1125 15:49:52.496344 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lhv77" podUID="4954136c-d883-4181-9830-44af6ee07e51" containerName="registry-server" probeResult="failure" output=<
Nov 25 15:49:52 crc kubenswrapper[4806]: timeout: failed to connect service ":50051" within 1s
Nov 25 15:49:52 crc kubenswrapper[4806]: >
Nov 25 15:50:02 crc kubenswrapper[4806]: I1125 15:50:02.502839 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lhv77" podUID="4954136c-d883-4181-9830-44af6ee07e51" containerName="registry-server" probeResult="failure" output=<
Nov 25 15:50:02 crc kubenswrapper[4806]: timeout: failed to connect service ":50051" within 1s
Nov 25 15:50:02 crc kubenswrapper[4806]: >
Nov 25 15:50:06 crc kubenswrapper[4806]: I1125 15:50:06.090508 4806 scope.go:117] "RemoveContainer" containerID="c5665d24da59a69058ea2c9b904dc059808ec3dec416e24bf589327eb7f765c5"
Nov 25 15:50:06 crc kubenswrapper[4806]: E1125 15:50:06.091127 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d"
Nov 25 15:50:12 crc kubenswrapper[4806]: I1125 15:50:12.493653 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lhv77" podUID="4954136c-d883-4181-9830-44af6ee07e51" containerName="registry-server" probeResult="failure" output=<
Nov 25 15:50:12 crc kubenswrapper[4806]: timeout: failed to connect service ":50051" within 1s
Nov 25 15:50:12 crc kubenswrapper[4806]: >
Nov 25 15:50:17 crc kubenswrapper[4806]: I1125 15:50:17.089597 4806 scope.go:117] "RemoveContainer" containerID="c5665d24da59a69058ea2c9b904dc059808ec3dec416e24bf589327eb7f765c5"
Nov 25 15:50:17 crc kubenswrapper[4806]: E1125 15:50:17.090227 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d"
Nov 25 15:50:21 crc kubenswrapper[4806]: I1125 15:50:21.499396 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lhv77"
Nov 25 15:50:21 crc kubenswrapper[4806]: I1125 15:50:21.584123 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lhv77"
Nov 25 15:50:21 crc kubenswrapper[4806]: I1125 15:50:21.741405 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lhv77"]
Nov 25 15:50:22 crc kubenswrapper[4806]: I1125 15:50:22.901488 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lhv77" podUID="4954136c-d883-4181-9830-44af6ee07e51" containerName="registry-server" containerID="cri-o://0b2c91de4eba50ec117bcdac211e9dd52a4c06993d120640811d220943764fab" gracePeriod=2
Nov 25 15:50:23 crc kubenswrapper[4806]: I1125 15:50:23.643522 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lhv77"
Nov 25 15:50:23 crc kubenswrapper[4806]: I1125 15:50:23.715746 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4954136c-d883-4181-9830-44af6ee07e51-catalog-content\") pod \"4954136c-d883-4181-9830-44af6ee07e51\" (UID: \"4954136c-d883-4181-9830-44af6ee07e51\") "
Nov 25 15:50:23 crc kubenswrapper[4806]: I1125 15:50:23.715936 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwszv\" (UniqueName: \"kubernetes.io/projected/4954136c-d883-4181-9830-44af6ee07e51-kube-api-access-lwszv\") pod \"4954136c-d883-4181-9830-44af6ee07e51\" (UID: \"4954136c-d883-4181-9830-44af6ee07e51\") "
Nov 25 15:50:23 crc kubenswrapper[4806]: I1125 15:50:23.715998 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4954136c-d883-4181-9830-44af6ee07e51-utilities\") pod \"4954136c-d883-4181-9830-44af6ee07e51\" (UID: \"4954136c-d883-4181-9830-44af6ee07e51\") "
Nov 25 15:50:23 crc kubenswrapper[4806]: I1125 15:50:23.717243 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4954136c-d883-4181-9830-44af6ee07e51-utilities" (OuterVolumeSpecName: "utilities") pod "4954136c-d883-4181-9830-44af6ee07e51" (UID: "4954136c-d883-4181-9830-44af6ee07e51"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 15:50:23 crc kubenswrapper[4806]: I1125 15:50:23.722063 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4954136c-d883-4181-9830-44af6ee07e51-kube-api-access-lwszv" (OuterVolumeSpecName: "kube-api-access-lwszv") pod "4954136c-d883-4181-9830-44af6ee07e51" (UID: "4954136c-d883-4181-9830-44af6ee07e51"). InnerVolumeSpecName "kube-api-access-lwszv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:50:23 crc kubenswrapper[4806]: I1125 15:50:23.819202 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwszv\" (UniqueName: \"kubernetes.io/projected/4954136c-d883-4181-9830-44af6ee07e51-kube-api-access-lwszv\") on node \"crc\" DevicePath \"\""
Nov 25 15:50:23 crc kubenswrapper[4806]: I1125 15:50:23.819256 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4954136c-d883-4181-9830-44af6ee07e51-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 15:50:23 crc kubenswrapper[4806]: I1125 15:50:23.823066 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4954136c-d883-4181-9830-44af6ee07e51-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4954136c-d883-4181-9830-44af6ee07e51" (UID: "4954136c-d883-4181-9830-44af6ee07e51"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 15:50:23 crc kubenswrapper[4806]: I1125 15:50:23.914251 4806 generic.go:334] "Generic (PLEG): container finished" podID="4954136c-d883-4181-9830-44af6ee07e51" containerID="0b2c91de4eba50ec117bcdac211e9dd52a4c06993d120640811d220943764fab" exitCode=0
Nov 25 15:50:23 crc kubenswrapper[4806]: I1125 15:50:23.914587 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhv77" event={"ID":"4954136c-d883-4181-9830-44af6ee07e51","Type":"ContainerDied","Data":"0b2c91de4eba50ec117bcdac211e9dd52a4c06993d120640811d220943764fab"}
Nov 25 15:50:23 crc kubenswrapper[4806]: I1125 15:50:23.914621 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lhv77" event={"ID":"4954136c-d883-4181-9830-44af6ee07e51","Type":"ContainerDied","Data":"7e3f1314fe8bde05915cc9d82f4f2e41e58172ef5b911db91d66a9dd6a801d77"}
Nov 25 15:50:23 crc kubenswrapper[4806]: I1125 15:50:23.914644 4806 scope.go:117] "RemoveContainer" containerID="0b2c91de4eba50ec117bcdac211e9dd52a4c06993d120640811d220943764fab"
Nov 25 15:50:23 crc kubenswrapper[4806]: I1125 15:50:23.914815 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lhv77"
Nov 25 15:50:23 crc kubenswrapper[4806]: I1125 15:50:23.921629 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4954136c-d883-4181-9830-44af6ee07e51-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 15:50:23 crc kubenswrapper[4806]: I1125 15:50:23.971008 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lhv77"]
Nov 25 15:50:23 crc kubenswrapper[4806]: I1125 15:50:23.987085 4806 scope.go:117] "RemoveContainer" containerID="ff48bae23346197b43055da253475a9d901d492d21a152abde1afdd8b7f41059"
Nov 25 15:50:23 crc kubenswrapper[4806]: I1125 15:50:23.988588 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lhv77"]
Nov 25 15:50:24 crc kubenswrapper[4806]: I1125 15:50:24.023575 4806 scope.go:117] "RemoveContainer" containerID="c0c1bf2b37c8c320fbdaec97632a85397f1b8c141d80b2d3203cd51f1e2a5a19"
Nov 25 15:50:24 crc kubenswrapper[4806]: I1125 15:50:24.081971 4806 scope.go:117] "RemoveContainer" containerID="0b2c91de4eba50ec117bcdac211e9dd52a4c06993d120640811d220943764fab"
Nov 25 15:50:24 crc kubenswrapper[4806]: E1125 15:50:24.082535 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b2c91de4eba50ec117bcdac211e9dd52a4c06993d120640811d220943764fab\": container with ID starting with 0b2c91de4eba50ec117bcdac211e9dd52a4c06993d120640811d220943764fab not found: ID does not exist" containerID="0b2c91de4eba50ec117bcdac211e9dd52a4c06993d120640811d220943764fab"
Nov 25 15:50:24 crc kubenswrapper[4806]: I1125 15:50:24.082594 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b2c91de4eba50ec117bcdac211e9dd52a4c06993d120640811d220943764fab"} err="failed to get container status \"0b2c91de4eba50ec117bcdac211e9dd52a4c06993d120640811d220943764fab\": rpc error: code = NotFound desc = could not find container \"0b2c91de4eba50ec117bcdac211e9dd52a4c06993d120640811d220943764fab\": container with ID starting with 0b2c91de4eba50ec117bcdac211e9dd52a4c06993d120640811d220943764fab not found: ID does not exist"
Nov 25 15:50:24 crc kubenswrapper[4806]: I1125 15:50:24.082640 4806 scope.go:117] "RemoveContainer" containerID="ff48bae23346197b43055da253475a9d901d492d21a152abde1afdd8b7f41059"
Nov 25 15:50:24 crc kubenswrapper[4806]: E1125 15:50:24.082892 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff48bae23346197b43055da253475a9d901d492d21a152abde1afdd8b7f41059\": container with ID starting with ff48bae23346197b43055da253475a9d901d492d21a152abde1afdd8b7f41059 not found: ID does not exist" containerID="ff48bae23346197b43055da253475a9d901d492d21a152abde1afdd8b7f41059"
Nov 25 15:50:24 crc kubenswrapper[4806]: I1125 15:50:24.082919 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff48bae23346197b43055da253475a9d901d492d21a152abde1afdd8b7f41059"} err="failed to get container status \"ff48bae23346197b43055da253475a9d901d492d21a152abde1afdd8b7f41059\": rpc error: code = NotFound desc = could not find container \"ff48bae23346197b43055da253475a9d901d492d21a152abde1afdd8b7f41059\": container with ID starting with ff48bae23346197b43055da253475a9d901d492d21a152abde1afdd8b7f41059 not found: ID does not exist"
Nov 25 15:50:24 crc kubenswrapper[4806]: I1125 15:50:24.082956 4806 scope.go:117] "RemoveContainer" containerID="c0c1bf2b37c8c320fbdaec97632a85397f1b8c141d80b2d3203cd51f1e2a5a19"
Nov 25 15:50:24 crc kubenswrapper[4806]: E1125 15:50:24.083217 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0c1bf2b37c8c320fbdaec97632a85397f1b8c141d80b2d3203cd51f1e2a5a19\": container with ID starting with c0c1bf2b37c8c320fbdaec97632a85397f1b8c141d80b2d3203cd51f1e2a5a19 not found: ID does not exist" containerID="c0c1bf2b37c8c320fbdaec97632a85397f1b8c141d80b2d3203cd51f1e2a5a19"
Nov 25 15:50:24 crc kubenswrapper[4806]: I1125 15:50:24.083242 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0c1bf2b37c8c320fbdaec97632a85397f1b8c141d80b2d3203cd51f1e2a5a19"} err="failed to get container status \"c0c1bf2b37c8c320fbdaec97632a85397f1b8c141d80b2d3203cd51f1e2a5a19\": rpc error: code = NotFound desc = could not find container \"c0c1bf2b37c8c320fbdaec97632a85397f1b8c141d80b2d3203cd51f1e2a5a19\": container with ID starting with c0c1bf2b37c8c320fbdaec97632a85397f1b8c141d80b2d3203cd51f1e2a5a19 not found: ID does not exist"
Nov 25 15:50:24 crc kubenswrapper[4806]: I1125 15:50:24.103268 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4954136c-d883-4181-9830-44af6ee07e51" path="/var/lib/kubelet/pods/4954136c-d883-4181-9830-44af6ee07e51/volumes"
Nov 25 15:50:32 crc kubenswrapper[4806]: I1125 15:50:32.089537 4806 scope.go:117] "RemoveContainer" containerID="c5665d24da59a69058ea2c9b904dc059808ec3dec416e24bf589327eb7f765c5"
Nov 25 15:50:33 crc kubenswrapper[4806]: I1125 15:50:33.018610 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" event={"ID":"39baff20-1e9a-48b1-8872-155c5ad5931d","Type":"ContainerStarted","Data":"879ed2685760d893a00db6f9136d22093b915cafa45b3789e7c9724bba0ce08e"}
Nov 25 15:50:55 crc kubenswrapper[4806]: I1125 15:50:55.115462 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5kjkp"]
Nov 25 15:50:55 crc kubenswrapper[4806]: E1125 15:50:55.116489 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4954136c-d883-4181-9830-44af6ee07e51" containerName="registry-server"
Nov 25 15:50:55 crc kubenswrapper[4806]: I1125 15:50:55.116507 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="4954136c-d883-4181-9830-44af6ee07e51" containerName="registry-server"
Nov 25 15:50:55 crc kubenswrapper[4806]: E1125 15:50:55.116544 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4954136c-d883-4181-9830-44af6ee07e51" containerName="extract-content"
Nov 25 15:50:55 crc kubenswrapper[4806]: I1125 15:50:55.116550 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="4954136c-d883-4181-9830-44af6ee07e51" containerName="extract-content"
Nov 25 15:50:55 crc kubenswrapper[4806]: E1125 15:50:55.116593 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4954136c-d883-4181-9830-44af6ee07e51" containerName="extract-utilities"
Nov 25 15:50:55 crc kubenswrapper[4806]: I1125 15:50:55.116602 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="4954136c-d883-4181-9830-44af6ee07e51" containerName="extract-utilities"
Nov 25 15:50:55 crc kubenswrapper[4806]: I1125 15:50:55.116858 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="4954136c-d883-4181-9830-44af6ee07e51" containerName="registry-server"
Nov 25 15:50:55 crc kubenswrapper[4806]: I1125 15:50:55.118763 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5kjkp"
Nov 25 15:50:55 crc kubenswrapper[4806]: I1125 15:50:55.129981 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5kjkp"]
Nov 25 15:50:55 crc kubenswrapper[4806]: I1125 15:50:55.243452 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e103a9d-b9b6-4c13-89b8-1139dc57190f-utilities\") pod \"certified-operators-5kjkp\" (UID: \"3e103a9d-b9b6-4c13-89b8-1139dc57190f\") " pod="openshift-marketplace/certified-operators-5kjkp"
Nov 25 15:50:55 crc kubenswrapper[4806]: I1125 15:50:55.243844 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e103a9d-b9b6-4c13-89b8-1139dc57190f-catalog-content\") pod \"certified-operators-5kjkp\" (UID: \"3e103a9d-b9b6-4c13-89b8-1139dc57190f\") " pod="openshift-marketplace/certified-operators-5kjkp"
Nov 25 15:50:55 crc kubenswrapper[4806]: I1125 15:50:55.243896 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsrzl\" (UniqueName: \"kubernetes.io/projected/3e103a9d-b9b6-4c13-89b8-1139dc57190f-kube-api-access-tsrzl\") pod \"certified-operators-5kjkp\" (UID: \"3e103a9d-b9b6-4c13-89b8-1139dc57190f\") " pod="openshift-marketplace/certified-operators-5kjkp"
Nov 25 15:50:55 crc kubenswrapper[4806]: I1125 15:50:55.345967 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e103a9d-b9b6-4c13-89b8-1139dc57190f-utilities\") pod \"certified-operators-5kjkp\" (UID: \"3e103a9d-b9b6-4c13-89b8-1139dc57190f\") " pod="openshift-marketplace/certified-operators-5kjkp"
Nov 25 15:50:55 crc kubenswrapper[4806]: I1125 15:50:55.346066 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e103a9d-b9b6-4c13-89b8-1139dc57190f-catalog-content\") pod \"certified-operators-5kjkp\" (UID: \"3e103a9d-b9b6-4c13-89b8-1139dc57190f\") " pod="openshift-marketplace/certified-operators-5kjkp"
Nov 25 15:50:55 crc kubenswrapper[4806]: I1125 15:50:55.346140 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsrzl\" (UniqueName: \"kubernetes.io/projected/3e103a9d-b9b6-4c13-89b8-1139dc57190f-kube-api-access-tsrzl\") pod \"certified-operators-5kjkp\" (UID: \"3e103a9d-b9b6-4c13-89b8-1139dc57190f\") " pod="openshift-marketplace/certified-operators-5kjkp"
Nov 25 15:50:55 crc kubenswrapper[4806]: I1125 15:50:55.346607 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e103a9d-b9b6-4c13-89b8-1139dc57190f-utilities\") pod \"certified-operators-5kjkp\" (UID: \"3e103a9d-b9b6-4c13-89b8-1139dc57190f\") " pod="openshift-marketplace/certified-operators-5kjkp"
Nov 25 15:50:55 crc kubenswrapper[4806]: I1125 15:50:55.346681 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e103a9d-b9b6-4c13-89b8-1139dc57190f-catalog-content\") pod \"certified-operators-5kjkp\" (UID: \"3e103a9d-b9b6-4c13-89b8-1139dc57190f\") " pod="openshift-marketplace/certified-operators-5kjkp"
Nov 25 15:50:55 crc kubenswrapper[4806]: I1125 15:50:55.366292 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsrzl\" (UniqueName: \"kubernetes.io/projected/3e103a9d-b9b6-4c13-89b8-1139dc57190f-kube-api-access-tsrzl\") pod \"certified-operators-5kjkp\" (UID: \"3e103a9d-b9b6-4c13-89b8-1139dc57190f\") " pod="openshift-marketplace/certified-operators-5kjkp"
Nov 25 15:50:55 crc kubenswrapper[4806]: I1125 15:50:55.447781 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5kjkp"
Nov 25 15:50:56 crc kubenswrapper[4806]: I1125 15:50:56.005612 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5kjkp"]
Nov 25 15:50:56 crc kubenswrapper[4806]: I1125 15:50:56.284566 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5kjkp" event={"ID":"3e103a9d-b9b6-4c13-89b8-1139dc57190f","Type":"ContainerStarted","Data":"93b391ca181a970d8d88425886abcce7b2b082db68b3c6455ed086d94063e88e"}
Nov 25 15:50:57 crc kubenswrapper[4806]: I1125 15:50:57.298240 4806 generic.go:334] "Generic (PLEG): container finished" podID="3e103a9d-b9b6-4c13-89b8-1139dc57190f" containerID="f383f853568cbd0083cdc1a8e9b35b22af5442121274ead30f0d5790e241b8cb" exitCode=0
Nov 25 15:50:57 crc kubenswrapper[4806]: I1125 15:50:57.299141 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5kjkp" event={"ID":"3e103a9d-b9b6-4c13-89b8-1139dc57190f","Type":"ContainerDied","Data":"f383f853568cbd0083cdc1a8e9b35b22af5442121274ead30f0d5790e241b8cb"}
Nov 25 15:50:59 crc kubenswrapper[4806]: I1125 15:50:59.326538 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5kjkp" event={"ID":"3e103a9d-b9b6-4c13-89b8-1139dc57190f","Type":"ContainerStarted","Data":"4b3a1b4f961db6eed4298807e5aaa41e0fa2501fe0a7824dc209b2b6490b760f"}
Nov 25 15:51:01 crc kubenswrapper[4806]: I1125 15:51:01.345395 4806 generic.go:334] "Generic (PLEG): container finished" podID="3e103a9d-b9b6-4c13-89b8-1139dc57190f" containerID="4b3a1b4f961db6eed4298807e5aaa41e0fa2501fe0a7824dc209b2b6490b760f" exitCode=0
Nov 25 15:51:01 crc kubenswrapper[4806]: I1125 15:51:01.345484 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5kjkp" event={"ID":"3e103a9d-b9b6-4c13-89b8-1139dc57190f","Type":"ContainerDied","Data":"4b3a1b4f961db6eed4298807e5aaa41e0fa2501fe0a7824dc209b2b6490b760f"}
Nov 25 15:51:02 crc kubenswrapper[4806]: I1125 15:51:02.357725 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5kjkp" event={"ID":"3e103a9d-b9b6-4c13-89b8-1139dc57190f","Type":"ContainerStarted","Data":"05a516aa3c9803e103973f188cbb15997ad9dd37db84185efccc9f72909fb535"}
Nov 25 15:51:02 crc kubenswrapper[4806]: I1125 15:51:02.377589 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5kjkp" podStartSLOduration=2.846333871 podStartE2EDuration="7.37756805s" podCreationTimestamp="2025-11-25 15:50:55 +0000 UTC" firstStartedPulling="2025-11-25 15:50:57.301845539 +0000 UTC m=+3489.953987950" lastFinishedPulling="2025-11-25 15:51:01.833079718 +0000 UTC m=+3494.485222129" observedRunningTime="2025-11-25 15:51:02.372017138 +0000 UTC m=+3495.024159559" watchObservedRunningTime="2025-11-25 15:51:02.37756805 +0000 UTC m=+3495.029710461"
Nov 25 15:51:05 crc kubenswrapper[4806]: I1125 15:51:05.448142 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5kjkp"
Nov 25 15:51:05 crc kubenswrapper[4806]: I1125 15:51:05.448764 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5kjkp"
Nov 25 15:51:05 crc kubenswrapper[4806]: I1125 15:51:05.510486 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5kjkp"
Nov 25 15:51:15 crc kubenswrapper[4806]: I1125 15:51:15.504689 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5kjkp"
Nov 25 15:51:15 crc kubenswrapper[4806]: I1125 15:51:15.566103 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5kjkp"]
Nov 25 15:51:16 crc kubenswrapper[4806]: I1125 15:51:16.506546 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5kjkp" podUID="3e103a9d-b9b6-4c13-89b8-1139dc57190f" containerName="registry-server" containerID="cri-o://05a516aa3c9803e103973f188cbb15997ad9dd37db84185efccc9f72909fb535" gracePeriod=2
Nov 25 15:51:17 crc kubenswrapper[4806]: I1125 15:51:17.134998 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5kjkp"
Nov 25 15:51:17 crc kubenswrapper[4806]: I1125 15:51:17.243109 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsrzl\" (UniqueName: \"kubernetes.io/projected/3e103a9d-b9b6-4c13-89b8-1139dc57190f-kube-api-access-tsrzl\") pod \"3e103a9d-b9b6-4c13-89b8-1139dc57190f\" (UID: \"3e103a9d-b9b6-4c13-89b8-1139dc57190f\") "
Nov 25 15:51:17 crc kubenswrapper[4806]: I1125 15:51:17.243431 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e103a9d-b9b6-4c13-89b8-1139dc57190f-utilities\") pod \"3e103a9d-b9b6-4c13-89b8-1139dc57190f\" (UID: \"3e103a9d-b9b6-4c13-89b8-1139dc57190f\") "
Nov 25 15:51:17 crc kubenswrapper[4806]: I1125 15:51:17.243613 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e103a9d-b9b6-4c13-89b8-1139dc57190f-catalog-content\") pod \"3e103a9d-b9b6-4c13-89b8-1139dc57190f\" (UID: \"3e103a9d-b9b6-4c13-89b8-1139dc57190f\") "
Nov 25 15:51:17 crc kubenswrapper[4806]: I1125 15:51:17.244711 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e103a9d-b9b6-4c13-89b8-1139dc57190f-utilities" (OuterVolumeSpecName: "utilities") pod "3e103a9d-b9b6-4c13-89b8-1139dc57190f" (UID: "3e103a9d-b9b6-4c13-89b8-1139dc57190f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 15:51:17 crc kubenswrapper[4806]: I1125 15:51:17.249955 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e103a9d-b9b6-4c13-89b8-1139dc57190f-kube-api-access-tsrzl" (OuterVolumeSpecName: "kube-api-access-tsrzl") pod "3e103a9d-b9b6-4c13-89b8-1139dc57190f" (UID: "3e103a9d-b9b6-4c13-89b8-1139dc57190f"). InnerVolumeSpecName "kube-api-access-tsrzl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:51:17 crc kubenswrapper[4806]: I1125 15:51:17.319075 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e103a9d-b9b6-4c13-89b8-1139dc57190f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3e103a9d-b9b6-4c13-89b8-1139dc57190f" (UID: "3e103a9d-b9b6-4c13-89b8-1139dc57190f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 15:51:17 crc kubenswrapper[4806]: I1125 15:51:17.346709 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e103a9d-b9b6-4c13-89b8-1139dc57190f-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 15:51:17 crc kubenswrapper[4806]: I1125 15:51:17.346750 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e103a9d-b9b6-4c13-89b8-1139dc57190f-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 15:51:17 crc kubenswrapper[4806]: I1125 15:51:17.346772 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsrzl\" (UniqueName: \"kubernetes.io/projected/3e103a9d-b9b6-4c13-89b8-1139dc57190f-kube-api-access-tsrzl\") on node \"crc\" DevicePath \"\""
Nov 25 15:51:17 crc kubenswrapper[4806]: I1125 15:51:17.522662 4806 generic.go:334] "Generic (PLEG): container finished" podID="3e103a9d-b9b6-4c13-89b8-1139dc57190f" containerID="05a516aa3c9803e103973f188cbb15997ad9dd37db84185efccc9f72909fb535" exitCode=0
Nov 25 15:51:17 crc kubenswrapper[4806]: I1125 15:51:17.522762 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5kjkp"
Nov 25 15:51:17 crc kubenswrapper[4806]: I1125 15:51:17.522744 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5kjkp" event={"ID":"3e103a9d-b9b6-4c13-89b8-1139dc57190f","Type":"ContainerDied","Data":"05a516aa3c9803e103973f188cbb15997ad9dd37db84185efccc9f72909fb535"}
Nov 25 15:51:17 crc kubenswrapper[4806]: I1125 15:51:17.522961 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5kjkp" event={"ID":"3e103a9d-b9b6-4c13-89b8-1139dc57190f","Type":"ContainerDied","Data":"93b391ca181a970d8d88425886abcce7b2b082db68b3c6455ed086d94063e88e"}
Nov 25 15:51:17 crc kubenswrapper[4806]: I1125 15:51:17.522998 4806 scope.go:117] "RemoveContainer" containerID="05a516aa3c9803e103973f188cbb15997ad9dd37db84185efccc9f72909fb535"
Nov 25 15:51:17 crc kubenswrapper[4806]: I1125 15:51:17.547601 4806 scope.go:117] "RemoveContainer" containerID="4b3a1b4f961db6eed4298807e5aaa41e0fa2501fe0a7824dc209b2b6490b760f"
Nov 25 15:51:17 crc kubenswrapper[4806]: I1125 15:51:17.563652 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5kjkp"]
Nov 25 15:51:17 crc kubenswrapper[4806]: I1125 15:51:17.571574 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5kjkp"]
Nov 25 15:51:17 crc kubenswrapper[4806]: I1125 15:51:17.587433 4806 scope.go:117] "RemoveContainer" containerID="f383f853568cbd0083cdc1a8e9b35b22af5442121274ead30f0d5790e241b8cb"
Nov 25 15:51:17 crc kubenswrapper[4806]: I1125 15:51:17.622454 4806 scope.go:117] "RemoveContainer" containerID="05a516aa3c9803e103973f188cbb15997ad9dd37db84185efccc9f72909fb535"
Nov 25 15:51:17 crc kubenswrapper[4806]: E1125 15:51:17.623092 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05a516aa3c9803e103973f188cbb15997ad9dd37db84185efccc9f72909fb535\": container with ID starting with 05a516aa3c9803e103973f188cbb15997ad9dd37db84185efccc9f72909fb535 not found: ID does not exist" containerID="05a516aa3c9803e103973f188cbb15997ad9dd37db84185efccc9f72909fb535"
Nov 25 15:51:17 crc kubenswrapper[4806]: I1125 15:51:17.623150 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05a516aa3c9803e103973f188cbb15997ad9dd37db84185efccc9f72909fb535"} err="failed to get container status \"05a516aa3c9803e103973f188cbb15997ad9dd37db84185efccc9f72909fb535\": rpc error: code = NotFound desc = could not find container \"05a516aa3c9803e103973f188cbb15997ad9dd37db84185efccc9f72909fb535\": container with ID starting with 05a516aa3c9803e103973f188cbb15997ad9dd37db84185efccc9f72909fb535 not found: ID does not exist"
Nov 25 15:51:17 crc kubenswrapper[4806]: I1125 15:51:17.623190 4806 scope.go:117] "RemoveContainer" containerID="4b3a1b4f961db6eed4298807e5aaa41e0fa2501fe0a7824dc209b2b6490b760f"
Nov 25 15:51:17 crc kubenswrapper[4806]: E1125 15:51:17.623858 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b3a1b4f961db6eed4298807e5aaa41e0fa2501fe0a7824dc209b2b6490b760f\": container with ID starting with 4b3a1b4f961db6eed4298807e5aaa41e0fa2501fe0a7824dc209b2b6490b760f not found: ID does not exist" containerID="4b3a1b4f961db6eed4298807e5aaa41e0fa2501fe0a7824dc209b2b6490b760f"
Nov 25 15:51:17 crc kubenswrapper[4806]: I1125 15:51:17.623887 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b3a1b4f961db6eed4298807e5aaa41e0fa2501fe0a7824dc209b2b6490b760f"} err="failed to get container status \"4b3a1b4f961db6eed4298807e5aaa41e0fa2501fe0a7824dc209b2b6490b760f\": rpc error: code = NotFound desc = could not find container \"4b3a1b4f961db6eed4298807e5aaa41e0fa2501fe0a7824dc209b2b6490b760f\": container with ID starting with 4b3a1b4f961db6eed4298807e5aaa41e0fa2501fe0a7824dc209b2b6490b760f not found: ID does not exist"
Nov 25 15:51:17 crc kubenswrapper[4806]: I1125 15:51:17.623906 4806 scope.go:117] "RemoveContainer" containerID="f383f853568cbd0083cdc1a8e9b35b22af5442121274ead30f0d5790e241b8cb"
Nov 25 15:51:17 crc kubenswrapper[4806]: E1125 15:51:17.624330 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f383f853568cbd0083cdc1a8e9b35b22af5442121274ead30f0d5790e241b8cb\": container with ID starting with f383f853568cbd0083cdc1a8e9b35b22af5442121274ead30f0d5790e241b8cb not found: ID does not exist" containerID="f383f853568cbd0083cdc1a8e9b35b22af5442121274ead30f0d5790e241b8cb"
Nov 25 15:51:17 crc kubenswrapper[4806]: I1125 15:51:17.624395 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f383f853568cbd0083cdc1a8e9b35b22af5442121274ead30f0d5790e241b8cb"} err="failed to get container status \"f383f853568cbd0083cdc1a8e9b35b22af5442121274ead30f0d5790e241b8cb\": rpc error: code = NotFound desc = could not find container \"f383f853568cbd0083cdc1a8e9b35b22af5442121274ead30f0d5790e241b8cb\": container with ID starting with f383f853568cbd0083cdc1a8e9b35b22af5442121274ead30f0d5790e241b8cb not found: ID does not exist"
Nov 25 15:51:18 crc kubenswrapper[4806]: I1125 15:51:18.104878 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e103a9d-b9b6-4c13-89b8-1139dc57190f" path="/var/lib/kubelet/pods/3e103a9d-b9b6-4c13-89b8-1139dc57190f/volumes"
Nov 25 15:51:26 crc kubenswrapper[4806]: I1125 15:51:26.528410 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kg9c6"]
Nov 25 15:51:26 crc kubenswrapper[4806]: E1125 15:51:26.529657 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e103a9d-b9b6-4c13-89b8-1139dc57190f" containerName="extract-content"
Nov 25 15:51:26 crc kubenswrapper[4806]: I1125 15:51:26.529677 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e103a9d-b9b6-4c13-89b8-1139dc57190f" containerName="extract-content"
Nov 25 15:51:26 crc kubenswrapper[4806]: E1125 15:51:26.529694 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e103a9d-b9b6-4c13-89b8-1139dc57190f" containerName="extract-utilities"
Nov 25 15:51:26 crc kubenswrapper[4806]: I1125 15:51:26.529702 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e103a9d-b9b6-4c13-89b8-1139dc57190f" containerName="extract-utilities"
Nov 25 15:51:26 crc kubenswrapper[4806]: E1125 15:51:26.529718 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e103a9d-b9b6-4c13-89b8-1139dc57190f" containerName="registry-server"
Nov 25 15:51:26 crc kubenswrapper[4806]: I1125 15:51:26.529726 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e103a9d-b9b6-4c13-89b8-1139dc57190f" containerName="registry-server"
Nov 25 15:51:26 crc kubenswrapper[4806]: I1125 15:51:26.529956 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e103a9d-b9b6-4c13-89b8-1139dc57190f" containerName="registry-server"
Nov 25 15:51:26 crc kubenswrapper[4806]: I1125 15:51:26.532243 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kg9c6"
Nov 25 15:51:26 crc kubenswrapper[4806]: I1125 15:51:26.543061 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kg9c6"]
Nov 25 15:51:26 crc kubenswrapper[4806]: I1125 15:51:26.636244 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-582kd\" (UniqueName: \"kubernetes.io/projected/7741a4b6-d6e3-43f3-8b11-c466de1cdedf-kube-api-access-582kd\") pod \"redhat-marketplace-kg9c6\" (UID: \"7741a4b6-d6e3-43f3-8b11-c466de1cdedf\") " pod="openshift-marketplace/redhat-marketplace-kg9c6"
Nov 25 15:51:26 crc kubenswrapper[4806]: I1125 15:51:26.636498 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7741a4b6-d6e3-43f3-8b11-c466de1cdedf-catalog-content\") pod \"redhat-marketplace-kg9c6\" (UID: \"7741a4b6-d6e3-43f3-8b11-c466de1cdedf\") " pod="openshift-marketplace/redhat-marketplace-kg9c6"
Nov 25 15:51:26 crc kubenswrapper[4806]: I1125 15:51:26.636543 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7741a4b6-d6e3-43f3-8b11-c466de1cdedf-utilities\") pod \"redhat-marketplace-kg9c6\" (UID: \"7741a4b6-d6e3-43f3-8b11-c466de1cdedf\") " pod="openshift-marketplace/redhat-marketplace-kg9c6"
Nov 25 15:51:26 crc kubenswrapper[4806]: I1125 15:51:26.738488 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-582kd\" (UniqueName: \"kubernetes.io/projected/7741a4b6-d6e3-43f3-8b11-c466de1cdedf-kube-api-access-582kd\") pod \"redhat-marketplace-kg9c6\" (UID: \"7741a4b6-d6e3-43f3-8b11-c466de1cdedf\") " pod="openshift-marketplace/redhat-marketplace-kg9c6"
Nov 25 15:51:26 crc kubenswrapper[4806]: I1125 15:51:26.738944 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7741a4b6-d6e3-43f3-8b11-c466de1cdedf-catalog-content\") pod \"redhat-marketplace-kg9c6\" (UID: \"7741a4b6-d6e3-43f3-8b11-c466de1cdedf\") " pod="openshift-marketplace/redhat-marketplace-kg9c6"
Nov 25 15:51:26 crc kubenswrapper[4806]: I1125 15:51:26.738980 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7741a4b6-d6e3-43f3-8b11-c466de1cdedf-utilities\") pod \"redhat-marketplace-kg9c6\" (UID: \"7741a4b6-d6e3-43f3-8b11-c466de1cdedf\") " pod="openshift-marketplace/redhat-marketplace-kg9c6"
Nov 25 15:51:26 crc kubenswrapper[4806]: I1125 15:51:26.739406 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7741a4b6-d6e3-43f3-8b11-c466de1cdedf-catalog-content\") pod \"redhat-marketplace-kg9c6\" (UID: \"7741a4b6-d6e3-43f3-8b11-c466de1cdedf\") " pod="openshift-marketplace/redhat-marketplace-kg9c6"
Nov 25 15:51:26 crc kubenswrapper[4806]: I1125 15:51:26.739464 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7741a4b6-d6e3-43f3-8b11-c466de1cdedf-utilities\") pod \"redhat-marketplace-kg9c6\" (UID: \"7741a4b6-d6e3-43f3-8b11-c466de1cdedf\") " pod="openshift-marketplace/redhat-marketplace-kg9c6"
Nov 25 15:51:26 crc kubenswrapper[4806]: I1125 15:51:26.768470 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-582kd\" (UniqueName: \"kubernetes.io/projected/7741a4b6-d6e3-43f3-8b11-c466de1cdedf-kube-api-access-582kd\") pod \"redhat-marketplace-kg9c6\" (UID: \"7741a4b6-d6e3-43f3-8b11-c466de1cdedf\") " pod="openshift-marketplace/redhat-marketplace-kg9c6"
Nov 25 15:51:26 crc kubenswrapper[4806]: I1125 15:51:26.859852 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kg9c6"
Nov 25 15:51:27 crc kubenswrapper[4806]: I1125 15:51:27.346022 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kg9c6"]
Nov 25 15:51:27 crc kubenswrapper[4806]: I1125 15:51:27.627286 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kg9c6" event={"ID":"7741a4b6-d6e3-43f3-8b11-c466de1cdedf","Type":"ContainerStarted","Data":"30c092f04d80e0bb15f13be764cfea8dafcb744ac3b64b4f17f79b994950fbe6"}
Nov 25 15:51:28 crc kubenswrapper[4806]: I1125 15:51:28.638037 4806 generic.go:334] "Generic (PLEG): container finished" podID="7741a4b6-d6e3-43f3-8b11-c466de1cdedf" containerID="a6d98402305cc30428b0a59868cc2edd77dd4470c50c8410f9bbf2c0f6b2a696" exitCode=0
Nov 25 15:51:28 crc kubenswrapper[4806]: I1125 15:51:28.638076 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kg9c6" event={"ID":"7741a4b6-d6e3-43f3-8b11-c466de1cdedf","Type":"ContainerDied","Data":"a6d98402305cc30428b0a59868cc2edd77dd4470c50c8410f9bbf2c0f6b2a696"}
Nov 25 15:51:29 crc kubenswrapper[4806]: I1125 15:51:29.649348 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kg9c6" event={"ID":"7741a4b6-d6e3-43f3-8b11-c466de1cdedf","Type":"ContainerStarted","Data":"007f6aac35c40c55043d1596c67fb29a6f4bc025bf4d1420a51fc4d3370a8dd0"}
Nov 25 15:51:30 crc kubenswrapper[4806]: I1125 15:51:30.660822 4806 generic.go:334] "Generic (PLEG): container finished" podID="7741a4b6-d6e3-43f3-8b11-c466de1cdedf" containerID="007f6aac35c40c55043d1596c67fb29a6f4bc025bf4d1420a51fc4d3370a8dd0" exitCode=0
Nov 25 15:51:30 crc kubenswrapper[4806]: I1125 15:51:30.660931 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kg9c6" event={"ID":"7741a4b6-d6e3-43f3-8b11-c466de1cdedf","Type":"ContainerDied","Data":"007f6aac35c40c55043d1596c67fb29a6f4bc025bf4d1420a51fc4d3370a8dd0"}
Nov 25 15:51:31 crc kubenswrapper[4806]: I1125 15:51:31.673573 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kg9c6" event={"ID":"7741a4b6-d6e3-43f3-8b11-c466de1cdedf","Type":"ContainerStarted","Data":"f447559215170c2da11a4d74383ce083289e4c97ee2c9e3f8ba59696bc818119"}
Nov 25 15:51:31 crc kubenswrapper[4806]: I1125 15:51:31.697808 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kg9c6" podStartSLOduration=3.137448921 podStartE2EDuration="5.697784014s" podCreationTimestamp="2025-11-25 15:51:26 +0000 UTC" firstStartedPulling="2025-11-25 15:51:28.640534038 +0000 UTC m=+3521.292676449" lastFinishedPulling="2025-11-25 15:51:31.200869131 +0000 UTC m=+3523.853011542" observedRunningTime="2025-11-25 15:51:31.696374916 +0000 UTC m=+3524.348517327" watchObservedRunningTime="2025-11-25 15:51:31.697784014 +0000 UTC m=+3524.349926425"
Nov 25 15:51:36 crc kubenswrapper[4806]: I1125 15:51:36.860888 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kg9c6"
Nov 25 15:51:36 crc kubenswrapper[4806]: I1125 15:51:36.861479 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kg9c6"
Nov 25 15:51:36 crc kubenswrapper[4806]: I1125 15:51:36.922451 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kg9c6"
Nov 25 15:51:37 crc kubenswrapper[4806]: I1125 15:51:37.818872 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kg9c6"
Nov 25 15:51:37 crc kubenswrapper[4806]: I1125 15:51:37.872152 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kg9c6"]
Nov 25 15:51:39 crc kubenswrapper[4806]: I1125 15:51:39.778784 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kg9c6" podUID="7741a4b6-d6e3-43f3-8b11-c466de1cdedf" containerName="registry-server" containerID="cri-o://f447559215170c2da11a4d74383ce083289e4c97ee2c9e3f8ba59696bc818119" gracePeriod=2
Nov 25 15:51:40 crc kubenswrapper[4806]: I1125 15:51:40.406687 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kg9c6"
Nov 25 15:51:40 crc kubenswrapper[4806]: I1125 15:51:40.473719 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-582kd\" (UniqueName: \"kubernetes.io/projected/7741a4b6-d6e3-43f3-8b11-c466de1cdedf-kube-api-access-582kd\") pod \"7741a4b6-d6e3-43f3-8b11-c466de1cdedf\" (UID: \"7741a4b6-d6e3-43f3-8b11-c466de1cdedf\") "
Nov 25 15:51:40 crc kubenswrapper[4806]: I1125 15:51:40.473803 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7741a4b6-d6e3-43f3-8b11-c466de1cdedf-utilities\") pod \"7741a4b6-d6e3-43f3-8b11-c466de1cdedf\" (UID: \"7741a4b6-d6e3-43f3-8b11-c466de1cdedf\") "
Nov 25 15:51:40 crc kubenswrapper[4806]: I1125 15:51:40.473846 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7741a4b6-d6e3-43f3-8b11-c466de1cdedf-catalog-content\") pod \"7741a4b6-d6e3-43f3-8b11-c466de1cdedf\" (UID: \"7741a4b6-d6e3-43f3-8b11-c466de1cdedf\") "
Nov 25 15:51:40 crc kubenswrapper[4806]: I1125 15:51:40.474593 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7741a4b6-d6e3-43f3-8b11-c466de1cdedf-utilities" (OuterVolumeSpecName: "utilities") pod "7741a4b6-d6e3-43f3-8b11-c466de1cdedf" (UID: "7741a4b6-d6e3-43f3-8b11-c466de1cdedf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 15:51:40 crc kubenswrapper[4806]: I1125 15:51:40.474814 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7741a4b6-d6e3-43f3-8b11-c466de1cdedf-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 15:51:40 crc kubenswrapper[4806]: I1125 15:51:40.478888 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7741a4b6-d6e3-43f3-8b11-c466de1cdedf-kube-api-access-582kd" (OuterVolumeSpecName: "kube-api-access-582kd") pod "7741a4b6-d6e3-43f3-8b11-c466de1cdedf" (UID: "7741a4b6-d6e3-43f3-8b11-c466de1cdedf"). InnerVolumeSpecName "kube-api-access-582kd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:51:40 crc kubenswrapper[4806]: I1125 15:51:40.499941 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7741a4b6-d6e3-43f3-8b11-c466de1cdedf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7741a4b6-d6e3-43f3-8b11-c466de1cdedf" (UID: "7741a4b6-d6e3-43f3-8b11-c466de1cdedf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 15:51:40 crc kubenswrapper[4806]: I1125 15:51:40.576395 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-582kd\" (UniqueName: \"kubernetes.io/projected/7741a4b6-d6e3-43f3-8b11-c466de1cdedf-kube-api-access-582kd\") on node \"crc\" DevicePath \"\""
Nov 25 15:51:40 crc kubenswrapper[4806]: I1125 15:51:40.576454 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7741a4b6-d6e3-43f3-8b11-c466de1cdedf-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 15:51:40 crc kubenswrapper[4806]: I1125 15:51:40.789288 4806 generic.go:334] "Generic (PLEG): container finished" podID="7741a4b6-d6e3-43f3-8b11-c466de1cdedf" containerID="f447559215170c2da11a4d74383ce083289e4c97ee2c9e3f8ba59696bc818119" exitCode=0
Nov 25 15:51:40 crc kubenswrapper[4806]: I1125 15:51:40.789353 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kg9c6" event={"ID":"7741a4b6-d6e3-43f3-8b11-c466de1cdedf","Type":"ContainerDied","Data":"f447559215170c2da11a4d74383ce083289e4c97ee2c9e3f8ba59696bc818119"}
Nov 25 15:51:40 crc kubenswrapper[4806]: I1125 15:51:40.789374 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kg9c6"
Nov 25 15:51:40 crc kubenswrapper[4806]: I1125 15:51:40.789386 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kg9c6" event={"ID":"7741a4b6-d6e3-43f3-8b11-c466de1cdedf","Type":"ContainerDied","Data":"30c092f04d80e0bb15f13be764cfea8dafcb744ac3b64b4f17f79b994950fbe6"}
Nov 25 15:51:40 crc kubenswrapper[4806]: I1125 15:51:40.789408 4806 scope.go:117] "RemoveContainer" containerID="f447559215170c2da11a4d74383ce083289e4c97ee2c9e3f8ba59696bc818119"
Nov 25 15:51:40 crc kubenswrapper[4806]: I1125 15:51:40.813470 4806 scope.go:117] "RemoveContainer" containerID="007f6aac35c40c55043d1596c67fb29a6f4bc025bf4d1420a51fc4d3370a8dd0"
Nov 25 15:51:40 crc kubenswrapper[4806]: I1125 15:51:40.826177 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kg9c6"]
Nov 25 15:51:40 crc kubenswrapper[4806]: I1125 15:51:40.838876 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kg9c6"]
Nov 25 15:51:40 crc kubenswrapper[4806]: I1125 15:51:40.854537 4806 scope.go:117] "RemoveContainer" containerID="a6d98402305cc30428b0a59868cc2edd77dd4470c50c8410f9bbf2c0f6b2a696"
Nov 25 15:51:40 crc kubenswrapper[4806]: I1125 15:51:40.891173 4806 scope.go:117] "RemoveContainer" containerID="f447559215170c2da11a4d74383ce083289e4c97ee2c9e3f8ba59696bc818119"
Nov 25 15:51:40 crc kubenswrapper[4806]: E1125 15:51:40.891796 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f447559215170c2da11a4d74383ce083289e4c97ee2c9e3f8ba59696bc818119\": container with ID starting with f447559215170c2da11a4d74383ce083289e4c97ee2c9e3f8ba59696bc818119 not found: ID does not exist" containerID="f447559215170c2da11a4d74383ce083289e4c97ee2c9e3f8ba59696bc818119"
Nov 25 15:51:40 crc kubenswrapper[4806]: I1125 15:51:40.891848 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f447559215170c2da11a4d74383ce083289e4c97ee2c9e3f8ba59696bc818119"} err="failed to get container status \"f447559215170c2da11a4d74383ce083289e4c97ee2c9e3f8ba59696bc818119\": rpc error: code = NotFound desc = could not find container \"f447559215170c2da11a4d74383ce083289e4c97ee2c9e3f8ba59696bc818119\": container with ID starting with f447559215170c2da11a4d74383ce083289e4c97ee2c9e3f8ba59696bc818119 not found: ID does not exist"
Nov 25 15:51:40 crc kubenswrapper[4806]: I1125 15:51:40.891879 4806 scope.go:117] "RemoveContainer" containerID="007f6aac35c40c55043d1596c67fb29a6f4bc025bf4d1420a51fc4d3370a8dd0"
Nov 25 15:51:40 crc kubenswrapper[4806]: E1125 15:51:40.892268 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"007f6aac35c40c55043d1596c67fb29a6f4bc025bf4d1420a51fc4d3370a8dd0\": container with ID starting with 007f6aac35c40c55043d1596c67fb29a6f4bc025bf4d1420a51fc4d3370a8dd0 not found: ID does not exist" containerID="007f6aac35c40c55043d1596c67fb29a6f4bc025bf4d1420a51fc4d3370a8dd0"
Nov 25 15:51:40 crc kubenswrapper[4806]: I1125 15:51:40.892330 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"007f6aac35c40c55043d1596c67fb29a6f4bc025bf4d1420a51fc4d3370a8dd0"} err="failed to get container status \"007f6aac35c40c55043d1596c67fb29a6f4bc025bf4d1420a51fc4d3370a8dd0\": rpc error: code = NotFound desc = could not find container \"007f6aac35c40c55043d1596c67fb29a6f4bc025bf4d1420a51fc4d3370a8dd0\": container with ID starting with 007f6aac35c40c55043d1596c67fb29a6f4bc025bf4d1420a51fc4d3370a8dd0 not found: ID does not exist"
Nov 25 15:51:40 crc kubenswrapper[4806]: I1125 15:51:40.892354 4806 scope.go:117] "RemoveContainer" containerID="a6d98402305cc30428b0a59868cc2edd77dd4470c50c8410f9bbf2c0f6b2a696"
Nov 25 15:51:40 crc kubenswrapper[4806]: E1125 15:51:40.892661 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6d98402305cc30428b0a59868cc2edd77dd4470c50c8410f9bbf2c0f6b2a696\": container with ID starting with a6d98402305cc30428b0a59868cc2edd77dd4470c50c8410f9bbf2c0f6b2a696 not found: ID does not exist" containerID="a6d98402305cc30428b0a59868cc2edd77dd4470c50c8410f9bbf2c0f6b2a696"
Nov 25 15:51:40 crc kubenswrapper[4806]: I1125 15:51:40.892741 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6d98402305cc30428b0a59868cc2edd77dd4470c50c8410f9bbf2c0f6b2a696"} err="failed to get container status \"a6d98402305cc30428b0a59868cc2edd77dd4470c50c8410f9bbf2c0f6b2a696\": rpc error: code = NotFound desc = could not find container \"a6d98402305cc30428b0a59868cc2edd77dd4470c50c8410f9bbf2c0f6b2a696\": container with ID starting with a6d98402305cc30428b0a59868cc2edd77dd4470c50c8410f9bbf2c0f6b2a696 not found: ID does not exist"
Nov 25 15:51:42 crc kubenswrapper[4806]: I1125 15:51:42.101184 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7741a4b6-d6e3-43f3-8b11-c466de1cdedf" path="/var/lib/kubelet/pods/7741a4b6-d6e3-43f3-8b11-c466de1cdedf/volumes"
Nov 25 15:51:57 crc kubenswrapper[4806]: I1125 15:51:57.976012 4806 generic.go:334] "Generic (PLEG): container finished" podID="2ac30dde-ccba-4cb3-a2e4-540d47610c83" containerID="85386a81b161fff985daa8e811b06728388ba207a6e9a641d3dffbf7bb2036c5" exitCode=0
Nov 25 15:51:57 crc kubenswrapper[4806]: I1125 15:51:57.976120 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"2ac30dde-ccba-4cb3-a2e4-540d47610c83","Type":"ContainerDied","Data":"85386a81b161fff985daa8e811b06728388ba207a6e9a641d3dffbf7bb2036c5"}
Nov 25 15:51:59 crc kubenswrapper[4806]: I1125 15:51:59.502921 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Nov 25 15:51:59 crc kubenswrapper[4806]: I1125 15:51:59.688916 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2ac30dde-ccba-4cb3-a2e4-540d47610c83-test-operator-ephemeral-temporary\") pod \"2ac30dde-ccba-4cb3-a2e4-540d47610c83\" (UID: \"2ac30dde-ccba-4cb3-a2e4-540d47610c83\") "
Nov 25 15:51:59 crc kubenswrapper[4806]: I1125 15:51:59.688994 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2ac30dde-ccba-4cb3-a2e4-540d47610c83-test-operator-ephemeral-workdir\") pod \"2ac30dde-ccba-4cb3-a2e4-540d47610c83\" (UID: \"2ac30dde-ccba-4cb3-a2e4-540d47610c83\") "
Nov 25 15:51:59 crc kubenswrapper[4806]: I1125 15:51:59.689025 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"2ac30dde-ccba-4cb3-a2e4-540d47610c83\" (UID: \"2ac30dde-ccba-4cb3-a2e4-540d47610c83\") "
Nov 25 15:51:59 crc kubenswrapper[4806]: I1125 15:51:59.689093 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2ac30dde-ccba-4cb3-a2e4-540d47610c83-openstack-config-secret\") pod \"2ac30dde-ccba-4cb3-a2e4-540d47610c83\" (UID: \"2ac30dde-ccba-4cb3-a2e4-540d47610c83\") "
Nov 25 15:51:59 crc kubenswrapper[4806]: I1125 15:51:59.689209 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ac30dde-ccba-4cb3-a2e4-540d47610c83-config-data\") pod \"2ac30dde-ccba-4cb3-a2e4-540d47610c83\" (UID: \"2ac30dde-ccba-4cb3-a2e4-540d47610c83\") "
Nov 25 15:51:59 crc kubenswrapper[4806]: I1125 15:51:59.689282 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pg9c\" (UniqueName: \"kubernetes.io/projected/2ac30dde-ccba-4cb3-a2e4-540d47610c83-kube-api-access-8pg9c\") pod \"2ac30dde-ccba-4cb3-a2e4-540d47610c83\" (UID: \"2ac30dde-ccba-4cb3-a2e4-540d47610c83\") "
Nov 25 15:51:59 crc kubenswrapper[4806]: I1125 15:51:59.689331 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2ac30dde-ccba-4cb3-a2e4-540d47610c83-ssh-key\") pod \"2ac30dde-ccba-4cb3-a2e4-540d47610c83\" (UID: \"2ac30dde-ccba-4cb3-a2e4-540d47610c83\") "
Nov 25 15:51:59 crc kubenswrapper[4806]: I1125 15:51:59.689364 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2ac30dde-ccba-4cb3-a2e4-540d47610c83-openstack-config\") pod \"2ac30dde-ccba-4cb3-a2e4-540d47610c83\" (UID: \"2ac30dde-ccba-4cb3-a2e4-540d47610c83\") "
Nov 25 15:51:59 crc kubenswrapper[4806]: I1125 15:51:59.689407 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2ac30dde-ccba-4cb3-a2e4-540d47610c83-ca-certs\") pod \"2ac30dde-ccba-4cb3-a2e4-540d47610c83\" (UID: \"2ac30dde-ccba-4cb3-a2e4-540d47610c83\") "
Nov 25 15:51:59 crc kubenswrapper[4806]: I1125 15:51:59.689770 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ac30dde-ccba-4cb3-a2e4-540d47610c83-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "2ac30dde-ccba-4cb3-a2e4-540d47610c83" (UID: "2ac30dde-ccba-4cb3-a2e4-540d47610c83"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 15:51:59 crc kubenswrapper[4806]: I1125 15:51:59.690220 4806 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2ac30dde-ccba-4cb3-a2e4-540d47610c83-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\""
Nov 25 15:51:59 crc kubenswrapper[4806]: I1125 15:51:59.691034 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ac30dde-ccba-4cb3-a2e4-540d47610c83-config-data" (OuterVolumeSpecName: "config-data") pod "2ac30dde-ccba-4cb3-a2e4-540d47610c83" (UID: "2ac30dde-ccba-4cb3-a2e4-540d47610c83"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:51:59 crc kubenswrapper[4806]: I1125 15:51:59.696304 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ac30dde-ccba-4cb3-a2e4-540d47610c83-kube-api-access-8pg9c" (OuterVolumeSpecName: "kube-api-access-8pg9c") pod "2ac30dde-ccba-4cb3-a2e4-540d47610c83" (UID: "2ac30dde-ccba-4cb3-a2e4-540d47610c83"). InnerVolumeSpecName "kube-api-access-8pg9c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:51:59 crc kubenswrapper[4806]: I1125 15:51:59.696867 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "test-operator-logs") pod "2ac30dde-ccba-4cb3-a2e4-540d47610c83" (UID: "2ac30dde-ccba-4cb3-a2e4-540d47610c83"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 25 15:51:59 crc kubenswrapper[4806]: I1125 15:51:59.722155 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ac30dde-ccba-4cb3-a2e4-540d47610c83-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "2ac30dde-ccba-4cb3-a2e4-540d47610c83" (UID: "2ac30dde-ccba-4cb3-a2e4-540d47610c83"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:51:59 crc kubenswrapper[4806]: I1125 15:51:59.723644 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ac30dde-ccba-4cb3-a2e4-540d47610c83-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "2ac30dde-ccba-4cb3-a2e4-540d47610c83" (UID: "2ac30dde-ccba-4cb3-a2e4-540d47610c83"). InnerVolumeSpecName "openstack-config-secret".
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:51:59 crc kubenswrapper[4806]: I1125 15:51:59.724909 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ac30dde-ccba-4cb3-a2e4-540d47610c83-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2ac30dde-ccba-4cb3-a2e4-540d47610c83" (UID: "2ac30dde-ccba-4cb3-a2e4-540d47610c83"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:51:59 crc kubenswrapper[4806]: I1125 15:51:59.753868 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ac30dde-ccba-4cb3-a2e4-540d47610c83-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "2ac30dde-ccba-4cb3-a2e4-540d47610c83" (UID: "2ac30dde-ccba-4cb3-a2e4-540d47610c83"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:51:59 crc kubenswrapper[4806]: I1125 15:51:59.800955 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2ac30dde-ccba-4cb3-a2e4-540d47610c83-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:51:59 crc kubenswrapper[4806]: I1125 15:51:59.806497 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pg9c\" (UniqueName: \"kubernetes.io/projected/2ac30dde-ccba-4cb3-a2e4-540d47610c83-kube-api-access-8pg9c\") on node \"crc\" DevicePath \"\"" Nov 25 15:51:59 crc kubenswrapper[4806]: I1125 15:51:59.806550 4806 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2ac30dde-ccba-4cb3-a2e4-540d47610c83-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 15:51:59 crc kubenswrapper[4806]: I1125 15:51:59.806567 4806 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2ac30dde-ccba-4cb3-a2e4-540d47610c83-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:51:59 crc kubenswrapper[4806]: I1125 15:51:59.806590 4806 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2ac30dde-ccba-4cb3-a2e4-540d47610c83-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 25 15:51:59 crc kubenswrapper[4806]: I1125 15:51:59.806631 4806 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Nov 25 15:51:59 crc kubenswrapper[4806]: I1125 15:51:59.806649 4806 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2ac30dde-ccba-4cb3-a2e4-540d47610c83-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 25 15:51:59 crc kubenswrapper[4806]: I1125 15:51:59.834520 4806 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Nov 25 15:51:59 crc kubenswrapper[4806]: I1125 15:51:59.908420 4806 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Nov 25 15:51:59 crc kubenswrapper[4806]: I1125 15:51:59.996071 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"2ac30dde-ccba-4cb3-a2e4-540d47610c83","Type":"ContainerDied","Data":"3b41f0867d1fbaebbc3c7b3f624472bec783dc9c7111671bc21a92090211a4f4"} Nov 
25 15:51:59 crc kubenswrapper[4806]: I1125 15:51:59.996339 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b41f0867d1fbaebbc3c7b3f624472bec783dc9c7111671bc21a92090211a4f4" Nov 25 15:51:59 crc kubenswrapper[4806]: I1125 15:51:59.996166 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 25 15:52:00 crc kubenswrapper[4806]: I1125 15:52:00.107857 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ac30dde-ccba-4cb3-a2e4-540d47610c83-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "2ac30dde-ccba-4cb3-a2e4-540d47610c83" (UID: "2ac30dde-ccba-4cb3-a2e4-540d47610c83"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:52:00 crc kubenswrapper[4806]: I1125 15:52:00.113185 4806 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2ac30dde-ccba-4cb3-a2e4-540d47610c83-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 25 15:52:08 crc kubenswrapper[4806]: I1125 15:52:08.451019 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 25 15:52:08 crc kubenswrapper[4806]: E1125 15:52:08.452188 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7741a4b6-d6e3-43f3-8b11-c466de1cdedf" containerName="extract-content" Nov 25 15:52:08 crc kubenswrapper[4806]: I1125 15:52:08.452207 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="7741a4b6-d6e3-43f3-8b11-c466de1cdedf" containerName="extract-content" Nov 25 15:52:08 crc kubenswrapper[4806]: E1125 15:52:08.452227 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7741a4b6-d6e3-43f3-8b11-c466de1cdedf" containerName="extract-utilities" Nov 25 15:52:08 crc kubenswrapper[4806]: I1125 15:52:08.452236 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="7741a4b6-d6e3-43f3-8b11-c466de1cdedf" containerName="extract-utilities" Nov 25 15:52:08 crc kubenswrapper[4806]: E1125 15:52:08.452265 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ac30dde-ccba-4cb3-a2e4-540d47610c83" containerName="tempest-tests-tempest-tests-runner" Nov 25 15:52:08 crc kubenswrapper[4806]: I1125 15:52:08.452275 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ac30dde-ccba-4cb3-a2e4-540d47610c83" containerName="tempest-tests-tempest-tests-runner" Nov 25 15:52:08 crc kubenswrapper[4806]: E1125 15:52:08.452290 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7741a4b6-d6e3-43f3-8b11-c466de1cdedf" containerName="registry-server" Nov 25 15:52:08 crc kubenswrapper[4806]: I1125 15:52:08.452297 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="7741a4b6-d6e3-43f3-8b11-c466de1cdedf" containerName="registry-server" Nov 25 15:52:08 crc kubenswrapper[4806]: I1125 15:52:08.452685 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ac30dde-ccba-4cb3-a2e4-540d47610c83" containerName="tempest-tests-tempest-tests-runner" Nov 25 15:52:08 crc kubenswrapper[4806]: I1125 15:52:08.452706 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="7741a4b6-d6e3-43f3-8b11-c466de1cdedf" containerName="registry-server" Nov 25 15:52:08 crc kubenswrapper[4806]: I1125 15:52:08.453687 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 15:52:08 crc kubenswrapper[4806]: I1125 15:52:08.456013 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-66pxf" Nov 25 15:52:08 crc kubenswrapper[4806]: I1125 15:52:08.462543 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 25 15:52:08 crc kubenswrapper[4806]: I1125 15:52:08.596431 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"91a15fb4-157c-42c7-b66c-107db1dcd4cf\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 15:52:08 crc kubenswrapper[4806]: I1125 15:52:08.596529 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2whhn\" (UniqueName: \"kubernetes.io/projected/91a15fb4-157c-42c7-b66c-107db1dcd4cf-kube-api-access-2whhn\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"91a15fb4-157c-42c7-b66c-107db1dcd4cf\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 15:52:08 crc kubenswrapper[4806]: I1125 15:52:08.700091 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"91a15fb4-157c-42c7-b66c-107db1dcd4cf\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 15:52:08 crc kubenswrapper[4806]: I1125 15:52:08.700177 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2whhn\" (UniqueName: \"kubernetes.io/projected/91a15fb4-157c-42c7-b66c-107db1dcd4cf-kube-api-access-2whhn\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"91a15fb4-157c-42c7-b66c-107db1dcd4cf\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 15:52:08 crc kubenswrapper[4806]: I1125 15:52:08.700617 4806 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"91a15fb4-157c-42c7-b66c-107db1dcd4cf\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 15:52:08 crc kubenswrapper[4806]: I1125 15:52:08.725434 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2whhn\" (UniqueName: \"kubernetes.io/projected/91a15fb4-157c-42c7-b66c-107db1dcd4cf-kube-api-access-2whhn\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"91a15fb4-157c-42c7-b66c-107db1dcd4cf\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 15:52:08 crc kubenswrapper[4806]: I1125 15:52:08.730054 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"91a15fb4-157c-42c7-b66c-107db1dcd4cf\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 15:52:08 crc 
kubenswrapper[4806]: I1125 15:52:08.791616 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 25 15:52:09 crc kubenswrapper[4806]: I1125 15:52:09.274058 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 25 15:52:10 crc kubenswrapper[4806]: I1125 15:52:10.105074 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"91a15fb4-157c-42c7-b66c-107db1dcd4cf","Type":"ContainerStarted","Data":"51474e41ef4bf6f8ed7978e52cc09409deccda34deacfe2664bc4d75d74fab07"} Nov 25 15:52:11 crc kubenswrapper[4806]: I1125 15:52:11.107986 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"91a15fb4-157c-42c7-b66c-107db1dcd4cf","Type":"ContainerStarted","Data":"815994f9c0a5311d4298a23700790d485a8a0fd8c2be17eb45c6332f6a888659"} Nov 25 15:52:11 crc kubenswrapper[4806]: I1125 15:52:11.125667 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.353128367 podStartE2EDuration="3.125650282s" podCreationTimestamp="2025-11-25 15:52:08 +0000 UTC" firstStartedPulling="2025-11-25 15:52:09.278339521 +0000 UTC m=+3561.930481932" lastFinishedPulling="2025-11-25 15:52:10.050861436 +0000 UTC m=+3562.703003847" observedRunningTime="2025-11-25 15:52:11.121221802 +0000 UTC m=+3563.773364213" watchObservedRunningTime="2025-11-25 15:52:11.125650282 +0000 UTC m=+3563.777792694" Nov 25 15:52:35 crc kubenswrapper[4806]: I1125 15:52:35.600121 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-z2r2q/must-gather-gfgnl"] Nov 25 15:52:35 crc kubenswrapper[4806]: I1125 15:52:35.606067 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z2r2q/must-gather-gfgnl" Nov 25 15:52:35 crc kubenswrapper[4806]: I1125 15:52:35.611568 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-z2r2q"/"kube-root-ca.crt" Nov 25 15:52:35 crc kubenswrapper[4806]: I1125 15:52:35.618625 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-z2r2q"/"openshift-service-ca.crt" Nov 25 15:52:35 crc kubenswrapper[4806]: I1125 15:52:35.621398 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-z2r2q/must-gather-gfgnl"] Nov 25 15:52:35 crc kubenswrapper[4806]: I1125 15:52:35.682449 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f39c0ac4-6fb1-4d27-adfc-230efd634178-must-gather-output\") pod \"must-gather-gfgnl\" (UID: \"f39c0ac4-6fb1-4d27-adfc-230efd634178\") " pod="openshift-must-gather-z2r2q/must-gather-gfgnl" Nov 25 15:52:35 crc kubenswrapper[4806]: I1125 15:52:35.682568 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgqzd\" (UniqueName: \"kubernetes.io/projected/f39c0ac4-6fb1-4d27-adfc-230efd634178-kube-api-access-xgqzd\") pod \"must-gather-gfgnl\" (UID: \"f39c0ac4-6fb1-4d27-adfc-230efd634178\") " pod="openshift-must-gather-z2r2q/must-gather-gfgnl" Nov 25 15:52:35 crc kubenswrapper[4806]: I1125 15:52:35.785014 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f39c0ac4-6fb1-4d27-adfc-230efd634178-must-gather-output\") pod \"must-gather-gfgnl\" (UID: \"f39c0ac4-6fb1-4d27-adfc-230efd634178\") " pod="openshift-must-gather-z2r2q/must-gather-gfgnl" Nov 25 15:52:35 crc kubenswrapper[4806]: I1125 15:52:35.785181 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgqzd\" (UniqueName: \"kubernetes.io/projected/f39c0ac4-6fb1-4d27-adfc-230efd634178-kube-api-access-xgqzd\") pod \"must-gather-gfgnl\" (UID: \"f39c0ac4-6fb1-4d27-adfc-230efd634178\") " pod="openshift-must-gather-z2r2q/must-gather-gfgnl" Nov 25 15:52:35 crc kubenswrapper[4806]: I1125 15:52:35.786018 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f39c0ac4-6fb1-4d27-adfc-230efd634178-must-gather-output\") pod \"must-gather-gfgnl\" (UID: \"f39c0ac4-6fb1-4d27-adfc-230efd634178\") " pod="openshift-must-gather-z2r2q/must-gather-gfgnl" Nov 25 15:52:35 crc kubenswrapper[4806]: I1125 15:52:35.820251 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgqzd\" (UniqueName: \"kubernetes.io/projected/f39c0ac4-6fb1-4d27-adfc-230efd634178-kube-api-access-xgqzd\") pod \"must-gather-gfgnl\" (UID: \"f39c0ac4-6fb1-4d27-adfc-230efd634178\") " pod="openshift-must-gather-z2r2q/must-gather-gfgnl" Nov 25 15:52:35 crc kubenswrapper[4806]: I1125 15:52:35.928667 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z2r2q/must-gather-gfgnl" Nov 25 15:52:36 crc kubenswrapper[4806]: I1125 15:52:36.479283 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-z2r2q/must-gather-gfgnl"] Nov 25 15:52:37 crc kubenswrapper[4806]: I1125 15:52:37.418062 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z2r2q/must-gather-gfgnl" event={"ID":"f39c0ac4-6fb1-4d27-adfc-230efd634178","Type":"ContainerStarted","Data":"9b50df1a270787845679c4c51d1c7e1dd16744343287e2cd347cd26e97daa200"} Nov 25 15:52:41 crc kubenswrapper[4806]: I1125 15:52:41.477289 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z2r2q/must-gather-gfgnl" event={"ID":"f39c0ac4-6fb1-4d27-adfc-230efd634178","Type":"ContainerStarted","Data":"a23f99ad27cbfa6738f7222776577df1f2bb2049d08451204af04f2d3a37a243"} Nov 25 15:52:41 crc kubenswrapper[4806]: I1125 15:52:41.477874 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z2r2q/must-gather-gfgnl" event={"ID":"f39c0ac4-6fb1-4d27-adfc-230efd634178","Type":"ContainerStarted","Data":"f6d36f284fa1650c5094c401a8d483a728514124d1813bdc4bef6113669223ec"} Nov 25 15:52:41 crc kubenswrapper[4806]: I1125 15:52:41.493917 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-z2r2q/must-gather-gfgnl" podStartSLOduration=2.159023863 podStartE2EDuration="6.493895172s" podCreationTimestamp="2025-11-25 15:52:35 +0000 UTC" firstStartedPulling="2025-11-25 15:52:36.489977212 +0000 UTC m=+3589.142119623" lastFinishedPulling="2025-11-25 15:52:40.824848521 +0000 UTC m=+3593.476990932" observedRunningTime="2025-11-25 15:52:41.492013301 +0000 UTC m=+3594.144155722" watchObservedRunningTime="2025-11-25 15:52:41.493895172 +0000 UTC m=+3594.146037583" Nov 25 15:52:45 crc kubenswrapper[4806]: I1125 15:52:45.209433 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-z2r2q/crc-debug-8qfxw"] Nov 25 15:52:45 crc kubenswrapper[4806]: I1125 15:52:45.213051 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z2r2q/crc-debug-8qfxw" Nov 25 15:52:45 crc kubenswrapper[4806]: I1125 15:52:45.215431 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-z2r2q"/"default-dockercfg-6dsq7" Nov 25 15:52:45 crc kubenswrapper[4806]: I1125 15:52:45.336030 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb2q2\" (UniqueName: \"kubernetes.io/projected/9a5dd043-28dc-46ae-bdd5-7c09fd068626-kube-api-access-rb2q2\") pod \"crc-debug-8qfxw\" (UID: \"9a5dd043-28dc-46ae-bdd5-7c09fd068626\") " pod="openshift-must-gather-z2r2q/crc-debug-8qfxw" Nov 25 15:52:45 crc kubenswrapper[4806]: I1125 15:52:45.336752 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9a5dd043-28dc-46ae-bdd5-7c09fd068626-host\") pod \"crc-debug-8qfxw\" (UID: \"9a5dd043-28dc-46ae-bdd5-7c09fd068626\") " pod="openshift-must-gather-z2r2q/crc-debug-8qfxw" Nov 25 15:52:45 crc kubenswrapper[4806]: I1125 15:52:45.438987 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9a5dd043-28dc-46ae-bdd5-7c09fd068626-host\") pod \"crc-debug-8qfxw\" (UID: \"9a5dd043-28dc-46ae-bdd5-7c09fd068626\") " pod="openshift-must-gather-z2r2q/crc-debug-8qfxw" Nov 25 15:52:45 crc kubenswrapper[4806]: I1125 15:52:45.439053 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb2q2\" (UniqueName: \"kubernetes.io/projected/9a5dd043-28dc-46ae-bdd5-7c09fd068626-kube-api-access-rb2q2\") pod \"crc-debug-8qfxw\" (UID: \"9a5dd043-28dc-46ae-bdd5-7c09fd068626\") " pod="openshift-must-gather-z2r2q/crc-debug-8qfxw" Nov 25 15:52:45 crc kubenswrapper[4806]: I1125 15:52:45.439142 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9a5dd043-28dc-46ae-bdd5-7c09fd068626-host\") pod \"crc-debug-8qfxw\" (UID: \"9a5dd043-28dc-46ae-bdd5-7c09fd068626\") " pod="openshift-must-gather-z2r2q/crc-debug-8qfxw" Nov 25 15:52:45 crc kubenswrapper[4806]: I1125 15:52:45.457252 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb2q2\" (UniqueName: \"kubernetes.io/projected/9a5dd043-28dc-46ae-bdd5-7c09fd068626-kube-api-access-rb2q2\") pod \"crc-debug-8qfxw\" (UID: \"9a5dd043-28dc-46ae-bdd5-7c09fd068626\") " pod="openshift-must-gather-z2r2q/crc-debug-8qfxw" Nov 25 15:52:45 crc kubenswrapper[4806]: I1125 15:52:45.535217 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z2r2q/crc-debug-8qfxw" Nov 25 15:52:46 crc kubenswrapper[4806]: I1125 15:52:46.528910 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z2r2q/crc-debug-8qfxw" event={"ID":"9a5dd043-28dc-46ae-bdd5-7c09fd068626","Type":"ContainerStarted","Data":"2569ecb323908658d7ec9e31e21afa1d32ef72efa74558e717c8c57c7336bd23"} Nov 25 15:52:48 crc kubenswrapper[4806]: I1125 15:52:48.934955 4806 patch_prober.go:28] interesting pod/machine-config-daemon-kclf8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:52:48 crc kubenswrapper[4806]: I1125 15:52:48.935679 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:53:01 crc kubenswrapper[4806]: E1125 15:53:01.346950 4806 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296" Nov 25 15:53:01 crc kubenswrapper[4806]: E1125 15:53:01.347619 4806 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:container-00,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296,Command:[chroot /host bash -c echo 'TOOLBOX_NAME=toolbox-osp' > /root/.toolboxrc ; rm -rf \"/var/tmp/sos-osp\" && mkdir -p \"/var/tmp/sos-osp\" && sudo podman rm --force toolbox-osp; sudo --preserve-env podman pull --authfile /var/lib/kubelet/config.json registry.redhat.io/rhel9/support-tools && toolbox sos report --batch --all-logs --only-plugins block,cifs,crio,devicemapper,devices,firewall_tables,firewalld,iscsi,lvm2,memory,multipath,nfs,nis,nvme,podman,process,processor,selinux,scsi,udev,logs,crypto --tmp-dir=\"/var/tmp/sos-osp\" && if [[ \"$(ls /var/log/pods/*/{*.log.*,*/*.log.*} 2>/dev/null)\" != '' ]]; then tar --ignore-failed-read --warning=no-file-changed -cJf \"/var/tmp/sos-osp/podlogs.tar.xz\" --transform 's,^,podlogs/,' /var/log/pods/*/{*.log.*,*/*.log.*} || true; 
fi],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:TMOUT,Value:900,ValueFrom:nil,},EnvVar{Name:HOST,Value:/host,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rb2q2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod crc-debug-8qfxw_openshift-must-gather-z2r2q(9a5dd043-28dc-46ae-bdd5-7c09fd068626): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 15:53:01 crc kubenswrapper[4806]: E1125 15:53:01.348808 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-must-gather-z2r2q/crc-debug-8qfxw" podUID="9a5dd043-28dc-46ae-bdd5-7c09fd068626" Nov 25 15:53:01 crc kubenswrapper[4806]: E1125 15:53:01.702035 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296\\\"\"" pod="openshift-must-gather-z2r2q/crc-debug-8qfxw" podUID="9a5dd043-28dc-46ae-bdd5-7c09fd068626" Nov 25 15:53:15 crc kubenswrapper[4806]: I1125 15:53:15.825389 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z2r2q/crc-debug-8qfxw" event={"ID":"9a5dd043-28dc-46ae-bdd5-7c09fd068626","Type":"ContainerStarted","Data":"3a4a0ad35eb618fd1588fb328ed113501aa7d824216014f6f3bf930331b2ce5b"} Nov 25 15:53:18 crc kubenswrapper[4806]: I1125 15:53:18.935118 4806 patch_prober.go:28] interesting pod/machine-config-daemon-kclf8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:53:18 crc kubenswrapper[4806]: I1125 15:53:18.935556 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:53:48 crc kubenswrapper[4806]: I1125 15:53:48.934984 4806 patch_prober.go:28] interesting pod/machine-config-daemon-kclf8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:53:48 crc kubenswrapper[4806]: I1125 15:53:48.936037 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:53:48 crc kubenswrapper[4806]: I1125 15:53:48.936114 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" Nov 25 15:53:48 crc kubenswrapper[4806]: I1125 15:53:48.937581 4806 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"879ed2685760d893a00db6f9136d22093b915cafa45b3789e7c9724bba0ce08e"} pod="openshift-machine-config-operator/machine-config-daemon-kclf8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 15:53:48 crc kubenswrapper[4806]: I1125 15:53:48.937792 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" containerID="cri-o://879ed2685760d893a00db6f9136d22093b915cafa45b3789e7c9724bba0ce08e" gracePeriod=600 Nov 25 15:53:49 crc kubenswrapper[4806]: I1125 15:53:49.205946 4806 generic.go:334] "Generic (PLEG): container finished" podID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerID="879ed2685760d893a00db6f9136d22093b915cafa45b3789e7c9724bba0ce08e" exitCode=0 Nov 25 15:53:49 crc kubenswrapper[4806]: I1125 15:53:49.206006 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" event={"ID":"39baff20-1e9a-48b1-8872-155c5ad5931d","Type":"ContainerDied","Data":"879ed2685760d893a00db6f9136d22093b915cafa45b3789e7c9724bba0ce08e"} Nov 25 15:53:49 crc kubenswrapper[4806]: I1125 15:53:49.206056 4806 scope.go:117] "RemoveContainer" containerID="c5665d24da59a69058ea2c9b904dc059808ec3dec416e24bf589327eb7f765c5" Nov 25 15:53:50 crc kubenswrapper[4806]: I1125 15:53:50.219151 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" event={"ID":"39baff20-1e9a-48b1-8872-155c5ad5931d","Type":"ContainerStarted","Data":"05b6ee2a51d7372338008820486d422e9a505c74a3f4cee7ce748e653b9075de"} Nov 25 15:53:50 crc kubenswrapper[4806]: I1125 15:53:50.245867 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-z2r2q/crc-debug-8qfxw" podStartSLOduration=36.114258914 podStartE2EDuration="1m5.245846869s" podCreationTimestamp="2025-11-25 15:52:45 +0000 UTC" firstStartedPulling="2025-11-25 15:52:45.58328092 +0000 UTC m=+3598.235423331" lastFinishedPulling="2025-11-25 15:53:14.714868875 +0000 UTC m=+3627.367011286" observedRunningTime="2025-11-25 15:53:15.847535672 +0000 UTC m=+3628.499678083" watchObservedRunningTime="2025-11-25 15:53:50.245846869 +0000 UTC m=+3662.897989280" Nov 25 15:54:07 crc kubenswrapper[4806]: I1125 15:54:07.396301 4806 generic.go:334] "Generic (PLEG): container finished" podID="9a5dd043-28dc-46ae-bdd5-7c09fd068626" containerID="3a4a0ad35eb618fd1588fb328ed113501aa7d824216014f6f3bf930331b2ce5b" exitCode=0 Nov 25 15:54:07 crc kubenswrapper[4806]: I1125 15:54:07.396388 4806 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z2r2q/crc-debug-8qfxw" event={"ID":"9a5dd043-28dc-46ae-bdd5-7c09fd068626","Type":"ContainerDied","Data":"3a4a0ad35eb618fd1588fb328ed113501aa7d824216014f6f3bf930331b2ce5b"} Nov 25 15:54:08 crc kubenswrapper[4806]: I1125 15:54:08.529452 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z2r2q/crc-debug-8qfxw" Nov 25 15:54:08 crc kubenswrapper[4806]: I1125 15:54:08.572990 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-z2r2q/crc-debug-8qfxw"] Nov 25 15:54:08 crc kubenswrapper[4806]: I1125 15:54:08.585467 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-z2r2q/crc-debug-8qfxw"] Nov 25 15:54:08 crc kubenswrapper[4806]: I1125 15:54:08.721969 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9a5dd043-28dc-46ae-bdd5-7c09fd068626-host\") pod \"9a5dd043-28dc-46ae-bdd5-7c09fd068626\" (UID: \"9a5dd043-28dc-46ae-bdd5-7c09fd068626\") " Nov 25 15:54:08 crc kubenswrapper[4806]: I1125 15:54:08.722051 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rb2q2\" (UniqueName: \"kubernetes.io/projected/9a5dd043-28dc-46ae-bdd5-7c09fd068626-kube-api-access-rb2q2\") pod \"9a5dd043-28dc-46ae-bdd5-7c09fd068626\" (UID: \"9a5dd043-28dc-46ae-bdd5-7c09fd068626\") " Nov 25 15:54:08 crc kubenswrapper[4806]: I1125 15:54:08.723513 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a5dd043-28dc-46ae-bdd5-7c09fd068626-host" (OuterVolumeSpecName: "host") pod "9a5dd043-28dc-46ae-bdd5-7c09fd068626" (UID: "9a5dd043-28dc-46ae-bdd5-7c09fd068626"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:54:08 crc kubenswrapper[4806]: I1125 15:54:08.732312 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a5dd043-28dc-46ae-bdd5-7c09fd068626-kube-api-access-rb2q2" (OuterVolumeSpecName: "kube-api-access-rb2q2") pod "9a5dd043-28dc-46ae-bdd5-7c09fd068626" (UID: "9a5dd043-28dc-46ae-bdd5-7c09fd068626"). InnerVolumeSpecName "kube-api-access-rb2q2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:54:08 crc kubenswrapper[4806]: I1125 15:54:08.824786 4806 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9a5dd043-28dc-46ae-bdd5-7c09fd068626-host\") on node \"crc\" DevicePath \"\"" Nov 25 15:54:08 crc kubenswrapper[4806]: I1125 15:54:08.824821 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rb2q2\" (UniqueName: \"kubernetes.io/projected/9a5dd043-28dc-46ae-bdd5-7c09fd068626-kube-api-access-rb2q2\") on node \"crc\" DevicePath \"\"" Nov 25 15:54:09 crc kubenswrapper[4806]: I1125 15:54:09.422518 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2569ecb323908658d7ec9e31e21afa1d32ef72efa74558e717c8c57c7336bd23" Nov 25 15:54:09 crc kubenswrapper[4806]: I1125 15:54:09.422609 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z2r2q/crc-debug-8qfxw" Nov 25 15:54:09 crc kubenswrapper[4806]: I1125 15:54:09.804424 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-z2r2q/crc-debug-rkltx"] Nov 25 15:54:09 crc kubenswrapper[4806]: E1125 15:54:09.805092 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a5dd043-28dc-46ae-bdd5-7c09fd068626" containerName="container-00" Nov 25 15:54:09 crc kubenswrapper[4806]: I1125 15:54:09.805108 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a5dd043-28dc-46ae-bdd5-7c09fd068626" containerName="container-00" Nov 25 15:54:09 crc kubenswrapper[4806]: I1125 15:54:09.805337 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a5dd043-28dc-46ae-bdd5-7c09fd068626" containerName="container-00" Nov 25 15:54:09 crc kubenswrapper[4806]: I1125 15:54:09.806063 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z2r2q/crc-debug-rkltx" Nov 25 15:54:09 crc kubenswrapper[4806]: I1125 15:54:09.808774 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-z2r2q"/"default-dockercfg-6dsq7" Nov 25 15:54:09 crc kubenswrapper[4806]: I1125 15:54:09.946566 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-684q6\" (UniqueName: \"kubernetes.io/projected/401f46eb-dcbd-445c-8d8d-a84c3d6f83db-kube-api-access-684q6\") pod \"crc-debug-rkltx\" (UID: \"401f46eb-dcbd-445c-8d8d-a84c3d6f83db\") " pod="openshift-must-gather-z2r2q/crc-debug-rkltx" Nov 25 15:54:09 crc kubenswrapper[4806]: I1125 15:54:09.946629 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/401f46eb-dcbd-445c-8d8d-a84c3d6f83db-host\") pod \"crc-debug-rkltx\" (UID: \"401f46eb-dcbd-445c-8d8d-a84c3d6f83db\") " pod="openshift-must-gather-z2r2q/crc-debug-rkltx" Nov 25 15:54:10 crc kubenswrapper[4806]: I1125 15:54:10.049643 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-684q6\" (UniqueName: \"kubernetes.io/projected/401f46eb-dcbd-445c-8d8d-a84c3d6f83db-kube-api-access-684q6\") pod \"crc-debug-rkltx\" (UID: \"401f46eb-dcbd-445c-8d8d-a84c3d6f83db\") " pod="openshift-must-gather-z2r2q/crc-debug-rkltx" Nov 25 15:54:10 crc kubenswrapper[4806]: I1125 15:54:10.050190 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/401f46eb-dcbd-445c-8d8d-a84c3d6f83db-host\") pod \"crc-debug-rkltx\" (UID: \"401f46eb-dcbd-445c-8d8d-a84c3d6f83db\") " pod="openshift-must-gather-z2r2q/crc-debug-rkltx" Nov 25 15:54:10 crc kubenswrapper[4806]: I1125 15:54:10.050348 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/401f46eb-dcbd-445c-8d8d-a84c3d6f83db-host\") pod \"crc-debug-rkltx\" (UID: \"401f46eb-dcbd-445c-8d8d-a84c3d6f83db\") " pod="openshift-must-gather-z2r2q/crc-debug-rkltx" Nov 25 15:54:10 crc kubenswrapper[4806]: I1125 15:54:10.068723 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-684q6\" (UniqueName: \"kubernetes.io/projected/401f46eb-dcbd-445c-8d8d-a84c3d6f83db-kube-api-access-684q6\") pod \"crc-debug-rkltx\" (UID: \"401f46eb-dcbd-445c-8d8d-a84c3d6f83db\") " pod="openshift-must-gather-z2r2q/crc-debug-rkltx" Nov 25 15:54:10 crc kubenswrapper[4806]: I1125 
15:54:10.101646 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a5dd043-28dc-46ae-bdd5-7c09fd068626" path="/var/lib/kubelet/pods/9a5dd043-28dc-46ae-bdd5-7c09fd068626/volumes" Nov 25 15:54:10 crc kubenswrapper[4806]: I1125 15:54:10.124169 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z2r2q/crc-debug-rkltx" Nov 25 15:54:10 crc kubenswrapper[4806]: W1125 15:54:10.159685 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod401f46eb_dcbd_445c_8d8d_a84c3d6f83db.slice/crio-d4d8141c43201ff5e73d61515b29f5280d2fa304596116d607adf55f943608be WatchSource:0}: Error finding container d4d8141c43201ff5e73d61515b29f5280d2fa304596116d607adf55f943608be: Status 404 returned error can't find the container with id d4d8141c43201ff5e73d61515b29f5280d2fa304596116d607adf55f943608be Nov 25 15:54:10 crc kubenswrapper[4806]: I1125 15:54:10.436988 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z2r2q/crc-debug-rkltx" event={"ID":"401f46eb-dcbd-445c-8d8d-a84c3d6f83db","Type":"ContainerStarted","Data":"755da102609b5c0aee43723c833f6faab5d59a54dcb3ddd9c27202469632803d"} Nov 25 15:54:10 crc kubenswrapper[4806]: I1125 15:54:10.437493 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z2r2q/crc-debug-rkltx" event={"ID":"401f46eb-dcbd-445c-8d8d-a84c3d6f83db","Type":"ContainerStarted","Data":"d4d8141c43201ff5e73d61515b29f5280d2fa304596116d607adf55f943608be"} Nov 25 15:54:10 crc kubenswrapper[4806]: I1125 15:54:10.452101 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-z2r2q/crc-debug-rkltx" podStartSLOduration=1.4520840210000001 podStartE2EDuration="1.452084021s" podCreationTimestamp="2025-11-25 15:54:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:54:10.45130512 +0000 UTC m=+3683.103447531" watchObservedRunningTime="2025-11-25 15:54:10.452084021 +0000 UTC m=+3683.104226432" Nov 25 15:54:11 crc kubenswrapper[4806]: I1125 15:54:11.448236 4806 generic.go:334] "Generic (PLEG): container finished" podID="401f46eb-dcbd-445c-8d8d-a84c3d6f83db" containerID="755da102609b5c0aee43723c833f6faab5d59a54dcb3ddd9c27202469632803d" exitCode=0 Nov 25 15:54:11 crc kubenswrapper[4806]: I1125 15:54:11.448326 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z2r2q/crc-debug-rkltx" event={"ID":"401f46eb-dcbd-445c-8d8d-a84c3d6f83db","Type":"ContainerDied","Data":"755da102609b5c0aee43723c833f6faab5d59a54dcb3ddd9c27202469632803d"} Nov 25 15:54:12 crc kubenswrapper[4806]: I1125 15:54:12.602412 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z2r2q/crc-debug-rkltx" Nov 25 15:54:12 crc kubenswrapper[4806]: I1125 15:54:12.649880 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-z2r2q/crc-debug-rkltx"] Nov 25 15:54:12 crc kubenswrapper[4806]: I1125 15:54:12.659259 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-z2r2q/crc-debug-rkltx"] Nov 25 15:54:12 crc kubenswrapper[4806]: I1125 15:54:12.704574 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/401f46eb-dcbd-445c-8d8d-a84c3d6f83db-host\") pod \"401f46eb-dcbd-445c-8d8d-a84c3d6f83db\" (UID: \"401f46eb-dcbd-445c-8d8d-a84c3d6f83db\") " Nov 25 15:54:12 crc kubenswrapper[4806]: I1125 15:54:12.704708 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-684q6\" (UniqueName: \"kubernetes.io/projected/401f46eb-dcbd-445c-8d8d-a84c3d6f83db-kube-api-access-684q6\") pod \"401f46eb-dcbd-445c-8d8d-a84c3d6f83db\" (UID: \"401f46eb-dcbd-445c-8d8d-a84c3d6f83db\") " Nov 25 15:54:12 crc kubenswrapper[4806]: I1125 15:54:12.705710 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/401f46eb-dcbd-445c-8d8d-a84c3d6f83db-host" (OuterVolumeSpecName: "host") pod "401f46eb-dcbd-445c-8d8d-a84c3d6f83db" (UID: "401f46eb-dcbd-445c-8d8d-a84c3d6f83db"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:54:12 crc kubenswrapper[4806]: I1125 15:54:12.713445 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/401f46eb-dcbd-445c-8d8d-a84c3d6f83db-kube-api-access-684q6" (OuterVolumeSpecName: "kube-api-access-684q6") pod "401f46eb-dcbd-445c-8d8d-a84c3d6f83db" (UID: "401f46eb-dcbd-445c-8d8d-a84c3d6f83db"). InnerVolumeSpecName "kube-api-access-684q6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:54:12 crc kubenswrapper[4806]: I1125 15:54:12.806923 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-684q6\" (UniqueName: \"kubernetes.io/projected/401f46eb-dcbd-445c-8d8d-a84c3d6f83db-kube-api-access-684q6\") on node \"crc\" DevicePath \"\"" Nov 25 15:54:12 crc kubenswrapper[4806]: I1125 15:54:12.806970 4806 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/401f46eb-dcbd-445c-8d8d-a84c3d6f83db-host\") on node \"crc\" DevicePath \"\"" Nov 25 15:54:13 crc kubenswrapper[4806]: I1125 15:54:13.467772 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4d8141c43201ff5e73d61515b29f5280d2fa304596116d607adf55f943608be" Nov 25 15:54:13 crc kubenswrapper[4806]: I1125 15:54:13.467830 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-z2r2q/crc-debug-rkltx" Nov 25 15:54:13 crc kubenswrapper[4806]: I1125 15:54:13.851266 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-z2r2q/crc-debug-nlnll"] Nov 25 15:54:13 crc kubenswrapper[4806]: E1125 15:54:13.852015 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="401f46eb-dcbd-445c-8d8d-a84c3d6f83db" containerName="container-00" Nov 25 15:54:13 crc kubenswrapper[4806]: I1125 15:54:13.852030 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="401f46eb-dcbd-445c-8d8d-a84c3d6f83db" containerName="container-00" Nov 25 15:54:13 crc kubenswrapper[4806]: I1125 15:54:13.852225 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="401f46eb-dcbd-445c-8d8d-a84c3d6f83db" containerName="container-00" Nov 25 15:54:13 crc kubenswrapper[4806]: I1125 15:54:13.852938 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z2r2q/crc-debug-nlnll" Nov 25 15:54:13 crc kubenswrapper[4806]: I1125 15:54:13.854893 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-z2r2q"/"default-dockercfg-6dsq7" Nov 25 15:54:14 crc kubenswrapper[4806]: I1125 15:54:14.031297 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/82fdad0b-4a96-4303-b316-d36dfd3ccf74-host\") pod \"crc-debug-nlnll\" (UID: \"82fdad0b-4a96-4303-b316-d36dfd3ccf74\") " pod="openshift-must-gather-z2r2q/crc-debug-nlnll" Nov 25 15:54:14 crc kubenswrapper[4806]: I1125 15:54:14.032350 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96wtp\" (UniqueName: \"kubernetes.io/projected/82fdad0b-4a96-4303-b316-d36dfd3ccf74-kube-api-access-96wtp\") pod \"crc-debug-nlnll\" (UID: \"82fdad0b-4a96-4303-b316-d36dfd3ccf74\") " pod="openshift-must-gather-z2r2q/crc-debug-nlnll" Nov 25 15:54:14 crc kubenswrapper[4806]: I1125 15:54:14.106504 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="401f46eb-dcbd-445c-8d8d-a84c3d6f83db" path="/var/lib/kubelet/pods/401f46eb-dcbd-445c-8d8d-a84c3d6f83db/volumes" Nov 25 15:54:14 crc kubenswrapper[4806]: I1125 15:54:14.134537 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/82fdad0b-4a96-4303-b316-d36dfd3ccf74-host\") pod \"crc-debug-nlnll\" (UID: \"82fdad0b-4a96-4303-b316-d36dfd3ccf74\") " pod="openshift-must-gather-z2r2q/crc-debug-nlnll" Nov 25 15:54:14 crc kubenswrapper[4806]: I1125 15:54:14.134661 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96wtp\" (UniqueName: \"kubernetes.io/projected/82fdad0b-4a96-4303-b316-d36dfd3ccf74-kube-api-access-96wtp\") pod \"crc-debug-nlnll\" (UID: \"82fdad0b-4a96-4303-b316-d36dfd3ccf74\") " pod="openshift-must-gather-z2r2q/crc-debug-nlnll" Nov 25 15:54:14 crc kubenswrapper[4806]: I1125 15:54:14.134716 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/82fdad0b-4a96-4303-b316-d36dfd3ccf74-host\") pod \"crc-debug-nlnll\" (UID: \"82fdad0b-4a96-4303-b316-d36dfd3ccf74\") " pod="openshift-must-gather-z2r2q/crc-debug-nlnll" Nov 25 15:54:14 crc kubenswrapper[4806]: I1125 15:54:14.157420 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96wtp\" (UniqueName: 
\"kubernetes.io/projected/82fdad0b-4a96-4303-b316-d36dfd3ccf74-kube-api-access-96wtp\") pod \"crc-debug-nlnll\" (UID: \"82fdad0b-4a96-4303-b316-d36dfd3ccf74\") " pod="openshift-must-gather-z2r2q/crc-debug-nlnll" Nov 25 15:54:14 crc kubenswrapper[4806]: I1125 15:54:14.171526 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z2r2q/crc-debug-nlnll" Nov 25 15:54:14 crc kubenswrapper[4806]: W1125 15:54:14.205505 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82fdad0b_4a96_4303_b316_d36dfd3ccf74.slice/crio-a52f74bda1aab1bdc1cde06137c957dfddc48cbfe7bff27d7e0aca7d1c24c88a WatchSource:0}: Error finding container a52f74bda1aab1bdc1cde06137c957dfddc48cbfe7bff27d7e0aca7d1c24c88a: Status 404 returned error can't find the container with id a52f74bda1aab1bdc1cde06137c957dfddc48cbfe7bff27d7e0aca7d1c24c88a Nov 25 15:54:14 crc kubenswrapper[4806]: I1125 15:54:14.479096 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z2r2q/crc-debug-nlnll" event={"ID":"82fdad0b-4a96-4303-b316-d36dfd3ccf74","Type":"ContainerStarted","Data":"a52f74bda1aab1bdc1cde06137c957dfddc48cbfe7bff27d7e0aca7d1c24c88a"} Nov 25 15:54:15 crc kubenswrapper[4806]: I1125 15:54:15.494157 4806 generic.go:334] "Generic (PLEG): container finished" podID="82fdad0b-4a96-4303-b316-d36dfd3ccf74" containerID="e3ba245da4accb6d730a820b33c1d907f2a572c1add979b6bba9aa3159a27dd1" exitCode=0 Nov 25 15:54:15 crc kubenswrapper[4806]: I1125 15:54:15.494212 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z2r2q/crc-debug-nlnll" event={"ID":"82fdad0b-4a96-4303-b316-d36dfd3ccf74","Type":"ContainerDied","Data":"e3ba245da4accb6d730a820b33c1d907f2a572c1add979b6bba9aa3159a27dd1"} Nov 25 15:54:15 crc kubenswrapper[4806]: I1125 15:54:15.545487 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-z2r2q/crc-debug-nlnll"] Nov 25 15:54:15 crc kubenswrapper[4806]: I1125 15:54:15.553792 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-z2r2q/crc-debug-nlnll"] Nov 25 15:54:16 crc kubenswrapper[4806]: I1125 15:54:16.648817 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z2r2q/crc-debug-nlnll" Nov 25 15:54:16 crc kubenswrapper[4806]: I1125 15:54:16.791542 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96wtp\" (UniqueName: \"kubernetes.io/projected/82fdad0b-4a96-4303-b316-d36dfd3ccf74-kube-api-access-96wtp\") pod \"82fdad0b-4a96-4303-b316-d36dfd3ccf74\" (UID: \"82fdad0b-4a96-4303-b316-d36dfd3ccf74\") " Nov 25 15:54:16 crc kubenswrapper[4806]: I1125 15:54:16.791722 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/82fdad0b-4a96-4303-b316-d36dfd3ccf74-host\") pod \"82fdad0b-4a96-4303-b316-d36dfd3ccf74\" (UID: \"82fdad0b-4a96-4303-b316-d36dfd3ccf74\") " Nov 25 15:54:16 crc kubenswrapper[4806]: I1125 15:54:16.791864 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82fdad0b-4a96-4303-b316-d36dfd3ccf74-host" (OuterVolumeSpecName: "host") pod "82fdad0b-4a96-4303-b316-d36dfd3ccf74" (UID: "82fdad0b-4a96-4303-b316-d36dfd3ccf74"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:54:16 crc kubenswrapper[4806]: I1125 15:54:16.792509 4806 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/82fdad0b-4a96-4303-b316-d36dfd3ccf74-host\") on node \"crc\" DevicePath \"\"" Nov 25 15:54:16 crc kubenswrapper[4806]: I1125 15:54:16.802549 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82fdad0b-4a96-4303-b316-d36dfd3ccf74-kube-api-access-96wtp" (OuterVolumeSpecName: "kube-api-access-96wtp") pod "82fdad0b-4a96-4303-b316-d36dfd3ccf74" (UID: "82fdad0b-4a96-4303-b316-d36dfd3ccf74"). InnerVolumeSpecName "kube-api-access-96wtp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:54:16 crc kubenswrapper[4806]: I1125 15:54:16.895038 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96wtp\" (UniqueName: \"kubernetes.io/projected/82fdad0b-4a96-4303-b316-d36dfd3ccf74-kube-api-access-96wtp\") on node \"crc\" DevicePath \"\"" Nov 25 15:54:17 crc kubenswrapper[4806]: I1125 15:54:17.516069 4806 scope.go:117] "RemoveContainer" containerID="e3ba245da4accb6d730a820b33c1d907f2a572c1add979b6bba9aa3159a27dd1" Nov 25 15:54:17 crc kubenswrapper[4806]: I1125 15:54:17.516122 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z2r2q/crc-debug-nlnll" Nov 25 15:54:18 crc kubenswrapper[4806]: I1125 15:54:18.103717 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82fdad0b-4a96-4303-b316-d36dfd3ccf74" path="/var/lib/kubelet/pods/82fdad0b-4a96-4303-b316-d36dfd3ccf74/volumes" Nov 25 15:54:41 crc kubenswrapper[4806]: I1125 15:54:41.827708 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_82ed644a-fbd9-4ccc-a348-37293a1795f5/init-config-reloader/0.log" Nov 25 15:54:42 crc kubenswrapper[4806]: I1125 15:54:42.116666 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_82ed644a-fbd9-4ccc-a348-37293a1795f5/alertmanager/0.log" Nov 25 15:54:42 crc kubenswrapper[4806]: I1125 15:54:42.139180 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_82ed644a-fbd9-4ccc-a348-37293a1795f5/config-reloader/0.log" Nov 25 15:54:42 crc kubenswrapper[4806]: I1125 15:54:42.143250 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_82ed644a-fbd9-4ccc-a348-37293a1795f5/init-config-reloader/0.log" Nov 25 15:54:42 crc kubenswrapper[4806]: I1125 15:54:42.593357 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5b5fbf57f8-jxhqp_cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81/barbican-api/0.log" Nov 25 15:54:42 crc kubenswrapper[4806]: I1125 15:54:42.597772 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5b5fbf57f8-jxhqp_cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81/barbican-api-log/0.log" Nov 25 15:54:42 crc kubenswrapper[4806]: I1125 15:54:42.738950 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-fc7bb5d48-xzkml_322cf975-d195-44f0-b652-909080e6c2f2/barbican-keystone-listener/0.log" Nov 25 15:54:42 crc kubenswrapper[4806]: I1125 15:54:42.994026 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-fc7bb5d48-xzkml_322cf975-d195-44f0-b652-909080e6c2f2/barbican-keystone-listener-log/0.log" Nov 25 
15:54:43 crc kubenswrapper[4806]: I1125 15:54:43.036415 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-66468c84c9-dpswk_9cc24510-0ee6-451a-ae1e-6c057d860972/barbican-worker/0.log" Nov 25 15:54:43 crc kubenswrapper[4806]: I1125 15:54:43.048835 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-66468c84c9-dpswk_9cc24510-0ee6-451a-ae1e-6c057d860972/barbican-worker-log/0.log" Nov 25 15:54:43 crc kubenswrapper[4806]: I1125 15:54:43.256795 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-kn8kt_1e02aa69-d4ed-4a30-8c3f-2fe2021298d1/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 15:54:43 crc kubenswrapper[4806]: I1125 15:54:43.354343 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b/ceilometer-central-agent/0.log" Nov 25 15:54:43 crc kubenswrapper[4806]: I1125 15:54:43.488857 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b/proxy-httpd/0.log" Nov 25 15:54:43 crc kubenswrapper[4806]: I1125 15:54:43.505409 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b/ceilometer-notification-agent/0.log" Nov 25 15:54:43 crc kubenswrapper[4806]: I1125 15:54:43.526176 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b/sg-core/0.log" Nov 25 15:54:43 crc kubenswrapper[4806]: I1125 15:54:43.742537 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_d875dfe1-f943-4577-afd4-e301920efac6/cinder-api-log/0.log" Nov 25 15:54:43 crc kubenswrapper[4806]: I1125 15:54:43.769182 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_d875dfe1-f943-4577-afd4-e301920efac6/cinder-api/0.log" Nov 25 15:54:44 crc kubenswrapper[4806]: I1125 15:54:44.044478 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_a6efd5be-f7be-4981-aa85-710e9a0b3dc7/probe/0.log" Nov 25 15:54:44 crc kubenswrapper[4806]: I1125 15:54:44.054481 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_a6efd5be-f7be-4981-aa85-710e9a0b3dc7/cinder-scheduler/0.log" Nov 25 15:54:44 crc kubenswrapper[4806]: I1125 15:54:44.239993 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-api-0_e447777b-718e-4152-a9ac-9f6d8885345f/cloudkitty-api/0.log" Nov 25 15:54:44 crc kubenswrapper[4806]: I1125 15:54:44.316346 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-api-0_e447777b-718e-4152-a9ac-9f6d8885345f/cloudkitty-api-log/0.log" Nov 25 15:54:44 crc kubenswrapper[4806]: I1125 15:54:44.404179 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-compactor-0_b6ecb712-3cf0-4cd4-b823-0ffd452437ce/loki-compactor/0.log" Nov 25 15:54:44 crc kubenswrapper[4806]: I1125 15:54:44.574664 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-distributor-56cd74f89f-bs2h7_4c17fab0-86a8-4e8b-b790-c0a9c91979a3/loki-distributor/0.log" Nov 25 15:54:44 crc kubenswrapper[4806]: I1125 15:54:44.676961 4806 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-76cc998948-fxwbg_a1a1861d-9755-4f0b-8644-37e0e35584e1/gateway/0.log" Nov 25 15:54:44 crc kubenswrapper[4806]: I1125 15:54:44.829285 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-76cc998948-gbg2h_1b3c25ba-4426-45b4-8f79-95fd0e07823b/gateway/0.log" Nov 25 15:54:45 crc kubenswrapper[4806]: I1125 15:54:45.050582 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-index-gateway-0_b61e9f82-3559-4710-8b06-4bc2c5997224/loki-index-gateway/0.log" Nov 25 15:54:45 crc kubenswrapper[4806]: I1125 15:54:45.507249 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-ingester-0_cdc49832-6f51-4954-ab25-3f84f6956d1f/loki-ingester/0.log" Nov 25 15:54:45 crc kubenswrapper[4806]: I1125 15:54:45.796573 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-query-frontend-779849886d-mzf6h_f0dc94d5-1470-40f4-8969-84c9690164c8/loki-query-frontend/0.log" Nov 25 15:54:45 crc kubenswrapper[4806]: I1125 15:54:45.872376 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-querier-548665d79b-vt8jx_39c749dc-99ca-45d4-b49a-3e8925e0230a/loki-querier/0.log" Nov 25 15:54:46 crc kubenswrapper[4806]: I1125 15:54:46.302128 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-wwj6w_5ab11811-773f-477f-bb49-59c8dacf771f/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 15:54:46 crc kubenswrapper[4806]: I1125 15:54:46.466743 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-kb9n4_9c0f0294-9956-4bf5-a1c3-2f7010c70008/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 15:54:46 crc kubenswrapper[4806]: I1125 15:54:46.558147 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-85f64749dc-msc97_b20c2934-99f8-4a7e-aa11-2cb645cec451/init/0.log" Nov 25 15:54:46 crc kubenswrapper[4806]: I1125 15:54:46.868191 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-85f64749dc-msc97_b20c2934-99f8-4a7e-aa11-2cb645cec451/init/0.log" Nov 25 15:54:46 crc kubenswrapper[4806]: I1125 15:54:46.982190 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-kzwcj_e47040af-0961-465d-a57d-b5a86d51d814/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 15:54:47 crc kubenswrapper[4806]: I1125 15:54:47.003874 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-85f64749dc-msc97_b20c2934-99f8-4a7e-aa11-2cb645cec451/dnsmasq-dns/0.log" Nov 25 15:54:47 crc kubenswrapper[4806]: I1125 15:54:47.303204 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_125263e2-6d79-4c36-be67-2dd333e3dff5/glance-log/0.log" Nov 25 15:54:47 crc kubenswrapper[4806]: I1125 15:54:47.328188 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_125263e2-6d79-4c36-be67-2dd333e3dff5/glance-httpd/0.log" Nov 25 15:54:47 crc kubenswrapper[4806]: I1125 15:54:47.630458 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_314b444d-00a5-4e80-bc69-07ae78a84ad8/glance-log/0.log" Nov 25 15:54:47 crc kubenswrapper[4806]: 
I1125 15:54:47.695573 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_314b444d-00a5-4e80-bc69-07ae78a84ad8/glance-httpd/0.log" Nov 25 15:54:47 crc kubenswrapper[4806]: I1125 15:54:47.762930 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-l96zm_d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 15:54:48 crc kubenswrapper[4806]: I1125 15:54:48.011276 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-dt7mk_5874b1c9-f997-4c96-b5a4-b012416932ba/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 15:54:48 crc kubenswrapper[4806]: I1125 15:54:48.501903 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_9c050b95-eb84-4171-a52c-ee1e4614c301/kube-state-metrics/0.log" Nov 25 15:54:48 crc kubenswrapper[4806]: I1125 15:54:48.631903 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-8486684b84-snnmc_73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5/keystone-api/0.log" Nov 25 15:54:48 crc kubenswrapper[4806]: I1125 15:54:48.778431 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-gdntk_63e0c8ca-cbfc-476a-b68a-00b39c2a7a47/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 15:54:49 crc kubenswrapper[4806]: I1125 15:54:49.004789 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-proc-0_69ec7b50-f06b-4a12-8c24-8781116d0604/cloudkitty-proc/0.log" Nov 25 15:54:49 crc kubenswrapper[4806]: I1125 15:54:49.279813 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5546966469-bclkx_5c1bd1be-9aa3-4444-a30c-1a3926c79b49/neutron-api/0.log" Nov 25 15:54:49 crc kubenswrapper[4806]: I1125 15:54:49.295985 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s_5b01cee4-68ad-4117-9841-8dea2142524a/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 15:54:49 crc kubenswrapper[4806]: I1125 15:54:49.323365 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5546966469-bclkx_5c1bd1be-9aa3-4444-a30c-1a3926c79b49/neutron-httpd/0.log" Nov 25 15:54:49 crc kubenswrapper[4806]: I1125 15:54:49.891988 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2cbc26a8-c7dd-4d9d-bc1b-32e593fc6251/nova-api-log/0.log" Nov 25 15:54:49 crc kubenswrapper[4806]: I1125 15:54:49.997649 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_2e27c6b8-d0b8-43a7-a3ee-2f3703315a7b/nova-cell0-conductor-conductor/0.log" Nov 25 15:54:50 crc kubenswrapper[4806]: I1125 15:54:50.118003 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2cbc26a8-c7dd-4d9d-bc1b-32e593fc6251/nova-api-api/0.log" Nov 25 15:54:50 crc kubenswrapper[4806]: I1125 15:54:50.261132 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_d3f3eddf-31e1-4923-b0e1-1245f37ea5b8/nova-cell1-conductor-conductor/0.log" Nov 25 15:54:50 crc kubenswrapper[4806]: I1125 15:54:50.325306 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_f96a2277-fc94-465c-beae-9461e69ef4e3/nova-cell1-novncproxy-novncproxy/0.log" Nov 25 15:54:50 crc 
kubenswrapper[4806]: I1125 15:54:50.573356 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-qvk7r_dc945807-33cb-4f78-9fed-c65adc25aeef/nova-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 15:54:50 crc kubenswrapper[4806]: I1125 15:54:50.709576 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_0a41e572-3193-4163-81ab-e3ee7b072461/nova-metadata-log/0.log" Nov 25 15:54:51 crc kubenswrapper[4806]: I1125 15:54:51.303221 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_e6705187-ba84-405e-9d7a-6e3b97e1b9f3/nova-scheduler-scheduler/0.log" Nov 25 15:54:51 crc kubenswrapper[4806]: I1125 15:54:51.371925 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0c667706-daaf-4283-9ebb-64bae95b4914/mysql-bootstrap/0.log" Nov 25 15:54:51 crc kubenswrapper[4806]: I1125 15:54:51.571059 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0c667706-daaf-4283-9ebb-64bae95b4914/mysql-bootstrap/0.log" Nov 25 15:54:51 crc kubenswrapper[4806]: I1125 15:54:51.637670 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0c667706-daaf-4283-9ebb-64bae95b4914/galera/0.log" Nov 25 15:54:51 crc kubenswrapper[4806]: I1125 15:54:51.838179 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_fc946fac-46fb-45c0-8a69-2e481bf9d947/mysql-bootstrap/0.log" Nov 25 15:54:51 crc kubenswrapper[4806]: I1125 15:54:51.898993 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_0a41e572-3193-4163-81ab-e3ee7b072461/nova-metadata-metadata/0.log" Nov 25 15:54:52 crc kubenswrapper[4806]: I1125 15:54:52.076214 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_fc946fac-46fb-45c0-8a69-2e481bf9d947/mysql-bootstrap/0.log" Nov 25 15:54:52 crc kubenswrapper[4806]: I1125 15:54:52.127732 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_3e62db5f-8827-474f-9dc5-654aaa347996/openstackclient/0.log" Nov 25 15:54:52 crc kubenswrapper[4806]: I1125 15:54:52.165063 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_fc946fac-46fb-45c0-8a69-2e481bf9d947/galera/0.log" Nov 25 15:54:52 crc kubenswrapper[4806]: I1125 15:54:52.396045 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-dhcsq_cb8eb50b-2bea-43d0-b0b6-698bc3709b1d/openstack-network-exporter/0.log" Nov 25 15:54:52 crc kubenswrapper[4806]: I1125 15:54:52.403494 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-l6mv2_c90d07c6-4f04-48d1-ae1f-bb15f60ba44b/ovn-controller/0.log" Nov 25 15:54:52 crc kubenswrapper[4806]: I1125 15:54:52.715126 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-svmbm_0ebac08b-471e-4b28-98fb-b9bab2e3f505/ovsdb-server-init/0.log" Nov 25 15:54:52 crc kubenswrapper[4806]: I1125 15:54:52.910099 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-svmbm_0ebac08b-471e-4b28-98fb-b9bab2e3f505/ovsdb-server-init/0.log" Nov 25 15:54:52 crc kubenswrapper[4806]: I1125 15:54:52.940255 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-svmbm_0ebac08b-471e-4b28-98fb-b9bab2e3f505/ovsdb-server/0.log" Nov 25 15:54:52 
crc kubenswrapper[4806]: I1125 15:54:52.979293 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-svmbm_0ebac08b-471e-4b28-98fb-b9bab2e3f505/ovs-vswitchd/0.log" Nov 25 15:54:53 crc kubenswrapper[4806]: I1125 15:54:53.183136 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-xjxhn_69414d23-6d19-459c-8930-73ad33dd73e5/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 15:54:53 crc kubenswrapper[4806]: I1125 15:54:53.301636 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_fb15262a-cd0a-45e1-b1c4-9d5221f2e707/openstack-network-exporter/0.log" Nov 25 15:54:53 crc kubenswrapper[4806]: I1125 15:54:53.302236 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_fb15262a-cd0a-45e1-b1c4-9d5221f2e707/ovn-northd/0.log" Nov 25 15:54:53 crc kubenswrapper[4806]: I1125 15:54:53.534573 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ec42948f-25cf-4ae0-8553-dfd5dcc43021/openstack-network-exporter/0.log" Nov 25 15:54:53 crc kubenswrapper[4806]: I1125 15:54:53.567460 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ec42948f-25cf-4ae0-8553-dfd5dcc43021/ovsdbserver-nb/0.log" Nov 25 15:54:53 crc kubenswrapper[4806]: I1125 15:54:53.770119 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_2235e648-6ec4-4d98-a879-46f4f56b93e0/openstack-network-exporter/0.log" Nov 25 15:54:53 crc kubenswrapper[4806]: I1125 15:54:53.873816 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_2235e648-6ec4-4d98-a879-46f4f56b93e0/ovsdbserver-sb/0.log" Nov 25 15:54:53 crc kubenswrapper[4806]: I1125 15:54:53.933834 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6c84b48b46-vlp89_fac79279-6dad-4f14-8e06-4d705d8f552d/placement-api/0.log" Nov 25 15:54:54 crc kubenswrapper[4806]: I1125 15:54:54.151373 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6c84b48b46-vlp89_fac79279-6dad-4f14-8e06-4d705d8f552d/placement-log/0.log" Nov 25 15:54:54 crc kubenswrapper[4806]: I1125 15:54:54.213527 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_aafcef1f-4988-49d1-88f0-47a44d8f18fc/init-config-reloader/0.log" Nov 25 15:54:54 crc kubenswrapper[4806]: I1125 15:54:54.458798 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_aafcef1f-4988-49d1-88f0-47a44d8f18fc/init-config-reloader/0.log" Nov 25 15:54:54 crc kubenswrapper[4806]: I1125 15:54:54.530047 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_aafcef1f-4988-49d1-88f0-47a44d8f18fc/config-reloader/0.log" Nov 25 15:54:54 crc kubenswrapper[4806]: I1125 15:54:54.536430 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_aafcef1f-4988-49d1-88f0-47a44d8f18fc/thanos-sidecar/0.log" Nov 25 15:54:54 crc kubenswrapper[4806]: I1125 15:54:54.560361 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_aafcef1f-4988-49d1-88f0-47a44d8f18fc/prometheus/0.log" Nov 25 15:54:54 crc kubenswrapper[4806]: I1125 15:54:54.936563 4806 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f89c7d3f-93e9-464e-bf10-a2df33402031/setup-container/0.log" Nov 25 15:54:55 crc kubenswrapper[4806]: I1125 15:54:55.216117 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f89c7d3f-93e9-464e-bf10-a2df33402031/setup-container/0.log" Nov 25 15:54:55 crc kubenswrapper[4806]: I1125 15:54:55.259696 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_94eec7e9-06e0-4096-8b0e-89a012fb3495/setup-container/0.log" Nov 25 15:54:55 crc kubenswrapper[4806]: I1125 15:54:55.268936 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f89c7d3f-93e9-464e-bf10-a2df33402031/rabbitmq/0.log" Nov 25 15:54:55 crc kubenswrapper[4806]: I1125 15:54:55.543298 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_94eec7e9-06e0-4096-8b0e-89a012fb3495/rabbitmq/0.log" Nov 25 15:54:55 crc kubenswrapper[4806]: I1125 15:54:55.560700 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_94eec7e9-06e0-4096-8b0e-89a012fb3495/setup-container/0.log" Nov 25 15:54:55 crc kubenswrapper[4806]: I1125 15:54:55.562043 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-f9cmk_2f849708-31fc-45af-8eb8-75bd30094be9/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 15:54:55 crc kubenswrapper[4806]: I1125 15:54:55.783920 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-5hk27_4a338892-2bb8-41bf-aae0-d726d31e76b3/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 15:54:55 crc kubenswrapper[4806]: I1125 15:54:55.867498 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-jm5z5_2cd3c61a-f9b2-4746-ba1d-226aea23d908/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 15:54:56 crc kubenswrapper[4806]: I1125 15:54:56.118567 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-dtz44_6ab72e48-ad31-4614-a3a0-44f0dd9762a9/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 15:54:56 crc kubenswrapper[4806]: I1125 15:54:56.209115 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-qbtlz_0d16f874-9406-497e-ad89-6e5ce5c109f5/ssh-known-hosts-edpm-deployment/0.log" Nov 25 15:54:56 crc kubenswrapper[4806]: I1125 15:54:56.431434 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6d6dfc6f67-wrhhk_3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0/proxy-server/0.log" Nov 25 15:54:56 crc kubenswrapper[4806]: I1125 15:54:56.526704 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6d6dfc6f67-wrhhk_3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0/proxy-httpd/0.log" Nov 25 15:54:56 crc kubenswrapper[4806]: I1125 15:54:56.698788 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-wpqhp_998fc00a-139c-4c9a-9765-a445527be5aa/swift-ring-rebalance/0.log" Nov 25 15:54:56 crc kubenswrapper[4806]: I1125 15:54:56.809267 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_837cf2fb-8640-4ac3-ad91-84ff1dba54e6/account-auditor/0.log" Nov 25 15:54:56 crc kubenswrapper[4806]: I1125 15:54:56.887637 4806 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_837cf2fb-8640-4ac3-ad91-84ff1dba54e6/account-reaper/0.log" Nov 25 15:54:56 crc kubenswrapper[4806]: I1125 15:54:56.974009 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_837cf2fb-8640-4ac3-ad91-84ff1dba54e6/account-replicator/0.log" Nov 25 15:54:57 crc kubenswrapper[4806]: I1125 15:54:57.051663 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_837cf2fb-8640-4ac3-ad91-84ff1dba54e6/account-server/0.log" Nov 25 15:54:57 crc kubenswrapper[4806]: I1125 15:54:57.092052 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_837cf2fb-8640-4ac3-ad91-84ff1dba54e6/container-auditor/0.log" Nov 25 15:54:57 crc kubenswrapper[4806]: I1125 15:54:57.185002 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_837cf2fb-8640-4ac3-ad91-84ff1dba54e6/container-replicator/0.log" Nov 25 15:54:57 crc kubenswrapper[4806]: I1125 15:54:57.208077 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_837cf2fb-8640-4ac3-ad91-84ff1dba54e6/container-server/0.log" Nov 25 15:54:57 crc kubenswrapper[4806]: I1125 15:54:57.301139 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_837cf2fb-8640-4ac3-ad91-84ff1dba54e6/container-updater/0.log" Nov 25 15:54:57 crc kubenswrapper[4806]: I1125 15:54:57.348615 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_837cf2fb-8640-4ac3-ad91-84ff1dba54e6/object-auditor/0.log" Nov 25 15:54:57 crc kubenswrapper[4806]: I1125 15:54:57.440364 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_837cf2fb-8640-4ac3-ad91-84ff1dba54e6/object-expirer/0.log" Nov 25 15:54:57 crc kubenswrapper[4806]: I1125 15:54:57.536172 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_837cf2fb-8640-4ac3-ad91-84ff1dba54e6/object-server/0.log" Nov 25 15:54:57 crc kubenswrapper[4806]: I1125 15:54:57.541630 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_837cf2fb-8640-4ac3-ad91-84ff1dba54e6/object-replicator/0.log" Nov 25 15:54:57 crc kubenswrapper[4806]: I1125 15:54:57.546556 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_837cf2fb-8640-4ac3-ad91-84ff1dba54e6/object-updater/0.log" Nov 25 15:54:57 crc kubenswrapper[4806]: I1125 15:54:57.707649 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_837cf2fb-8640-4ac3-ad91-84ff1dba54e6/rsync/0.log" Nov 25 15:54:57 crc kubenswrapper[4806]: I1125 15:54:57.906093 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4_6e3bb0ce-18a1-49d0-aff6-4d45985913a6/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 15:54:57 crc kubenswrapper[4806]: I1125 15:54:57.925242 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_837cf2fb-8640-4ac3-ad91-84ff1dba54e6/swift-recon-cron/0.log" Nov 25 15:54:58 crc kubenswrapper[4806]: I1125 15:54:58.172304 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_2ac30dde-ccba-4cb3-a2e4-540d47610c83/tempest-tests-tempest-tests-runner/0.log" Nov 25 15:54:58 crc kubenswrapper[4806]: I1125 15:54:58.177127 4806 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_91a15fb4-157c-42c7-b66c-107db1dcd4cf/test-operator-logs-container/0.log" Nov 25 15:54:58 crc kubenswrapper[4806]: I1125 15:54:58.433512 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-59ddg_dc9534cb-ed46-40c5-918b-d20679144d6f/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 15:55:03 crc kubenswrapper[4806]: I1125 15:55:03.854607 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_31cd92ea-0a03-4883-9d96-532a9d5c3bd0/memcached/0.log" Nov 25 15:55:26 crc kubenswrapper[4806]: I1125 15:55:26.641632 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-qk9m2_537dc134-0732-4dfc-b0be-9c16d3d191be/kube-rbac-proxy/0.log" Nov 25 15:55:26 crc kubenswrapper[4806]: I1125 15:55:26.771467 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-qk9m2_537dc134-0732-4dfc-b0be-9c16d3d191be/manager/0.log" Nov 25 15:55:26 crc kubenswrapper[4806]: I1125 15:55:26.998812 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-w6686_40a580de-1093-4adc-a98c-e18202bee9e3/kube-rbac-proxy/0.log" Nov 25 15:55:27 crc kubenswrapper[4806]: I1125 15:55:27.131091 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-w6686_40a580de-1093-4adc-a98c-e18202bee9e3/manager/0.log" Nov 25 15:55:27 crc kubenswrapper[4806]: I1125 15:55:27.206785 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d1c59f685edf6520a02ef8c247b7f80eade9502e4f411b8324f7c9afb04dwsg_916f8aac-10d3-4065-89bc-1d935732c91e/util/0.log" Nov 25 15:55:27 crc kubenswrapper[4806]: I1125 15:55:27.753205 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d1c59f685edf6520a02ef8c247b7f80eade9502e4f411b8324f7c9afb04dwsg_916f8aac-10d3-4065-89bc-1d935732c91e/pull/0.log" Nov 25 15:55:28 crc kubenswrapper[4806]: I1125 15:55:28.029774 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d1c59f685edf6520a02ef8c247b7f80eade9502e4f411b8324f7c9afb04dwsg_916f8aac-10d3-4065-89bc-1d935732c91e/pull/0.log" Nov 25 15:55:28 crc kubenswrapper[4806]: I1125 15:55:28.098060 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d1c59f685edf6520a02ef8c247b7f80eade9502e4f411b8324f7c9afb04dwsg_916f8aac-10d3-4065-89bc-1d935732c91e/util/0.log" Nov 25 15:55:28 crc kubenswrapper[4806]: I1125 15:55:28.336418 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d1c59f685edf6520a02ef8c247b7f80eade9502e4f411b8324f7c9afb04dwsg_916f8aac-10d3-4065-89bc-1d935732c91e/util/0.log" Nov 25 15:55:28 crc kubenswrapper[4806]: I1125 15:55:28.355908 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d1c59f685edf6520a02ef8c247b7f80eade9502e4f411b8324f7c9afb04dwsg_916f8aac-10d3-4065-89bc-1d935732c91e/extract/0.log" Nov 25 15:55:28 crc kubenswrapper[4806]: I1125 15:55:28.376724 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d1c59f685edf6520a02ef8c247b7f80eade9502e4f411b8324f7c9afb04dwsg_916f8aac-10d3-4065-89bc-1d935732c91e/pull/0.log" Nov 25 15:55:28 crc kubenswrapper[4806]: I1125 
15:55:28.555259 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-wfsxk_de253966-f7ff-485f-8108-b8ee0fd795bf/kube-rbac-proxy/0.log" Nov 25 15:55:28 crc kubenswrapper[4806]: I1125 15:55:28.565564 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-wfsxk_de253966-f7ff-485f-8108-b8ee0fd795bf/manager/0.log" Nov 25 15:55:28 crc kubenswrapper[4806]: I1125 15:55:28.725336 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-r8dnj_fbf78fa8-8b88-454e-a7dc-0e75f463bc45/kube-rbac-proxy/0.log" Nov 25 15:55:28 crc kubenswrapper[4806]: I1125 15:55:28.903277 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-r8dnj_fbf78fa8-8b88-454e-a7dc-0e75f463bc45/manager/0.log" Nov 25 15:55:29 crc kubenswrapper[4806]: I1125 15:55:29.374080 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-jcrbm_8294cfe0-6c14-49bc-bd5b-d614a68893ce/kube-rbac-proxy/0.log" Nov 25 15:55:29 crc kubenswrapper[4806]: I1125 15:55:29.736266 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-jcrbm_8294cfe0-6c14-49bc-bd5b-d614a68893ce/manager/0.log" Nov 25 15:55:29 crc kubenswrapper[4806]: I1125 15:55:29.818999 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-h9qg8_461ceb26-b86c-4bb8-9550-131351dfa3e5/kube-rbac-proxy/0.log" Nov 25 15:55:29 crc kubenswrapper[4806]: I1125 15:55:29.932050 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-h9qg8_461ceb26-b86c-4bb8-9550-131351dfa3e5/manager/0.log" Nov 25 15:55:30 crc kubenswrapper[4806]: I1125 15:55:30.016223 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-xlzgr_e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329/kube-rbac-proxy/0.log" Nov 25 15:55:30 crc kubenswrapper[4806]: I1125 15:55:30.281287 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-xlzgr_e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329/manager/0.log" Nov 25 15:55:30 crc kubenswrapper[4806]: I1125 15:55:30.377743 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-q6z52_ec8a3bcc-2127-44bc-8f89-db3ece24a9b9/kube-rbac-proxy/0.log" Nov 25 15:55:30 crc kubenswrapper[4806]: I1125 15:55:30.615070 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-q6z52_ec8a3bcc-2127-44bc-8f89-db3ece24a9b9/manager/0.log" Nov 25 15:55:30 crc kubenswrapper[4806]: I1125 15:55:30.961304 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-w5r5m_61457634-dc4d-4ad9-9bdc-c95aae5df022/kube-rbac-proxy/0.log" Nov 25 15:55:31 crc kubenswrapper[4806]: I1125 15:55:31.020713 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-w5r5m_61457634-dc4d-4ad9-9bdc-c95aae5df022/manager/0.log" Nov 25 15:55:31 crc 
kubenswrapper[4806]: I1125 15:55:31.234773 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-bwwh4_9cc0ebc5-e3d4-4bae-8b33-032d950705ff/kube-rbac-proxy/0.log" Nov 25 15:55:31 crc kubenswrapper[4806]: I1125 15:55:31.294348 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-bwwh4_9cc0ebc5-e3d4-4bae-8b33-032d950705ff/manager/0.log" Nov 25 15:55:31 crc kubenswrapper[4806]: I1125 15:55:31.392946 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-9thxp_c1159ae9-b734-4012-b746-35d037ee4817/kube-rbac-proxy/0.log" Nov 25 15:55:31 crc kubenswrapper[4806]: I1125 15:55:31.489815 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-9thxp_c1159ae9-b734-4012-b746-35d037ee4817/manager/0.log" Nov 25 15:55:31 crc kubenswrapper[4806]: I1125 15:55:31.583856 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-c5xhr_d2f4f05a-5ae5-4f49-87f2-a1e642ee0ac7/kube-rbac-proxy/0.log" Nov 25 15:55:31 crc kubenswrapper[4806]: I1125 15:55:31.706414 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-c5xhr_d2f4f05a-5ae5-4f49-87f2-a1e642ee0ac7/manager/0.log" Nov 25 15:55:31 crc kubenswrapper[4806]: I1125 15:55:31.811235 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-wfhhn_63efe3dc-03df-4494-9661-9a23a89c0974/kube-rbac-proxy/0.log" Nov 25 15:55:31 crc kubenswrapper[4806]: I1125 15:55:31.894409 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-wfhhn_63efe3dc-03df-4494-9661-9a23a89c0974/manager/0.log" Nov 25 15:55:32 crc kubenswrapper[4806]: I1125 15:55:32.016990 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-cqwgq_2a080dd6-0904-4756-8b02-39d10465fea2/kube-rbac-proxy/0.log" Nov 25 15:55:32 crc kubenswrapper[4806]: I1125 15:55:32.055756 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-cqwgq_2a080dd6-0904-4756-8b02-39d10465fea2/manager/0.log" Nov 25 15:55:32 crc kubenswrapper[4806]: I1125 15:55:32.303864 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-d7m7g_b3220f94-14c9-4820-9d1b-6b4bb1b635fd/kube-rbac-proxy/0.log" Nov 25 15:55:32 crc kubenswrapper[4806]: I1125 15:55:32.325245 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-d7m7g_b3220f94-14c9-4820-9d1b-6b4bb1b635fd/manager/0.log" Nov 25 15:55:32 crc kubenswrapper[4806]: I1125 15:55:32.909253 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-779bfcf6cb-zxvzf_8fe87500-5164-48de-a495-f6d74b05b7f9/operator/0.log" Nov 25 15:55:32 crc kubenswrapper[4806]: I1125 15:55:32.991930 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-csjwd_54ffd9a7-4d3c-4e19-855a-8f54e7d9d513/registry-server/0.log" 
Nov 25 15:55:33 crc kubenswrapper[4806]: I1125 15:55:33.090041 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-tzsbk_9dc1bbe2-49c1-4601-9acf-b1887426fdd0/kube-rbac-proxy/0.log"
Nov 25 15:55:33 crc kubenswrapper[4806]: I1125 15:55:33.250910 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-tzsbk_9dc1bbe2-49c1-4601-9acf-b1887426fdd0/manager/0.log"
Nov 25 15:55:33 crc kubenswrapper[4806]: I1125 15:55:33.400984 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-fxzwv_24cfe3fd-9b1a-4b9a-9b99-1b089fa2124b/kube-rbac-proxy/0.log"
Nov 25 15:55:33 crc kubenswrapper[4806]: I1125 15:55:33.482777 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-fxzwv_24cfe3fd-9b1a-4b9a-9b99-1b089fa2124b/manager/0.log"
Nov 25 15:55:33 crc kubenswrapper[4806]: I1125 15:55:33.593968 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-2snr9_fd7fd3ac-d6f9-4f62-9cbd-e6a28b88be30/operator/0.log"
Nov 25 15:55:33 crc kubenswrapper[4806]: I1125 15:55:33.768376 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-pxx5w_1df7970b-bed8-4e27-b04b-66e513683875/kube-rbac-proxy/0.log"
Nov 25 15:55:33 crc kubenswrapper[4806]: I1125 15:55:33.863756 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-pxx5w_1df7970b-bed8-4e27-b04b-66e513683875/manager/0.log"
Nov 25 15:55:33 crc kubenswrapper[4806]: I1125 15:55:33.985228 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-687f46fc78-xdmx6_dbedcc0b-12de-4497-a9f3-a9df6c88a74f/kube-rbac-proxy/0.log"
Nov 25 15:55:34 crc kubenswrapper[4806]: I1125 15:55:34.008693 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7c468db9ff-2r8gr_b97ff802-8b8f-47d4-bff1-7d6876f780ff/manager/0.log"
Nov 25 15:55:34 crc kubenswrapper[4806]: I1125 15:55:34.162339 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-wnx44_4877ab9d-8cd3-4270-915f-c73167e93b49/kube-rbac-proxy/0.log"
Nov 25 15:55:34 crc kubenswrapper[4806]: I1125 15:55:34.308103 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-wnx44_4877ab9d-8cd3-4270-915f-c73167e93b49/manager/0.log"
Nov 25 15:55:34 crc kubenswrapper[4806]: I1125 15:55:34.405070 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-b7g79_023302d1-a345-4f55-9ac1-4a2b674e36aa/kube-rbac-proxy/0.log"
Nov 25 15:55:34 crc kubenswrapper[4806]: I1125 15:55:34.417336 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-687f46fc78-xdmx6_dbedcc0b-12de-4497-a9f3-a9df6c88a74f/manager/0.log"
Nov 25 15:55:34 crc kubenswrapper[4806]: I1125 15:55:34.475286 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-b7g79_023302d1-a345-4f55-9ac1-4a2b674e36aa/manager/0.log"
Nov 25 15:55:53 crc kubenswrapper[4806]: I1125 15:55:53.044084 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-6hqx6_7f5cd5de-2e48-4c15-9c5e-f20368bc172b/control-plane-machine-set-operator/0.log"
Nov 25 15:55:53 crc kubenswrapper[4806]: I1125 15:55:53.197794 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-9tjs2_f394b01a-b495-4acf-bca9-0b23347a3358/kube-rbac-proxy/0.log"
Nov 25 15:55:53 crc kubenswrapper[4806]: I1125 15:55:53.251301 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-9tjs2_f394b01a-b495-4acf-bca9-0b23347a3358/machine-api-operator/0.log"
Nov 25 15:56:00 crc kubenswrapper[4806]: I1125 15:56:00.173076 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Nov 25 15:56:00 crc kubenswrapper[4806]: E1125 15:56:00.174069 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82fdad0b-4a96-4303-b316-d36dfd3ccf74" containerName="container-00"
Nov 25 15:56:00 crc kubenswrapper[4806]: I1125 15:56:00.174083 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="82fdad0b-4a96-4303-b316-d36dfd3ccf74" containerName="container-00"
Nov 25 15:56:00 crc kubenswrapper[4806]: I1125 15:56:00.174305 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="82fdad0b-4a96-4303-b316-d36dfd3ccf74" containerName="container-00"
Nov 25 15:56:00 crc kubenswrapper[4806]: I1125 15:56:00.175091 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 25 15:56:00 crc kubenswrapper[4806]: I1125 15:56:00.177569 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Nov 25 15:56:00 crc kubenswrapper[4806]: I1125 15:56:00.178383 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Nov 25 15:56:00 crc kubenswrapper[4806]: I1125 15:56:00.182697 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Nov 25 15:56:00 crc kubenswrapper[4806]: I1125 15:56:00.242939 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5cd01ffb-391b-405a-8326-d13869e9be84-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5cd01ffb-391b-405a-8326-d13869e9be84\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 25 15:56:00 crc kubenswrapper[4806]: I1125 15:56:00.243015 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5cd01ffb-391b-405a-8326-d13869e9be84-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5cd01ffb-391b-405a-8326-d13869e9be84\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 25 15:56:00 crc kubenswrapper[4806]: I1125 15:56:00.345096 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5cd01ffb-391b-405a-8326-d13869e9be84-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5cd01ffb-391b-405a-8326-d13869e9be84\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 25 15:56:00 crc kubenswrapper[4806]: I1125 15:56:00.345202 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5cd01ffb-391b-405a-8326-d13869e9be84-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5cd01ffb-391b-405a-8326-d13869e9be84\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 25 15:56:00 crc kubenswrapper[4806]: I1125 15:56:00.345370 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5cd01ffb-391b-405a-8326-d13869e9be84-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5cd01ffb-391b-405a-8326-d13869e9be84\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 25 15:56:00 crc kubenswrapper[4806]: I1125 15:56:00.364262 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5cd01ffb-391b-405a-8326-d13869e9be84-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5cd01ffb-391b-405a-8326-d13869e9be84\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 25 15:56:00 crc kubenswrapper[4806]: I1125 15:56:00.499822 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 25 15:56:01 crc kubenswrapper[4806]: I1125 15:56:01.092991 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Nov 25 15:56:01 crc kubenswrapper[4806]: I1125 15:56:01.580248 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"5cd01ffb-391b-405a-8326-d13869e9be84","Type":"ContainerStarted","Data":"b87db43e0f711b39099b3067bbef6e65cf67b97daa65bd69407dc375e50e2fbf"}
Nov 25 15:56:04 crc kubenswrapper[4806]: I1125 15:56:04.613572 4806 generic.go:334] "Generic (PLEG): container finished" podID="5cd01ffb-391b-405a-8326-d13869e9be84" containerID="93f231710ebe0479412dbabb98989cab0ec18dfffd871d45b55bc4b764dfa95a" exitCode=0
Nov 25 15:56:04 crc kubenswrapper[4806]: I1125 15:56:04.613622 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"5cd01ffb-391b-405a-8326-d13869e9be84","Type":"ContainerDied","Data":"93f231710ebe0479412dbabb98989cab0ec18dfffd871d45b55bc4b764dfa95a"}
Nov 25 15:56:05 crc kubenswrapper[4806]: I1125 15:56:05.174035 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Nov 25 15:56:05 crc kubenswrapper[4806]: I1125 15:56:05.176227 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Nov 25 15:56:05 crc kubenswrapper[4806]: I1125 15:56:05.189542 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Nov 25 15:56:05 crc kubenswrapper[4806]: I1125 15:56:05.349269 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0ae73747-62e9-4046-99b6-3ed9145be32b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"0ae73747-62e9-4046-99b6-3ed9145be32b\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 25 15:56:05 crc kubenswrapper[4806]: I1125 15:56:05.349362 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0ae73747-62e9-4046-99b6-3ed9145be32b-var-lock\") pod \"installer-9-crc\" (UID: \"0ae73747-62e9-4046-99b6-3ed9145be32b\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 25 15:56:05 crc kubenswrapper[4806]: I1125 15:56:05.349464 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ae73747-62e9-4046-99b6-3ed9145be32b-kube-api-access\") pod \"installer-9-crc\" (UID: \"0ae73747-62e9-4046-99b6-3ed9145be32b\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 25 15:56:05 crc kubenswrapper[4806]: I1125 15:56:05.451438 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0ae73747-62e9-4046-99b6-3ed9145be32b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"0ae73747-62e9-4046-99b6-3ed9145be32b\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 25 15:56:05 crc kubenswrapper[4806]: I1125 15:56:05.451489 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0ae73747-62e9-4046-99b6-3ed9145be32b-var-lock\") pod \"installer-9-crc\" (UID: \"0ae73747-62e9-4046-99b6-3ed9145be32b\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 25 15:56:05 crc kubenswrapper[4806]: I1125 15:56:05.451568 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ae73747-62e9-4046-99b6-3ed9145be32b-kube-api-access\") pod \"installer-9-crc\" (UID: \"0ae73747-62e9-4046-99b6-3ed9145be32b\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 25 15:56:05 crc kubenswrapper[4806]: I1125 15:56:05.451642 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0ae73747-62e9-4046-99b6-3ed9145be32b-kubelet-dir\") pod \"installer-9-crc\" (UID: \"0ae73747-62e9-4046-99b6-3ed9145be32b\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 25 15:56:05 crc kubenswrapper[4806]: I1125 15:56:05.451668 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0ae73747-62e9-4046-99b6-3ed9145be32b-var-lock\") pod \"installer-9-crc\" (UID: \"0ae73747-62e9-4046-99b6-3ed9145be32b\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 25 15:56:05 crc kubenswrapper[4806]: I1125 15:56:05.479921 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ae73747-62e9-4046-99b6-3ed9145be32b-kube-api-access\") pod \"installer-9-crc\" (UID: \"0ae73747-62e9-4046-99b6-3ed9145be32b\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 25 15:56:05 crc kubenswrapper[4806]: I1125 15:56:05.495090 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Nov 25 15:56:06 crc kubenswrapper[4806]: I1125 15:56:06.107240 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Nov 25 15:56:06 crc kubenswrapper[4806]: I1125 15:56:06.235943 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 25 15:56:06 crc kubenswrapper[4806]: I1125 15:56:06.384372 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5cd01ffb-391b-405a-8326-d13869e9be84-kubelet-dir\") pod \"5cd01ffb-391b-405a-8326-d13869e9be84\" (UID: \"5cd01ffb-391b-405a-8326-d13869e9be84\") "
Nov 25 15:56:06 crc kubenswrapper[4806]: I1125 15:56:06.384498 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cd01ffb-391b-405a-8326-d13869e9be84-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5cd01ffb-391b-405a-8326-d13869e9be84" (UID: "5cd01ffb-391b-405a-8326-d13869e9be84"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 25 15:56:06 crc kubenswrapper[4806]: I1125 15:56:06.384515 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5cd01ffb-391b-405a-8326-d13869e9be84-kube-api-access\") pod \"5cd01ffb-391b-405a-8326-d13869e9be84\" (UID: \"5cd01ffb-391b-405a-8326-d13869e9be84\") "
Nov 25 15:56:06 crc kubenswrapper[4806]: I1125 15:56:06.385721 4806 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5cd01ffb-391b-405a-8326-d13869e9be84-kubelet-dir\") on node \"crc\" DevicePath \"\""
Nov 25 15:56:06 crc kubenswrapper[4806]: I1125 15:56:06.390600 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cd01ffb-391b-405a-8326-d13869e9be84-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5cd01ffb-391b-405a-8326-d13869e9be84" (UID: "5cd01ffb-391b-405a-8326-d13869e9be84"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:56:06 crc kubenswrapper[4806]: I1125 15:56:06.487835 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5cd01ffb-391b-405a-8326-d13869e9be84-kube-api-access\") on node \"crc\" DevicePath \"\""
Nov 25 15:56:06 crc kubenswrapper[4806]: I1125 15:56:06.642700 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"0ae73747-62e9-4046-99b6-3ed9145be32b","Type":"ContainerStarted","Data":"0e537a790074eb497a5616b948ca1c52aff9ed821ce07c8eb32d28192b803ae0"}
Nov 25 15:56:06 crc kubenswrapper[4806]: I1125 15:56:06.646607 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"5cd01ffb-391b-405a-8326-d13869e9be84","Type":"ContainerDied","Data":"b87db43e0f711b39099b3067bbef6e65cf67b97daa65bd69407dc375e50e2fbf"}
Nov 25 15:56:06 crc kubenswrapper[4806]: I1125 15:56:06.646636 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b87db43e0f711b39099b3067bbef6e65cf67b97daa65bd69407dc375e50e2fbf"
Nov 25 15:56:06 crc kubenswrapper[4806]: I1125 15:56:06.646690 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 25 15:56:06 crc kubenswrapper[4806]: I1125 15:56:06.840296 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-2nhx4_95b3b0c2-b552-4f25-803e-f2ae9d53add8/cert-manager-controller/0.log"
Nov 25 15:56:07 crc kubenswrapper[4806]: I1125 15:56:07.086006 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-mw4xn_9914c048-9845-4535-97d5-2833b53b84d3/cert-manager-cainjector/0.log"
Nov 25 15:56:07 crc kubenswrapper[4806]: I1125 15:56:07.214418 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-jssct_672c5c0d-1d2d-4e3e-bccf-6f8fd25f98ae/cert-manager-webhook/0.log"
Nov 25 15:56:07 crc kubenswrapper[4806]: I1125 15:56:07.658138 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"0ae73747-62e9-4046-99b6-3ed9145be32b","Type":"ContainerStarted","Data":"8e306fa1822ffa7a3c5510d5cf4aface9349eed63d3c18139d8d117b39291f5b"}
Nov 25 15:56:07 crc kubenswrapper[4806]: I1125 15:56:07.673890 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.6738679039999997 podStartE2EDuration="2.673867904s" podCreationTimestamp="2025-11-25 15:56:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:56:07.670153163 +0000 UTC m=+3800.322295574" watchObservedRunningTime="2025-11-25 15:56:07.673867904 +0000 UTC m=+3800.326010325"
Nov 25 15:56:18 crc kubenswrapper[4806]: I1125 15:56:18.935258 4806 patch_prober.go:28] interesting pod/machine-config-daemon-kclf8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 15:56:18 crc kubenswrapper[4806]: I1125 15:56:18.935795 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 15:56:19 crc kubenswrapper[4806]: I1125 15:56:19.696035 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-glshj_d7da5810-18e1-4ece-a8d1-a3a7f9c710a4/nmstate-console-plugin/0.log"
Nov 25 15:56:19 crc kubenswrapper[4806]: I1125 15:56:19.900972 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-8n9rx_ef57a24c-25d4-481a-8047-af60faef1f37/nmstate-handler/0.log"
Nov 25 15:56:19 crc kubenswrapper[4806]: I1125 15:56:19.924464 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-b4tpl_58a03ccb-63cd-45fe-bc04-71fcc12c3434/kube-rbac-proxy/0.log"
Nov 25 15:56:19 crc kubenswrapper[4806]: I1125 15:56:19.967302 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-b4tpl_58a03ccb-63cd-45fe-bc04-71fcc12c3434/nmstate-metrics/0.log"
Nov 25 15:56:20 crc kubenswrapper[4806]: I1125 15:56:20.110377 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-b2jcn_63efa58c-1fdc-46b7-ba63-94effc1543d0/nmstate-operator/0.log"
Nov 25 15:56:20 crc kubenswrapper[4806]: I1125 15:56:20.139163 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-n8ld5_831b49c5-f5fa-4186-8bd0-25b5a3e76a45/nmstate-webhook/0.log"
Nov 25 15:56:33 crc kubenswrapper[4806]: I1125 15:56:33.024404 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-8b74fc76b-wflwn_2942b82c-e706-4f3e-ad7d-cef384dbcfba/kube-rbac-proxy/0.log"
Nov 25 15:56:33 crc kubenswrapper[4806]: I1125 15:56:33.063458 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-8b74fc76b-wflwn_2942b82c-e706-4f3e-ad7d-cef384dbcfba/manager/0.log"
Nov 25 15:56:44 crc kubenswrapper[4806]: I1125 15:56:44.359227 4806 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Nov 25 15:56:44 crc kubenswrapper[4806]: E1125 15:56:44.360092 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cd01ffb-391b-405a-8326-d13869e9be84" containerName="pruner"
Nov 25 15:56:44 crc kubenswrapper[4806]: I1125 15:56:44.360105 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cd01ffb-391b-405a-8326-d13869e9be84" containerName="pruner"
Nov 25 15:56:44 crc kubenswrapper[4806]: I1125 15:56:44.360286 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cd01ffb-391b-405a-8326-d13869e9be84" containerName="pruner"
Nov 25 15:56:44 crc kubenswrapper[4806]: I1125 15:56:44.361012 4806 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Nov 25 15:56:44 crc kubenswrapper[4806]: I1125 15:56:44.361102 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 25 15:56:44 crc kubenswrapper[4806]: I1125 15:56:44.362190 4806 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Nov 25 15:56:44 crc kubenswrapper[4806]: E1125 15:56:44.362807 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Nov 25 15:56:44 crc kubenswrapper[4806]: I1125 15:56:44.362829 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Nov 25 15:56:44 crc kubenswrapper[4806]: E1125 15:56:44.362843 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Nov 25 15:56:44 crc kubenswrapper[4806]: I1125 15:56:44.362849 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Nov 25 15:56:44 crc kubenswrapper[4806]: E1125 15:56:44.362863 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Nov 25 15:56:44 crc kubenswrapper[4806]: I1125 15:56:44.362871 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Nov 25 15:56:44 crc kubenswrapper[4806]: E1125 15:56:44.362884 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Nov 25 15:56:44 crc kubenswrapper[4806]: I1125 15:56:44.362893 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Nov 25 15:56:44 crc kubenswrapper[4806]: E1125 15:56:44.362910 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Nov 25 15:56:44 crc kubenswrapper[4806]: I1125 15:56:44.362916 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Nov 25 15:56:44 crc kubenswrapper[4806]: E1125 15:56:44.362937 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Nov 25 15:56:44 crc kubenswrapper[4806]: I1125 15:56:44.362943 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Nov 25 15:56:44 crc kubenswrapper[4806]: E1125 15:56:44.362956 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Nov 25 15:56:44 crc kubenswrapper[4806]: I1125 15:56:44.362962 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Nov 25 15:56:44 crc kubenswrapper[4806]: I1125 15:56:44.363169 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Nov 25 15:56:44 crc kubenswrapper[4806]: I1125 15:56:44.363183 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov
25 15:56:44 crc kubenswrapper[4806]: I1125 15:56:44.363199 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 15:56:44 crc kubenswrapper[4806]: I1125 15:56:44.363209 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 25 15:56:44 crc kubenswrapper[4806]: I1125 15:56:44.363228 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 25 15:56:44 crc kubenswrapper[4806]: I1125 15:56:44.363236 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 25 15:56:44 crc kubenswrapper[4806]: I1125 15:56:44.408304 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 25 15:56:44 crc kubenswrapper[4806]: I1125 15:56:44.433295 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 15:56:44 crc kubenswrapper[4806]: I1125 15:56:44.433547 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 15:56:44 crc kubenswrapper[4806]: I1125 15:56:44.433632 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 15:56:44 crc kubenswrapper[4806]: I1125 15:56:44.433661 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 15:56:44 crc kubenswrapper[4806]: I1125 15:56:44.433727 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 15:56:44 crc kubenswrapper[4806]: I1125 15:56:44.433905 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 15:56:44 crc 
kubenswrapper[4806]: I1125 15:56:44.433960 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 15:56:44 crc kubenswrapper[4806]: I1125 15:56:44.434005 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 15:56:44 crc kubenswrapper[4806]: I1125 15:56:44.536626 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 15:56:44 crc kubenswrapper[4806]: I1125 15:56:44.536702 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 15:56:44 crc kubenswrapper[4806]: I1125 15:56:44.536790 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 15:56:44 crc kubenswrapper[4806]: I1125 15:56:44.536864 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 15:56:44 crc kubenswrapper[4806]: I1125 15:56:44.536909 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 15:56:44 crc kubenswrapper[4806]: I1125 15:56:44.536932 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 15:56:44 crc kubenswrapper[4806]: I1125 15:56:44.536973 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 15:56:44 crc kubenswrapper[4806]: I1125 15:56:44.537063 4806 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 15:56:44 crc kubenswrapper[4806]: I1125 15:56:44.537187 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 15:56:44 crc kubenswrapper[4806]: I1125 15:56:44.537248 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 15:56:44 crc kubenswrapper[4806]: I1125 15:56:44.537288 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 15:56:44 crc kubenswrapper[4806]: I1125 15:56:44.537345 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 15:56:44 crc kubenswrapper[4806]: I1125 15:56:44.537386 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 15:56:44 crc kubenswrapper[4806]: I1125 15:56:44.537418 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 15:56:44 crc kubenswrapper[4806]: I1125 15:56:44.537452 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 15:56:44 crc kubenswrapper[4806]: I1125 15:56:44.537482 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 15:56:44 crc kubenswrapper[4806]: I1125 15:56:44.693706 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 15:56:45 crc kubenswrapper[4806]: I1125 15:56:45.054985 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"b7103ed585d99e3a327b47baf2230d1b0d88e79840534538d4a427f89b92c797"} Nov 25 15:56:45 crc kubenswrapper[4806]: I1125 15:56:45.055351 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"cbe87b149d8581b10b61d3ea03b7e6fc824b87877b688450416cbf28d0a7cb12"} Nov 25 15:56:45 crc kubenswrapper[4806]: I1125 15:56:45.057736 4806 generic.go:334] "Generic (PLEG): container finished" podID="0ae73747-62e9-4046-99b6-3ed9145be32b" containerID="8e306fa1822ffa7a3c5510d5cf4aface9349eed63d3c18139d8d117b39291f5b" exitCode=0 Nov 25 15:56:45 crc kubenswrapper[4806]: I1125 15:56:45.058068 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038" gracePeriod=15 Nov 25 15:56:45 crc kubenswrapper[4806]: I1125 15:56:45.058103 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"0ae73747-62e9-4046-99b6-3ed9145be32b","Type":"ContainerDied","Data":"8e306fa1822ffa7a3c5510d5cf4aface9349eed63d3c18139d8d117b39291f5b"} Nov 25 15:56:45 crc kubenswrapper[4806]: I1125 15:56:45.058141 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1" gracePeriod=15 Nov 25 15:56:45 crc kubenswrapper[4806]: I1125 15:56:45.058177 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736" gracePeriod=15 Nov 25 15:56:45 crc kubenswrapper[4806]: I1125 15:56:45.058209 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854" gracePeriod=15 Nov 25 15:56:45 crc kubenswrapper[4806]: I1125 15:56:45.058247 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513" gracePeriod=15 Nov 25 15:56:45 crc kubenswrapper[4806]: I1125 15:56:45.064452 4806 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Nov 25 15:56:45 crc kubenswrapper[4806]: I1125 
15:56:45.089426 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=1.0894106159999999 podStartE2EDuration="1.089410616s" podCreationTimestamp="2025-11-25 15:56:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:56:45.079097804 +0000 UTC m=+3837.731240215" watchObservedRunningTime="2025-11-25 15:56:45.089410616 +0000 UTC m=+3837.741553027" Nov 25 15:56:46 crc kubenswrapper[4806]: I1125 15:56:46.073747 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 25 15:56:46 crc kubenswrapper[4806]: I1125 15:56:46.076498 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 25 15:56:46 crc kubenswrapper[4806]: I1125 15:56:46.078614 4806 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1" exitCode=0 Nov 25 15:56:46 crc kubenswrapper[4806]: I1125 15:56:46.078648 4806 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736" exitCode=0 Nov 25 15:56:46 crc kubenswrapper[4806]: I1125 15:56:46.078662 4806 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854" exitCode=0 Nov 25 15:56:46 crc kubenswrapper[4806]: I1125 15:56:46.078671 4806 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513" exitCode=2 Nov 25 15:56:46 crc kubenswrapper[4806]: I1125 15:56:46.078754 4806 scope.go:117] "RemoveContainer" containerID="f3707ea8a0ba5a8ff0dbacaa3af9d32f22998d1f5c28ea018ca392ecf9f85226" Nov 25 15:56:46 crc kubenswrapper[4806]: I1125 15:56:46.603596 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 25 15:56:46 crc kubenswrapper[4806]: I1125 15:56:46.604204 4806 status_manager.go:851] "Failed to get status for pod" podUID="0ae73747-62e9-4046-99b6-3ed9145be32b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Nov 25 15:56:46 crc kubenswrapper[4806]: I1125 15:56:46.684000 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0ae73747-62e9-4046-99b6-3ed9145be32b-var-lock\") pod \"0ae73747-62e9-4046-99b6-3ed9145be32b\" (UID: \"0ae73747-62e9-4046-99b6-3ed9145be32b\") " Nov 25 15:56:46 crc kubenswrapper[4806]: I1125 15:56:46.684174 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0ae73747-62e9-4046-99b6-3ed9145be32b-kubelet-dir\") pod \"0ae73747-62e9-4046-99b6-3ed9145be32b\" (UID: \"0ae73747-62e9-4046-99b6-3ed9145be32b\") " Nov 25 15:56:46 crc kubenswrapper[4806]: I1125 15:56:46.684266 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ae73747-62e9-4046-99b6-3ed9145be32b-kube-api-access\") pod \"0ae73747-62e9-4046-99b6-3ed9145be32b\" (UID: \"0ae73747-62e9-4046-99b6-3ed9145be32b\") " Nov 25 15:56:46 crc kubenswrapper[4806]: I1125 15:56:46.684160 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ae73747-62e9-4046-99b6-3ed9145be32b-var-lock" (OuterVolumeSpecName: "var-lock") pod "0ae73747-62e9-4046-99b6-3ed9145be32b" (UID: "0ae73747-62e9-4046-99b6-3ed9145be32b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:56:46 crc kubenswrapper[4806]: I1125 15:56:46.684193 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ae73747-62e9-4046-99b6-3ed9145be32b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0ae73747-62e9-4046-99b6-3ed9145be32b" (UID: "0ae73747-62e9-4046-99b6-3ed9145be32b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:56:46 crc kubenswrapper[4806]: I1125 15:56:46.685069 4806 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0ae73747-62e9-4046-99b6-3ed9145be32b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 25 15:56:46 crc kubenswrapper[4806]: I1125 15:56:46.685098 4806 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0ae73747-62e9-4046-99b6-3ed9145be32b-var-lock\") on node \"crc\" DevicePath \"\"" Nov 25 15:56:46 crc kubenswrapper[4806]: I1125 15:56:46.689731 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ae73747-62e9-4046-99b6-3ed9145be32b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0ae73747-62e9-4046-99b6-3ed9145be32b" (UID: "0ae73747-62e9-4046-99b6-3ed9145be32b"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:56:46 crc kubenswrapper[4806]: I1125 15:56:46.788237 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0ae73747-62e9-4046-99b6-3ed9145be32b-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 15:56:47 crc kubenswrapper[4806]: I1125 15:56:47.093959 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 25 15:56:47 crc kubenswrapper[4806]: I1125 15:56:47.097677 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"0ae73747-62e9-4046-99b6-3ed9145be32b","Type":"ContainerDied","Data":"0e537a790074eb497a5616b948ca1c52aff9ed821ce07c8eb32d28192b803ae0"} Nov 25 15:56:47 crc kubenswrapper[4806]: I1125 15:56:47.097709 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e537a790074eb497a5616b948ca1c52aff9ed821ce07c8eb32d28192b803ae0" Nov 25 15:56:47 crc kubenswrapper[4806]: I1125 15:56:47.097802 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 25 15:56:47 crc kubenswrapper[4806]: I1125 15:56:47.114788 4806 status_manager.go:851] "Failed to get status for pod" podUID="0ae73747-62e9-4046-99b6-3ed9145be32b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Nov 25 15:56:47 crc kubenswrapper[4806]: I1125 15:56:47.962565 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 25 15:56:47 crc kubenswrapper[4806]: I1125 15:56:47.965023 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 15:56:47 crc kubenswrapper[4806]: I1125 15:56:47.965840 4806 status_manager.go:851] "Failed to get status for pod" podUID="0ae73747-62e9-4046-99b6-3ed9145be32b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Nov 25 15:56:47 crc kubenswrapper[4806]: I1125 15:56:47.966520 4806 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Nov 25 15:56:48 crc kubenswrapper[4806]: I1125 15:56:48.016895 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 25 15:56:48 crc kubenswrapper[4806]: I1125 15:56:48.017015 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). 
InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:56:48 crc kubenswrapper[4806]: I1125 15:56:48.017074 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 25 15:56:48 crc kubenswrapper[4806]: I1125 15:56:48.017153 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 25 15:56:48 crc kubenswrapper[4806]: I1125 15:56:48.017157 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:56:48 crc kubenswrapper[4806]: I1125 15:56:48.017209 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:56:48 crc kubenswrapper[4806]: I1125 15:56:48.017854 4806 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 25 15:56:48 crc kubenswrapper[4806]: I1125 15:56:48.017880 4806 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 25 15:56:48 crc kubenswrapper[4806]: I1125 15:56:48.017892 4806 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Nov 25 15:56:48 crc kubenswrapper[4806]: I1125 15:56:48.101127 4806 status_manager.go:851] "Failed to get status for pod" podUID="0ae73747-62e9-4046-99b6-3ed9145be32b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Nov 25 15:56:48 crc kubenswrapper[4806]: I1125 15:56:48.101516 4806 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Nov 25 15:56:48 crc kubenswrapper[4806]: I1125 15:56:48.111251 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Nov 25 15:56:48 crc kubenswrapper[4806]: E1125 15:56:48.112689 4806 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC 
openstack/prometheus-metric-storage-db-prometheus-metric-storage-0: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/persistentvolumeclaims/prometheus-metric-storage-db-prometheus-metric-storage-0\": dial tcp 38.102.83.234:6443: connect: connection refused" pod="openstack/prometheus-metric-storage-0" volumeName="prometheus-metric-storage-db" Nov 25 15:56:48 crc kubenswrapper[4806]: I1125 15:56:48.114558 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 25 15:56:48 crc kubenswrapper[4806]: I1125 15:56:48.115189 4806 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038" exitCode=0 Nov 25 15:56:48 crc kubenswrapper[4806]: I1125 15:56:48.115273 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 15:56:48 crc kubenswrapper[4806]: I1125 15:56:48.115912 4806 status_manager.go:851] "Failed to get status for pod" podUID="0ae73747-62e9-4046-99b6-3ed9145be32b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Nov 25 15:56:48 crc kubenswrapper[4806]: I1125 15:56:48.116814 4806 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Nov 25 15:56:48 crc kubenswrapper[4806]: I1125 15:56:48.121675 4806 scope.go:117] "RemoveContainer" containerID="7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1" Nov 25 15:56:48 crc kubenswrapper[4806]: I1125 15:56:48.135356 4806 status_manager.go:851] "Failed to get status for pod" podUID="0ae73747-62e9-4046-99b6-3ed9145be32b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Nov 25 15:56:48 crc kubenswrapper[4806]: I1125 15:56:48.135603 4806 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Nov 25 15:56:48 crc kubenswrapper[4806]: I1125 15:56:48.150543 4806 scope.go:117] "RemoveContainer" containerID="2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736" Nov 25 15:56:48 crc kubenswrapper[4806]: I1125 15:56:48.183335 4806 scope.go:117] "RemoveContainer" containerID="98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854" Nov 25 15:56:48 crc kubenswrapper[4806]: I1125 15:56:48.214024 4806 scope.go:117] "RemoveContainer" containerID="258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513" Nov 25 15:56:48 crc kubenswrapper[4806]: I1125 15:56:48.245503 4806 scope.go:117] "RemoveContainer" containerID="9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038" Nov 25 15:56:48 crc 
kubenswrapper[4806]: I1125 15:56:48.275981 4806 scope.go:117] "RemoveContainer" containerID="e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202" Nov 25 15:56:48 crc kubenswrapper[4806]: I1125 15:56:48.303454 4806 scope.go:117] "RemoveContainer" containerID="7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1" Nov 25 15:56:48 crc kubenswrapper[4806]: E1125 15:56:48.304211 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1\": container with ID starting with 7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1 not found: ID does not exist" containerID="7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1" Nov 25 15:56:48 crc kubenswrapper[4806]: I1125 15:56:48.304368 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1"} err="failed to get container status \"7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1\": rpc error: code = NotFound desc = could not find container \"7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1\": container with ID starting with 7936c2542550f0dadc769b16b3a1fc0372cd2d159739e2270a35661f51cd2cd1 not found: ID does not exist" Nov 25 15:56:48 crc kubenswrapper[4806]: I1125 15:56:48.304480 4806 scope.go:117] "RemoveContainer" containerID="2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736" Nov 25 15:56:48 crc kubenswrapper[4806]: E1125 15:56:48.304945 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736\": container with ID starting with 2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736 not found: ID does not exist" containerID="2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736" Nov 25 15:56:48 crc kubenswrapper[4806]: I1125 15:56:48.305069 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736"} err="failed to get container status \"2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736\": rpc error: code = NotFound desc = could not find container \"2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736\": container with ID starting with 2fdd2d043279641fe379fde5151db83d49f98087c0e55995d1c0b5367a41c736 not found: ID does not exist" Nov 25 15:56:48 crc kubenswrapper[4806]: I1125 15:56:48.305158 4806 scope.go:117] "RemoveContainer" containerID="98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854" Nov 25 15:56:48 crc kubenswrapper[4806]: E1125 15:56:48.305621 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854\": container with ID starting with 98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854 not found: ID does not exist" containerID="98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854" Nov 25 15:56:48 crc kubenswrapper[4806]: I1125 15:56:48.305662 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854"} err="failed to get container status 
\"98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854\": rpc error: code = NotFound desc = could not find container \"98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854\": container with ID starting with 98ec8e1458dfd87e4b854be001ec1ce00beaaa30cc9afcbedfce61f05fd67854 not found: ID does not exist" Nov 25 15:56:48 crc kubenswrapper[4806]: I1125 15:56:48.305690 4806 scope.go:117] "RemoveContainer" containerID="258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513" Nov 25 15:56:48 crc kubenswrapper[4806]: E1125 15:56:48.306503 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513\": container with ID starting with 258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513 not found: ID does not exist" containerID="258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513" Nov 25 15:56:48 crc kubenswrapper[4806]: I1125 15:56:48.306597 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513"} err="failed to get container status \"258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513\": rpc error: code = NotFound desc = could not find container \"258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513\": container with ID starting with 258bd501a56e4eb0174bd2a05f348c97c7abe98bb87eedcb8c744b8ff0333513 not found: ID does not exist" Nov 25 15:56:48 crc kubenswrapper[4806]: I1125 15:56:48.306658 4806 scope.go:117] "RemoveContainer" containerID="9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038" Nov 25 15:56:48 crc kubenswrapper[4806]: E1125 15:56:48.307886 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038\": container with ID starting with 9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038 not found: ID does not exist" containerID="9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038" Nov 25 15:56:48 crc kubenswrapper[4806]: I1125 15:56:48.308038 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038"} err="failed to get container status \"9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038\": rpc error: code = NotFound desc = could not find container \"9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038\": container with ID starting with 9f60e1ee62b4530e8407d8ab16abb0f992be3661ab87335996f407d91ef59038 not found: ID does not exist" Nov 25 15:56:48 crc kubenswrapper[4806]: I1125 15:56:48.308139 4806 scope.go:117] "RemoveContainer" containerID="e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202" Nov 25 15:56:48 crc kubenswrapper[4806]: E1125 15:56:48.308719 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\": container with ID starting with e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202 not found: ID does not exist" containerID="e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202" Nov 25 15:56:48 crc kubenswrapper[4806]: I1125 15:56:48.308763 4806 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202"} err="failed to get container status \"e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\": rpc error: code = NotFound desc = could not find container \"e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202\": container with ID starting with e00949bb906ead6c59f235403e84c84d09d1610a0ee234f1d5b4d19c59f45202 not found: ID does not exist" Nov 25 15:56:48 crc kubenswrapper[4806]: I1125 15:56:48.935116 4806 patch_prober.go:28] interesting pod/machine-config-daemon-kclf8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:56:48 crc kubenswrapper[4806]: I1125 15:56:48.935186 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:56:51 crc kubenswrapper[4806]: E1125 15:56:51.200265 4806 token_manager.go:121] "Couldn't update token" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/serviceaccounts/nmstate-handler/token\": dial tcp 38.102.83.234:6443: connect: connection refused" cacheKey="\"nmstate-handler\"/\"openshift-nmstate\"/[]string(nil)/3607/v1.BoundObjectReference{Kind:\"Pod\", APIVersion:\"v1\", Name:\"nmstate-metrics-5dcf9c57c5-b4tpl\", UID:\"58a03ccb-63cd-45fe-bc04-71fcc12c3434\"}" Nov 25 15:56:52 crc kubenswrapper[4806]: I1125 15:56:52.195690 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="9c050b95-eb84-4171-a52c-ee1e4614c301" containerName="kube-state-metrics" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 25 15:56:52 crc kubenswrapper[4806]: E1125 15:56:52.197177 4806 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/events\": dial tcp 38.102.83.234:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-state-metrics-0.187b4b103f1bce2c openstack 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openstack,Name:kube-state-metrics-0,UID:9c050b95-eb84-4171-a52c-ee1e4614c301,APIVersion:v1,ResourceVersion:48901,FieldPath:spec.containers{kube-state-metrics},},Reason:Unhealthy,Message:Liveness probe failed: HTTP probe failed with statuscode: 503,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-25 15:56:52.196027948 +0000 UTC m=+3844.848170379,LastTimestamp:2025-11-25 15:56:52.196027948 +0000 UTC m=+3844.848170379,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 25 15:56:52 crc kubenswrapper[4806]: E1125 15:56:52.981083 4806 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" Nov 25 15:56:52 crc kubenswrapper[4806]: E1125 15:56:52.981626 4806 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" Nov 25 15:56:52 crc kubenswrapper[4806]: E1125 15:56:52.981983 4806 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" Nov 25 15:56:52 crc kubenswrapper[4806]: E1125 15:56:52.982280 4806 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" Nov 25 15:56:52 crc kubenswrapper[4806]: E1125 15:56:52.982625 4806 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" Nov 25 15:56:52 crc kubenswrapper[4806]: I1125 15:56:52.982653 4806 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Nov 25 15:56:52 crc kubenswrapper[4806]: E1125 15:56:52.982879 4806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="200ms" Nov 25 15:56:53 crc kubenswrapper[4806]: E1125 15:56:53.128098 4806 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openstack/persistence-rabbitmq-cell1-server-0: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/persistentvolumeclaims/persistence-rabbitmq-cell1-server-0\": dial tcp 38.102.83.234:6443: connect: connection refused" pod="openstack/rabbitmq-cell1-server-0" volumeName="persistence" Nov 25 15:56:53 crc kubenswrapper[4806]: E1125 15:56:53.128959 4806 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openstack/ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/persistentvolumeclaims/ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0\": dial tcp 38.102.83.234:6443: connect: connection refused" pod="openstack/ovsdbserver-nb-0" volumeName="ovndbcluster-nb-etc-ovn" Nov 25 15:56:53 crc kubenswrapper[4806]: E1125 15:56:53.183819 4806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="400ms" Nov 25 15:56:53 crc kubenswrapper[4806]: E1125 15:56:53.584810 4806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="800ms" Nov 25 15:56:54 crc kubenswrapper[4806]: E1125 15:56:54.147455 4806 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice\": RecentStats: 
unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-d070ff8a0e078f9372ecb12bac3ec19cc5d72391f9bc0097b42da7a739859c2a\": RecentStats: unable to find data in memory cache]" Nov 25 15:56:54 crc kubenswrapper[4806]: E1125 15:56:54.162609 4806 token_manager.go:121] "Couldn't update token" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/serviceaccounts/default/token\": dial tcp 38.102.83.234:6443: connect: connection refused" cacheKey="\"default\"/\"openshift-nmstate\"/[]string(nil)/3607/v1.BoundObjectReference{Kind:\"Pod\", APIVersion:\"v1\", Name:\"nmstate-console-plugin-5874bd7bc5-glshj\", UID:\"d7da5810-18e1-4ece-a8d1-a3a7f9c710a4\"}" Nov 25 15:56:54 crc kubenswrapper[4806]: E1125 15:56:54.386578 4806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="1.6s" Nov 25 15:56:55 crc kubenswrapper[4806]: E1125 15:56:55.988492 4806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="3.2s" Nov 25 15:56:56 crc kubenswrapper[4806]: I1125 15:56:56.215890 4806 generic.go:334] "Generic (PLEG): container finished" podID="2942b82c-e706-4f3e-ad7d-cef384dbcfba" containerID="4c1fe9b300a2e9b48e618190aa65a845fa8989130facea3c5502c99b1f61ddbc" exitCode=1 Nov 25 15:56:56 crc kubenswrapper[4806]: I1125 15:56:56.215964 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-8b74fc76b-wflwn" event={"ID":"2942b82c-e706-4f3e-ad7d-cef384dbcfba","Type":"ContainerDied","Data":"4c1fe9b300a2e9b48e618190aa65a845fa8989130facea3c5502c99b1f61ddbc"} Nov 25 15:56:56 crc kubenswrapper[4806]: I1125 15:56:56.216762 4806 status_manager.go:851] "Failed to get status for pod" podUID="2942b82c-e706-4f3e-ad7d-cef384dbcfba" pod="openshift-operators-redhat/loki-operator-controller-manager-8b74fc76b-wflwn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators-redhat/pods/loki-operator-controller-manager-8b74fc76b-wflwn\": dial tcp 38.102.83.234:6443: connect: connection refused" Nov 25 15:56:56 crc kubenswrapper[4806]: I1125 15:56:56.216786 4806 scope.go:117] "RemoveContainer" containerID="4c1fe9b300a2e9b48e618190aa65a845fa8989130facea3c5502c99b1f61ddbc" Nov 25 15:56:56 crc kubenswrapper[4806]: I1125 15:56:56.216950 4806 status_manager.go:851] "Failed to get status for pod" podUID="0ae73747-62e9-4046-99b6-3ed9145be32b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Nov 25 15:56:56 crc kubenswrapper[4806]: I1125 15:56:56.220511 4806 generic.go:334] "Generic (PLEG): container finished" podID="55283d70-ea30-4f51-8583-6d1adc92cfcb" containerID="a52aaad66e565ea72628b7272378fe64e2521d50f3339a29c2bd6a5cd0460ffe" exitCode=1 Nov 25 15:56:56 crc kubenswrapper[4806]: I1125 15:56:56.220568 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-769f4c6fc-r7k57" 
event={"ID":"55283d70-ea30-4f51-8583-6d1adc92cfcb","Type":"ContainerDied","Data":"a52aaad66e565ea72628b7272378fe64e2521d50f3339a29c2bd6a5cd0460ffe"} Nov 25 15:56:56 crc kubenswrapper[4806]: I1125 15:56:56.221447 4806 scope.go:117] "RemoveContainer" containerID="a52aaad66e565ea72628b7272378fe64e2521d50f3339a29c2bd6a5cd0460ffe" Nov 25 15:56:56 crc kubenswrapper[4806]: I1125 15:56:56.221712 4806 status_manager.go:851] "Failed to get status for pod" podUID="55283d70-ea30-4f51-8583-6d1adc92cfcb" pod="metallb-system/metallb-operator-controller-manager-769f4c6fc-r7k57" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-769f4c6fc-r7k57\": dial tcp 38.102.83.234:6443: connect: connection refused" Nov 25 15:56:56 crc kubenswrapper[4806]: I1125 15:56:56.222347 4806 status_manager.go:851] "Failed to get status for pod" podUID="2942b82c-e706-4f3e-ad7d-cef384dbcfba" pod="openshift-operators-redhat/loki-operator-controller-manager-8b74fc76b-wflwn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators-redhat/pods/loki-operator-controller-manager-8b74fc76b-wflwn\": dial tcp 38.102.83.234:6443: connect: connection refused" Nov 25 15:56:56 crc kubenswrapper[4806]: I1125 15:56:56.222826 4806 status_manager.go:851] "Failed to get status for pod" podUID="0ae73747-62e9-4046-99b6-3ed9145be32b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Nov 25 15:56:57 crc kubenswrapper[4806]: I1125 15:56:57.232414 4806 generic.go:334] "Generic (PLEG): container finished" podID="55283d70-ea30-4f51-8583-6d1adc92cfcb" containerID="f0496ed5afb902b2ce05d99889f62b33b20df43a83471fddf4e019c1461cfdb9" exitCode=1 Nov 25 15:56:57 crc kubenswrapper[4806]: I1125 15:56:57.232546 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-769f4c6fc-r7k57" event={"ID":"55283d70-ea30-4f51-8583-6d1adc92cfcb","Type":"ContainerDied","Data":"f0496ed5afb902b2ce05d99889f62b33b20df43a83471fddf4e019c1461cfdb9"} Nov 25 15:56:57 crc kubenswrapper[4806]: I1125 15:56:57.233080 4806 scope.go:117] "RemoveContainer" containerID="a52aaad66e565ea72628b7272378fe64e2521d50f3339a29c2bd6a5cd0460ffe" Nov 25 15:56:57 crc kubenswrapper[4806]: I1125 15:56:57.233719 4806 scope.go:117] "RemoveContainer" containerID="f0496ed5afb902b2ce05d99889f62b33b20df43a83471fddf4e019c1461cfdb9" Nov 25 15:56:57 crc kubenswrapper[4806]: E1125 15:56:57.234046 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=metallb-operator-controller-manager-769f4c6fc-r7k57_metallb-system(55283d70-ea30-4f51-8583-6d1adc92cfcb)\"" pod="metallb-system/metallb-operator-controller-manager-769f4c6fc-r7k57" podUID="55283d70-ea30-4f51-8583-6d1adc92cfcb" Nov 25 15:56:57 crc kubenswrapper[4806]: I1125 15:56:57.234169 4806 status_manager.go:851] "Failed to get status for pod" podUID="2942b82c-e706-4f3e-ad7d-cef384dbcfba" pod="openshift-operators-redhat/loki-operator-controller-manager-8b74fc76b-wflwn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators-redhat/pods/loki-operator-controller-manager-8b74fc76b-wflwn\": dial tcp 38.102.83.234:6443: connect: connection refused" Nov 25 15:56:57 crc kubenswrapper[4806]: I1125 15:56:57.234665 4806 
status_manager.go:851] "Failed to get status for pod" podUID="0ae73747-62e9-4046-99b6-3ed9145be32b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Nov 25 15:56:57 crc kubenswrapper[4806]: I1125 15:56:57.235276 4806 status_manager.go:851] "Failed to get status for pod" podUID="55283d70-ea30-4f51-8583-6d1adc92cfcb" pod="metallb-system/metallb-operator-controller-manager-769f4c6fc-r7k57" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-769f4c6fc-r7k57\": dial tcp 38.102.83.234:6443: connect: connection refused" Nov 25 15:56:57 crc kubenswrapper[4806]: I1125 15:56:57.238562 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-8b74fc76b-wflwn" event={"ID":"2942b82c-e706-4f3e-ad7d-cef384dbcfba","Type":"ContainerStarted","Data":"fdf34c20201eb182e72b3fe855599500cb493433a16eb08c1f1c079286a65cd6"} Nov 25 15:56:57 crc kubenswrapper[4806]: I1125 15:56:57.249032 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-8b74fc76b-wflwn" Nov 25 15:56:57 crc kubenswrapper[4806]: I1125 15:56:57.249434 4806 status_manager.go:851] "Failed to get status for pod" podUID="0ae73747-62e9-4046-99b6-3ed9145be32b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Nov 25 15:56:57 crc kubenswrapper[4806]: I1125 15:56:57.254493 4806 status_manager.go:851] "Failed to get status for pod" podUID="55283d70-ea30-4f51-8583-6d1adc92cfcb" pod="metallb-system/metallb-operator-controller-manager-769f4c6fc-r7k57" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-769f4c6fc-r7k57\": dial tcp 38.102.83.234:6443: connect: connection refused" Nov 25 15:56:57 crc kubenswrapper[4806]: I1125 15:56:57.255091 4806 status_manager.go:851] "Failed to get status for pod" podUID="2942b82c-e706-4f3e-ad7d-cef384dbcfba" pod="openshift-operators-redhat/loki-operator-controller-manager-8b74fc76b-wflwn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators-redhat/pods/loki-operator-controller-manager-8b74fc76b-wflwn\": dial tcp 38.102.83.234:6443: connect: connection refused" Nov 25 15:56:58 crc kubenswrapper[4806]: I1125 15:56:58.096152 4806 status_manager.go:851] "Failed to get status for pod" podUID="2942b82c-e706-4f3e-ad7d-cef384dbcfba" pod="openshift-operators-redhat/loki-operator-controller-manager-8b74fc76b-wflwn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators-redhat/pods/loki-operator-controller-manager-8b74fc76b-wflwn\": dial tcp 38.102.83.234:6443: connect: connection refused" Nov 25 15:56:58 crc kubenswrapper[4806]: I1125 15:56:58.096860 4806 status_manager.go:851] "Failed to get status for pod" podUID="0ae73747-62e9-4046-99b6-3ed9145be32b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Nov 25 15:56:58 crc kubenswrapper[4806]: I1125 15:56:58.097273 4806 status_manager.go:851] "Failed to get status for pod" 
podUID="55283d70-ea30-4f51-8583-6d1adc92cfcb" pod="metallb-system/metallb-operator-controller-manager-769f4c6fc-r7k57" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-769f4c6fc-r7k57\": dial tcp 38.102.83.234:6443: connect: connection refused" Nov 25 15:56:58 crc kubenswrapper[4806]: E1125 15:56:58.154017 4806 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openstack/glance-glance-default-internal-api-0: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/persistentvolumeclaims/glance-glance-default-internal-api-0\": dial tcp 38.102.83.234:6443: connect: connection refused" pod="openstack/glance-default-internal-api-0" volumeName="glance" Nov 25 15:56:58 crc kubenswrapper[4806]: E1125 15:56:58.154361 4806 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openstack/tempest-tests-tempest-0-9bb00: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/persistentvolumeclaims/tempest-tests-tempest-0-9bb00\": dial tcp 38.102.83.234:6443: connect: connection refused" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" volumeName="logs-volume-0" Nov 25 15:56:58 crc kubenswrapper[4806]: I1125 15:56:58.254509 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 25 15:56:58 crc kubenswrapper[4806]: I1125 15:56:58.254559 4806 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="cb9640bb56460bb7d1ed40effc6fa3cada4ee5369407b11a14c4a364b5b5c44b" exitCode=1 Nov 25 15:56:58 crc kubenswrapper[4806]: I1125 15:56:58.254794 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"cb9640bb56460bb7d1ed40effc6fa3cada4ee5369407b11a14c4a364b5b5c44b"} Nov 25 15:56:58 crc kubenswrapper[4806]: I1125 15:56:58.255724 4806 scope.go:117] "RemoveContainer" containerID="cb9640bb56460bb7d1ed40effc6fa3cada4ee5369407b11a14c4a364b5b5c44b" Nov 25 15:56:58 crc kubenswrapper[4806]: I1125 15:56:58.255830 4806 status_manager.go:851] "Failed to get status for pod" podUID="55283d70-ea30-4f51-8583-6d1adc92cfcb" pod="metallb-system/metallb-operator-controller-manager-769f4c6fc-r7k57" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-769f4c6fc-r7k57\": dial tcp 38.102.83.234:6443: connect: connection refused" Nov 25 15:56:58 crc kubenswrapper[4806]: I1125 15:56:58.256181 4806 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Nov 25 15:56:58 crc kubenswrapper[4806]: I1125 15:56:58.256550 4806 status_manager.go:851] "Failed to get status for pod" podUID="2942b82c-e706-4f3e-ad7d-cef384dbcfba" pod="openshift-operators-redhat/loki-operator-controller-manager-8b74fc76b-wflwn" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators-redhat/pods/loki-operator-controller-manager-8b74fc76b-wflwn\": dial tcp 38.102.83.234:6443: connect: connection refused" Nov 25 15:56:58 crc kubenswrapper[4806]: I1125 15:56:58.256864 4806 status_manager.go:851] "Failed to get status for pod" podUID="0ae73747-62e9-4046-99b6-3ed9145be32b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Nov 25 15:56:59 crc kubenswrapper[4806]: I1125 15:56:59.088255 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 15:56:59 crc kubenswrapper[4806]: I1125 15:56:59.089334 4806 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Nov 25 15:56:59 crc kubenswrapper[4806]: I1125 15:56:59.089630 4806 status_manager.go:851] "Failed to get status for pod" podUID="2942b82c-e706-4f3e-ad7d-cef384dbcfba" pod="openshift-operators-redhat/loki-operator-controller-manager-8b74fc76b-wflwn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators-redhat/pods/loki-operator-controller-manager-8b74fc76b-wflwn\": dial tcp 38.102.83.234:6443: connect: connection refused" Nov 25 15:56:59 crc kubenswrapper[4806]: I1125 15:56:59.089892 4806 status_manager.go:851] "Failed to get status for pod" podUID="0ae73747-62e9-4046-99b6-3ed9145be32b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Nov 25 15:56:59 crc kubenswrapper[4806]: I1125 15:56:59.090181 4806 status_manager.go:851] "Failed to get status for pod" podUID="55283d70-ea30-4f51-8583-6d1adc92cfcb" pod="metallb-system/metallb-operator-controller-manager-769f4c6fc-r7k57" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-769f4c6fc-r7k57\": dial tcp 38.102.83.234:6443: connect: connection refused" Nov 25 15:56:59 crc kubenswrapper[4806]: I1125 15:56:59.103678 4806 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229" Nov 25 15:56:59 crc kubenswrapper[4806]: I1125 15:56:59.103710 4806 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229" Nov 25 15:56:59 crc kubenswrapper[4806]: E1125 15:56:59.104139 4806 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 15:56:59 crc kubenswrapper[4806]: I1125 15:56:59.104769 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 15:56:59 crc kubenswrapper[4806]: E1125 15:56:59.189994 4806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.234:6443: connect: connection refused" interval="6.4s" Nov 25 15:56:59 crc kubenswrapper[4806]: I1125 15:56:59.265989 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ce564a1909cd45940028f255b0004f35c89f6532b0fdbfd193084fad01485ec2"} Nov 25 15:56:59 crc kubenswrapper[4806]: I1125 15:56:59.268945 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 25 15:56:59 crc kubenswrapper[4806]: I1125 15:56:59.269017 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"eb2526d3e56165283ef7527a7074a2084eb934925274e964070fdd82215e4ec1"} Nov 25 15:56:59 crc kubenswrapper[4806]: I1125 15:56:59.269961 4806 status_manager.go:851] "Failed to get status for pod" podUID="55283d70-ea30-4f51-8583-6d1adc92cfcb" pod="metallb-system/metallb-operator-controller-manager-769f4c6fc-r7k57" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-769f4c6fc-r7k57\": dial tcp 38.102.83.234:6443: connect: connection refused" Nov 25 15:56:59 crc kubenswrapper[4806]: I1125 15:56:59.270538 4806 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Nov 25 15:56:59 crc kubenswrapper[4806]: I1125 15:56:59.270849 4806 status_manager.go:851] "Failed to get status for pod" podUID="2942b82c-e706-4f3e-ad7d-cef384dbcfba" pod="openshift-operators-redhat/loki-operator-controller-manager-8b74fc76b-wflwn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators-redhat/pods/loki-operator-controller-manager-8b74fc76b-wflwn\": dial tcp 38.102.83.234:6443: connect: connection refused" Nov 25 15:56:59 crc kubenswrapper[4806]: I1125 15:56:59.271175 4806 status_manager.go:851] "Failed to get status for pod" podUID="0ae73747-62e9-4046-99b6-3ed9145be32b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Nov 25 15:56:59 crc kubenswrapper[4806]: I1125 15:56:59.906163 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 15:57:00 crc kubenswrapper[4806]: I1125 15:57:00.282832 4806 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="424092fa34a5d577d5547bf8da6cb9e78fb11798a6bfaf1bd464d4a647e5771a" exitCode=0 Nov 25 15:57:00 crc kubenswrapper[4806]: I1125 15:57:00.282931 4806 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"424092fa34a5d577d5547bf8da6cb9e78fb11798a6bfaf1bd464d4a647e5771a"} Nov 25 15:57:00 crc kubenswrapper[4806]: I1125 15:57:00.283146 4806 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229" Nov 25 15:57:00 crc kubenswrapper[4806]: I1125 15:57:00.283683 4806 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229" Nov 25 15:57:00 crc kubenswrapper[4806]: E1125 15:57:00.283932 4806 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 15:57:00 crc kubenswrapper[4806]: I1125 15:57:00.284071 4806 status_manager.go:851] "Failed to get status for pod" podUID="2942b82c-e706-4f3e-ad7d-cef384dbcfba" pod="openshift-operators-redhat/loki-operator-controller-manager-8b74fc76b-wflwn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators-redhat/pods/loki-operator-controller-manager-8b74fc76b-wflwn\": dial tcp 38.102.83.234:6443: connect: connection refused" Nov 25 15:57:00 crc kubenswrapper[4806]: I1125 15:57:00.284781 4806 status_manager.go:851] "Failed to get status for pod" podUID="0ae73747-62e9-4046-99b6-3ed9145be32b" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Nov 25 15:57:00 crc kubenswrapper[4806]: I1125 15:57:00.285301 4806 status_manager.go:851] "Failed to get status for pod" podUID="55283d70-ea30-4f51-8583-6d1adc92cfcb" pod="metallb-system/metallb-operator-controller-manager-769f4c6fc-r7k57" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/pods/metallb-operator-controller-manager-769f4c6fc-r7k57\": dial tcp 38.102.83.234:6443: connect: connection refused" Nov 25 15:57:00 crc kubenswrapper[4806]: I1125 15:57:00.285680 4806 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.234:6443: connect: connection refused" Nov 25 15:57:00 crc kubenswrapper[4806]: E1125 15:57:00.392165 4806 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/events\": dial tcp 38.102.83.234:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-state-metrics-0.187b4b103f1bce2c openstack 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openstack,Name:kube-state-metrics-0,UID:9c050b95-eb84-4171-a52c-ee1e4614c301,APIVersion:v1,ResourceVersion:48901,FieldPath:spec.containers{kube-state-metrics},},Reason:Unhealthy,Message:Liveness probe failed: HTTP probe failed with statuscode: 503,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-25 15:56:52.196027948 +0000 UTC m=+3844.848170379,LastTimestamp:2025-11-25 15:56:52.196027948 +0000 UTC 
m=+3844.848170379,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 25 15:57:01 crc kubenswrapper[4806]: I1125 15:57:01.302564 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c58ede50323a44a5139f7d0b56d7db717e3a493cf3b4c74e4f9766fec55f663e"} Nov 25 15:57:01 crc kubenswrapper[4806]: I1125 15:57:01.302602 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7fe0624dc83475d7ecc15870b67d6d5d43d9d24b816da4a08828ecbe55bd1da7"} Nov 25 15:57:01 crc kubenswrapper[4806]: I1125 15:57:01.302614 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2adb3fc847d24b43ab1e9cc407d0eb4222e67d7314659f47eff42e3e64ab5e95"} Nov 25 15:57:01 crc kubenswrapper[4806]: I1125 15:57:01.363364 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 15:57:01 crc kubenswrapper[4806]: I1125 15:57:01.367761 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 15:57:02 crc kubenswrapper[4806]: I1125 15:57:02.226515 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="9c050b95-eb84-4171-a52c-ee1e4614c301" containerName="kube-state-metrics" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 25 15:57:02 crc kubenswrapper[4806]: I1125 15:57:02.317658 4806 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229" Nov 25 15:57:02 crc kubenswrapper[4806]: I1125 15:57:02.317691 4806 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229" Nov 25 15:57:02 crc kubenswrapper[4806]: I1125 15:57:02.318152 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"094dc73d532420ece88b8e8c57ba1ca3d922083ca1b6eba783a263c39e5264c8"} Nov 25 15:57:02 crc kubenswrapper[4806]: I1125 15:57:02.318203 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"36f255b3095b4cec767c37d0b1ada3d0639fe621e0fa015d480dd73c1b0f0eb1"} Nov 25 15:57:02 crc kubenswrapper[4806]: I1125 15:57:02.318225 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 15:57:02 crc kubenswrapper[4806]: I1125 15:57:02.338877 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-769f4c6fc-r7k57" Nov 25 15:57:02 crc kubenswrapper[4806]: I1125 15:57:02.339532 4806 scope.go:117] "RemoveContainer" containerID="f0496ed5afb902b2ce05d99889f62b33b20df43a83471fddf4e019c1461cfdb9" Nov 25 15:57:02 crc kubenswrapper[4806]: E1125 15:57:02.341064 4806 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=metallb-operator-controller-manager-769f4c6fc-r7k57_metallb-system(55283d70-ea30-4f51-8583-6d1adc92cfcb)\"" pod="metallb-system/metallb-operator-controller-manager-769f4c6fc-r7k57" podUID="55283d70-ea30-4f51-8583-6d1adc92cfcb" Nov 25 15:57:04 crc kubenswrapper[4806]: I1125 15:57:04.123516 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 15:57:04 crc kubenswrapper[4806]: I1125 15:57:04.124232 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 15:57:04 crc kubenswrapper[4806]: I1125 15:57:04.132128 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 15:57:04 crc kubenswrapper[4806]: E1125 15:57:04.500378 4806 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-d070ff8a0e078f9372ecb12bac3ec19cc5d72391f9bc0097b42da7a739859c2a\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice\": RecentStats: unable to find data in memory cache]" Nov 25 15:57:05 crc kubenswrapper[4806]: I1125 15:57:05.361788 4806 generic.go:334] "Generic (PLEG): container finished" podID="1df7970b-bed8-4e27-b04b-66e513683875" containerID="a4bb4f5d49c85bcca3bed07050f729a33e473b1b22813853c3526ae21689d99a" exitCode=1 Nov 25 15:57:05 crc kubenswrapper[4806]: I1125 15:57:05.361840 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pxx5w" event={"ID":"1df7970b-bed8-4e27-b04b-66e513683875","Type":"ContainerDied","Data":"a4bb4f5d49c85bcca3bed07050f729a33e473b1b22813853c3526ae21689d99a"} Nov 25 15:57:05 crc kubenswrapper[4806]: I1125 15:57:05.362830 4806 scope.go:117] "RemoveContainer" containerID="a4bb4f5d49c85bcca3bed07050f729a33e473b1b22813853c3526ae21689d99a" Nov 25 15:57:05 crc kubenswrapper[4806]: I1125 15:57:05.366806 4806 generic.go:334] "Generic (PLEG): container finished" podID="24cfe3fd-9b1a-4b9a-9b99-1b089fa2124b" containerID="2fad5bb305496b231a70b8f34d0d79f39b5134e4f7e732af86b2147108ea72d3" exitCode=1 Nov 25 15:57:05 crc kubenswrapper[4806]: I1125 15:57:05.366850 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-fxzwv" event={"ID":"24cfe3fd-9b1a-4b9a-9b99-1b089fa2124b","Type":"ContainerDied","Data":"2fad5bb305496b231a70b8f34d0d79f39b5134e4f7e732af86b2147108ea72d3"} Nov 25 15:57:05 crc kubenswrapper[4806]: I1125 15:57:05.367675 4806 scope.go:117] "RemoveContainer" containerID="2fad5bb305496b231a70b8f34d0d79f39b5134e4f7e732af86b2147108ea72d3" Nov 25 15:57:06 crc kubenswrapper[4806]: I1125 15:57:06.381539 4806 generic.go:334] "Generic (PLEG): container finished" podID="1df7970b-bed8-4e27-b04b-66e513683875" containerID="2b783fc6a83fa6d426891cb44501d7686eb0660d8f45f02a83e1048e7a280f7a" exitCode=1 Nov 25 15:57:06 crc kubenswrapper[4806]: I1125 15:57:06.381701 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pxx5w" 
event={"ID":"1df7970b-bed8-4e27-b04b-66e513683875","Type":"ContainerDied","Data":"2b783fc6a83fa6d426891cb44501d7686eb0660d8f45f02a83e1048e7a280f7a"} Nov 25 15:57:06 crc kubenswrapper[4806]: I1125 15:57:06.382139 4806 scope.go:117] "RemoveContainer" containerID="a4bb4f5d49c85bcca3bed07050f729a33e473b1b22813853c3526ae21689d99a" Nov 25 15:57:06 crc kubenswrapper[4806]: I1125 15:57:06.382944 4806 scope.go:117] "RemoveContainer" containerID="2b783fc6a83fa6d426891cb44501d7686eb0660d8f45f02a83e1048e7a280f7a" Nov 25 15:57:06 crc kubenswrapper[4806]: E1125 15:57:06.383254 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=swift-operator-controller-manager-6fdc4fcf86-pxx5w_openstack-operators(1df7970b-bed8-4e27-b04b-66e513683875)\"" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pxx5w" podUID="1df7970b-bed8-4e27-b04b-66e513683875" Nov 25 15:57:06 crc kubenswrapper[4806]: I1125 15:57:06.389821 4806 generic.go:334] "Generic (PLEG): container finished" podID="9dc1bbe2-49c1-4601-9acf-b1887426fdd0" containerID="38ec4960d787a71f306d7d17637485fcdfefaf2be71028e8887caff38ad73108" exitCode=1 Nov 25 15:57:06 crc kubenswrapper[4806]: I1125 15:57:06.389891 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tzsbk" event={"ID":"9dc1bbe2-49c1-4601-9acf-b1887426fdd0","Type":"ContainerDied","Data":"38ec4960d787a71f306d7d17637485fcdfefaf2be71028e8887caff38ad73108"} Nov 25 15:57:06 crc kubenswrapper[4806]: I1125 15:57:06.390628 4806 scope.go:117] "RemoveContainer" containerID="38ec4960d787a71f306d7d17637485fcdfefaf2be71028e8887caff38ad73108" Nov 25 15:57:06 crc kubenswrapper[4806]: I1125 15:57:06.406951 4806 generic.go:334] "Generic (PLEG): container finished" podID="023302d1-a345-4f55-9ac1-4a2b674e36aa" containerID="66d3100277fece3ebaec51e57459785e80c46955b033a8efd0f93c711f299b50" exitCode=1 Nov 25 15:57:06 crc kubenswrapper[4806]: I1125 15:57:06.407039 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-b7g79" event={"ID":"023302d1-a345-4f55-9ac1-4a2b674e36aa","Type":"ContainerDied","Data":"66d3100277fece3ebaec51e57459785e80c46955b033a8efd0f93c711f299b50"} Nov 25 15:57:06 crc kubenswrapper[4806]: I1125 15:57:06.407684 4806 scope.go:117] "RemoveContainer" containerID="66d3100277fece3ebaec51e57459785e80c46955b033a8efd0f93c711f299b50" Nov 25 15:57:06 crc kubenswrapper[4806]: I1125 15:57:06.433735 4806 generic.go:334] "Generic (PLEG): container finished" podID="63efe3dc-03df-4494-9661-9a23a89c0974" containerID="33023ae764b8f7732a41009fc99d022b373df8ebd2d7ebcc5b10a06d1d0c7754" exitCode=1 Nov 25 15:57:06 crc kubenswrapper[4806]: I1125 15:57:06.433793 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-wfhhn" event={"ID":"63efe3dc-03df-4494-9661-9a23a89c0974","Type":"ContainerDied","Data":"33023ae764b8f7732a41009fc99d022b373df8ebd2d7ebcc5b10a06d1d0c7754"} Nov 25 15:57:06 crc kubenswrapper[4806]: I1125 15:57:06.434490 4806 scope.go:117] "RemoveContainer" containerID="33023ae764b8f7732a41009fc99d022b373df8ebd2d7ebcc5b10a06d1d0c7754" Nov 25 15:57:06 crc kubenswrapper[4806]: I1125 15:57:06.442810 4806 generic.go:334] "Generic (PLEG): container finished" podID="c1159ae9-b734-4012-b746-35d037ee4817" 
containerID="0ace9b21c89ba330bb9b430c1812b2367647cf057f87ea846daa097f3b315141" exitCode=1 Nov 25 15:57:06 crc kubenswrapper[4806]: I1125 15:57:06.442890 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-9thxp" event={"ID":"c1159ae9-b734-4012-b746-35d037ee4817","Type":"ContainerDied","Data":"0ace9b21c89ba330bb9b430c1812b2367647cf057f87ea846daa097f3b315141"} Nov 25 15:57:06 crc kubenswrapper[4806]: I1125 15:57:06.443681 4806 scope.go:117] "RemoveContainer" containerID="0ace9b21c89ba330bb9b430c1812b2367647cf057f87ea846daa097f3b315141" Nov 25 15:57:06 crc kubenswrapper[4806]: I1125 15:57:06.445579 4806 generic.go:334] "Generic (PLEG): container finished" podID="24cfe3fd-9b1a-4b9a-9b99-1b089fa2124b" containerID="0762b4028bf70ccb9304d2fd00a97a0b41ff1469ec2e50b95125a3b74a4bbe98" exitCode=1 Nov 25 15:57:06 crc kubenswrapper[4806]: I1125 15:57:06.445613 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-fxzwv" event={"ID":"24cfe3fd-9b1a-4b9a-9b99-1b089fa2124b","Type":"ContainerDied","Data":"0762b4028bf70ccb9304d2fd00a97a0b41ff1469ec2e50b95125a3b74a4bbe98"} Nov 25 15:57:06 crc kubenswrapper[4806]: I1125 15:57:06.446611 4806 scope.go:117] "RemoveContainer" containerID="0762b4028bf70ccb9304d2fd00a97a0b41ff1469ec2e50b95125a3b74a4bbe98" Nov 25 15:57:06 crc kubenswrapper[4806]: E1125 15:57:06.446889 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=placement-operator-controller-manager-5db546f9d9-fxzwv_openstack-operators(24cfe3fd-9b1a-4b9a-9b99-1b089fa2124b)\"" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-fxzwv" podUID="24cfe3fd-9b1a-4b9a-9b99-1b089fa2124b" Nov 25 15:57:06 crc kubenswrapper[4806]: I1125 15:57:06.492473 4806 scope.go:117] "RemoveContainer" containerID="2fad5bb305496b231a70b8f34d0d79f39b5134e4f7e732af86b2147108ea72d3" Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.298552 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-8b74fc76b-wflwn" Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.341308 4806 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.457782 4806 generic.go:334] "Generic (PLEG): container finished" podID="61457634-dc4d-4ad9-9bdc-c95aae5df022" containerID="c0a7d9f15f2c0d8cf95a32752e649092e170d008a8a85cd29a613ccbf062a7bb" exitCode=1 Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.457822 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-w5r5m" event={"ID":"61457634-dc4d-4ad9-9bdc-c95aae5df022","Type":"ContainerDied","Data":"c0a7d9f15f2c0d8cf95a32752e649092e170d008a8a85cd29a613ccbf062a7bb"} Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.458773 4806 scope.go:117] "RemoveContainer" containerID="c0a7d9f15f2c0d8cf95a32752e649092e170d008a8a85cd29a613ccbf062a7bb" Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.461998 4806 generic.go:334] "Generic (PLEG): container finished" podID="9dc1bbe2-49c1-4601-9acf-b1887426fdd0" containerID="47ce2dbf6b9fad4dbf4a26373eb1de3e2b150b92b3b8a6d28468cccaa5b03d7b" exitCode=1 Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 
15:57:07.462077 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tzsbk" event={"ID":"9dc1bbe2-49c1-4601-9acf-b1887426fdd0","Type":"ContainerDied","Data":"47ce2dbf6b9fad4dbf4a26373eb1de3e2b150b92b3b8a6d28468cccaa5b03d7b"} Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.462141 4806 scope.go:117] "RemoveContainer" containerID="38ec4960d787a71f306d7d17637485fcdfefaf2be71028e8887caff38ad73108" Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.462697 4806 scope.go:117] "RemoveContainer" containerID="47ce2dbf6b9fad4dbf4a26373eb1de3e2b150b92b3b8a6d28468cccaa5b03d7b" Nov 25 15:57:07 crc kubenswrapper[4806]: E1125 15:57:07.462997 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=ovn-operator-controller-manager-66cf5c67ff-tzsbk_openstack-operators(9dc1bbe2-49c1-4601-9acf-b1887426fdd0)\"" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tzsbk" podUID="9dc1bbe2-49c1-4601-9acf-b1887426fdd0" Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.466583 4806 generic.go:334] "Generic (PLEG): container finished" podID="2a080dd6-0904-4756-8b02-39d10465fea2" containerID="b9b9f8ea6c3f6b55997a111211be457e64faa838a61c15d6c4ebe42531affe52" exitCode=1 Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.466663 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-cqwgq" event={"ID":"2a080dd6-0904-4756-8b02-39d10465fea2","Type":"ContainerDied","Data":"b9b9f8ea6c3f6b55997a111211be457e64faa838a61c15d6c4ebe42531affe52"} Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.467537 4806 scope.go:117] "RemoveContainer" containerID="b9b9f8ea6c3f6b55997a111211be457e64faa838a61c15d6c4ebe42531affe52" Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.470343 4806 generic.go:334] "Generic (PLEG): container finished" podID="537dc134-0732-4dfc-b0be-9c16d3d191be" containerID="5ba1c65c7e44365e690f6d1f20930029db4c687bbe3e6b21836d6c0a97ec5a92" exitCode=1 Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.470416 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-qk9m2" event={"ID":"537dc134-0732-4dfc-b0be-9c16d3d191be","Type":"ContainerDied","Data":"5ba1c65c7e44365e690f6d1f20930029db4c687bbe3e6b21836d6c0a97ec5a92"} Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.471520 4806 scope.go:117] "RemoveContainer" containerID="5ba1c65c7e44365e690f6d1f20930029db4c687bbe3e6b21836d6c0a97ec5a92" Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.485822 4806 generic.go:334] "Generic (PLEG): container finished" podID="8294cfe0-6c14-49bc-bd5b-d614a68893ce" containerID="cf4ab4cc9e2934f3786dce00d83d4d816e1a9646282fcddfdde67f22ced89a1f" exitCode=1 Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.485912 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-jcrbm" event={"ID":"8294cfe0-6c14-49bc-bd5b-d614a68893ce","Type":"ContainerDied","Data":"cf4ab4cc9e2934f3786dce00d83d4d816e1a9646282fcddfdde67f22ced89a1f"} Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.486760 4806 scope.go:117] "RemoveContainer" containerID="cf4ab4cc9e2934f3786dce00d83d4d816e1a9646282fcddfdde67f22ced89a1f" Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.490987 4806 generic.go:334] 
"Generic (PLEG): container finished" podID="fbf78fa8-8b88-454e-a7dc-0e75f463bc45" containerID="f5778c542722a20ee02a9a3f06a4bdf25708e7aa27fe27fa79ed522fc527c7a0" exitCode=1 Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.491089 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-r8dnj" event={"ID":"fbf78fa8-8b88-454e-a7dc-0e75f463bc45","Type":"ContainerDied","Data":"f5778c542722a20ee02a9a3f06a4bdf25708e7aa27fe27fa79ed522fc527c7a0"} Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.491901 4806 scope.go:117] "RemoveContainer" containerID="f5778c542722a20ee02a9a3f06a4bdf25708e7aa27fe27fa79ed522fc527c7a0" Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.500268 4806 generic.go:334] "Generic (PLEG): container finished" podID="e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329" containerID="9c5f142297a42528951601947da3c70b72b4321eda5c6d136413e0d96bc995dd" exitCode=1 Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.500410 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-xlzgr" event={"ID":"e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329","Type":"ContainerDied","Data":"9c5f142297a42528951601947da3c70b72b4321eda5c6d136413e0d96bc995dd"} Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.501233 4806 scope.go:117] "RemoveContainer" containerID="9c5f142297a42528951601947da3c70b72b4321eda5c6d136413e0d96bc995dd" Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.535416 4806 generic.go:334] "Generic (PLEG): container finished" podID="ec8a3bcc-2127-44bc-8f89-db3ece24a9b9" containerID="b47c894be1fc9d3ddec4b41e3d12acded7c87aaa42dc9a90df57d7e75bcd8512" exitCode=1 Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.535699 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-q6z52" event={"ID":"ec8a3bcc-2127-44bc-8f89-db3ece24a9b9","Type":"ContainerDied","Data":"b47c894be1fc9d3ddec4b41e3d12acded7c87aaa42dc9a90df57d7e75bcd8512"} Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.536449 4806 scope.go:117] "RemoveContainer" containerID="b47c894be1fc9d3ddec4b41e3d12acded7c87aaa42dc9a90df57d7e75bcd8512" Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.560422 4806 generic.go:334] "Generic (PLEG): container finished" podID="63efe3dc-03df-4494-9661-9a23a89c0974" containerID="8fef9d6a1cd1a8b83d21f8b18544ad6d89480ecf4bb608db94fbc1369a5cdb56" exitCode=1 Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.560512 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-wfhhn" event={"ID":"63efe3dc-03df-4494-9661-9a23a89c0974","Type":"ContainerDied","Data":"8fef9d6a1cd1a8b83d21f8b18544ad6d89480ecf4bb608db94fbc1369a5cdb56"} Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.561454 4806 scope.go:117] "RemoveContainer" containerID="8fef9d6a1cd1a8b83d21f8b18544ad6d89480ecf4bb608db94fbc1369a5cdb56" Nov 25 15:57:07 crc kubenswrapper[4806]: E1125 15:57:07.561979 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=nova-operator-controller-manager-79556f57fc-wfhhn_openstack-operators(63efe3dc-03df-4494-9661-9a23a89c0974)\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-wfhhn" podUID="63efe3dc-03df-4494-9661-9a23a89c0974" Nov 25 15:57:07 crc kubenswrapper[4806]: 
I1125 15:57:07.580477 4806 generic.go:334] "Generic (PLEG): container finished" podID="d2f4f05a-5ae5-4f49-87f2-a1e642ee0ac7" containerID="d2f0e7fe2e7c7ebb4ac49b1098b7e31338d32690364de37dec7e8ca49dce5f1a" exitCode=1 Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.580776 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c5xhr" event={"ID":"d2f4f05a-5ae5-4f49-87f2-a1e642ee0ac7","Type":"ContainerDied","Data":"d2f0e7fe2e7c7ebb4ac49b1098b7e31338d32690364de37dec7e8ca49dce5f1a"} Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.581661 4806 scope.go:117] "RemoveContainer" containerID="d2f0e7fe2e7c7ebb4ac49b1098b7e31338d32690364de37dec7e8ca49dce5f1a" Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.585699 4806 generic.go:334] "Generic (PLEG): container finished" podID="4877ab9d-8cd3-4270-915f-c73167e93b49" containerID="dab89d285e58e0bf73be55a651247515bc08d486e7796f08dee40cee0ded5cee" exitCode=1 Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.585860 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-wnx44" event={"ID":"4877ab9d-8cd3-4270-915f-c73167e93b49","Type":"ContainerDied","Data":"dab89d285e58e0bf73be55a651247515bc08d486e7796f08dee40cee0ded5cee"} Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.587004 4806 scope.go:117] "RemoveContainer" containerID="dab89d285e58e0bf73be55a651247515bc08d486e7796f08dee40cee0ded5cee" Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.597247 4806 generic.go:334] "Generic (PLEG): container finished" podID="461ceb26-b86c-4bb8-9550-131351dfa3e5" containerID="b9b740009a808f3b280568bdbca5eb2af6f37caba0f58b2b4c0d1dcc8d4ad842" exitCode=1 Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.597524 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-h9qg8" event={"ID":"461ceb26-b86c-4bb8-9550-131351dfa3e5","Type":"ContainerDied","Data":"b9b740009a808f3b280568bdbca5eb2af6f37caba0f58b2b4c0d1dcc8d4ad842"} Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.600488 4806 scope.go:117] "RemoveContainer" containerID="b9b740009a808f3b280568bdbca5eb2af6f37caba0f58b2b4c0d1dcc8d4ad842" Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.613487 4806 scope.go:117] "RemoveContainer" containerID="33023ae764b8f7732a41009fc99d022b373df8ebd2d7ebcc5b10a06d1d0c7754" Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.634613 4806 generic.go:334] "Generic (PLEG): container finished" podID="de253966-f7ff-485f-8108-b8ee0fd795bf" containerID="b9cddda6fd0d81afc149bb233d62ea5b2c229d34857d532b34868b2d96f7023e" exitCode=1 Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.634662 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-wfsxk" event={"ID":"de253966-f7ff-485f-8108-b8ee0fd795bf","Type":"ContainerDied","Data":"b9cddda6fd0d81afc149bb233d62ea5b2c229d34857d532b34868b2d96f7023e"} Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.635721 4806 scope.go:117] "RemoveContainer" containerID="b9cddda6fd0d81afc149bb233d62ea5b2c229d34857d532b34868b2d96f7023e" Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.639960 4806 generic.go:334] "Generic (PLEG): container finished" podID="023302d1-a345-4f55-9ac1-4a2b674e36aa" containerID="a66a2e8628b3b3e71a98d587732c39b82d714af6cf9a05c19630a45b9be4b894" exitCode=1 Nov 25 15:57:07 crc 
kubenswrapper[4806]: I1125 15:57:07.640051 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-b7g79" event={"ID":"023302d1-a345-4f55-9ac1-4a2b674e36aa","Type":"ContainerDied","Data":"a66a2e8628b3b3e71a98d587732c39b82d714af6cf9a05c19630a45b9be4b894"} Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.640458 4806 scope.go:117] "RemoveContainer" containerID="a66a2e8628b3b3e71a98d587732c39b82d714af6cf9a05c19630a45b9be4b894" Nov 25 15:57:07 crc kubenswrapper[4806]: E1125 15:57:07.640746 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=watcher-operator-controller-manager-864885998-b7g79_openstack-operators(023302d1-a345-4f55-9ac1-4a2b674e36aa)\"" pod="openstack-operators/watcher-operator-controller-manager-864885998-b7g79" podUID="023302d1-a345-4f55-9ac1-4a2b674e36aa" Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.642353 4806 generic.go:334] "Generic (PLEG): container finished" podID="b97ff802-8b8f-47d4-bff1-7d6876f780ff" containerID="2dff34746d4c23c4f1049058de88d626a301c378228c5e62171235a1b3185e7b" exitCode=1 Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.642518 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7c468db9ff-2r8gr" event={"ID":"b97ff802-8b8f-47d4-bff1-7d6876f780ff","Type":"ContainerDied","Data":"2dff34746d4c23c4f1049058de88d626a301c378228c5e62171235a1b3185e7b"} Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.644104 4806 scope.go:117] "RemoveContainer" containerID="2dff34746d4c23c4f1049058de88d626a301c378228c5e62171235a1b3185e7b" Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.646283 4806 generic.go:334] "Generic (PLEG): container finished" podID="dbedcc0b-12de-4497-a9f3-a9df6c88a74f" containerID="3e97a83745b3a29044d7a8dacb5fc07334fac77cf8d7c8954308be6a7b1fe747" exitCode=1 Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.646307 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-687f46fc78-xdmx6" event={"ID":"dbedcc0b-12de-4497-a9f3-a9df6c88a74f","Type":"ContainerDied","Data":"3e97a83745b3a29044d7a8dacb5fc07334fac77cf8d7c8954308be6a7b1fe747"} Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.648655 4806 scope.go:117] "RemoveContainer" containerID="3e97a83745b3a29044d7a8dacb5fc07334fac77cf8d7c8954308be6a7b1fe747" Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.651705 4806 generic.go:334] "Generic (PLEG): container finished" podID="fd7fd3ac-d6f9-4f62-9cbd-e6a28b88be30" containerID="ba810bf7af63f00f329f5f77fb29ea46e9f889f5b53d88cd25b572b713949905" exitCode=1 Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.651769 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2snr9" event={"ID":"fd7fd3ac-d6f9-4f62-9cbd-e6a28b88be30","Type":"ContainerDied","Data":"ba810bf7af63f00f329f5f77fb29ea46e9f889f5b53d88cd25b572b713949905"} Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.652523 4806 scope.go:117] "RemoveContainer" containerID="ba810bf7af63f00f329f5f77fb29ea46e9f889f5b53d88cd25b572b713949905" Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.655683 4806 generic.go:334] "Generic (PLEG): container finished" podID="c1159ae9-b734-4012-b746-35d037ee4817" 
containerID="7e97e1d8a07bf7be533911c49b4fec4cfcd4c18c9108c8cf5d2cae7baf9a4ee6" exitCode=1 Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.655790 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-9thxp" event={"ID":"c1159ae9-b734-4012-b746-35d037ee4817","Type":"ContainerDied","Data":"7e97e1d8a07bf7be533911c49b4fec4cfcd4c18c9108c8cf5d2cae7baf9a4ee6"} Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.656255 4806 scope.go:117] "RemoveContainer" containerID="7e97e1d8a07bf7be533911c49b4fec4cfcd4c18c9108c8cf5d2cae7baf9a4ee6" Nov 25 15:57:07 crc kubenswrapper[4806]: E1125 15:57:07.656882 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=mariadb-operator-controller-manager-cb6c4fdb7-9thxp_openstack-operators(c1159ae9-b734-4012-b746-35d037ee4817)\"" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-9thxp" podUID="c1159ae9-b734-4012-b746-35d037ee4817" Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.663325 4806 generic.go:334] "Generic (PLEG): container finished" podID="40a580de-1093-4adc-a98c-e18202bee9e3" containerID="4d484cc4f798791aa2650cd62a25f5acbdfb1760eeb9df81db216a97c7e082c0" exitCode=1 Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.663738 4806 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229" Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.663757 4806 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229" Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.663958 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-w6686" event={"ID":"40a580de-1093-4adc-a98c-e18202bee9e3","Type":"ContainerDied","Data":"4d484cc4f798791aa2650cd62a25f5acbdfb1760eeb9df81db216a97c7e082c0"} Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.664710 4806 scope.go:117] "RemoveContainer" containerID="4d484cc4f798791aa2650cd62a25f5acbdfb1760eeb9df81db216a97c7e082c0" Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.670087 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.943525 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-w6686" Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.945723 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-qk9m2" Nov 25 15:57:07 crc kubenswrapper[4806]: I1125 15:57:07.964765 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-wfsxk" Nov 25 15:57:08 crc kubenswrapper[4806]: I1125 15:57:08.082517 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-r8dnj" Nov 25 15:57:08 crc kubenswrapper[4806]: I1125 15:57:08.150262 4806 scope.go:117] "RemoveContainer" containerID="66d3100277fece3ebaec51e57459785e80c46955b033a8efd0f93c711f299b50" Nov 25 15:57:08 crc kubenswrapper[4806]: I1125 
15:57:08.248010 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-q6z52" Nov 25 15:57:08 crc kubenswrapper[4806]: I1125 15:57:08.264147 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-w5r5m" Nov 25 15:57:08 crc kubenswrapper[4806]: I1125 15:57:08.363573 4806 scope.go:117] "RemoveContainer" containerID="0ace9b21c89ba330bb9b430c1812b2367647cf057f87ea846daa097f3b315141" Nov 25 15:57:08 crc kubenswrapper[4806]: I1125 15:57:08.378667 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-774b86978c-jcrbm" Nov 25 15:57:08 crc kubenswrapper[4806]: I1125 15:57:08.391150 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-h9qg8" Nov 25 15:57:08 crc kubenswrapper[4806]: I1125 15:57:08.616022 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-9thxp" Nov 25 15:57:08 crc kubenswrapper[4806]: I1125 15:57:08.640377 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c5xhr" Nov 25 15:57:08 crc kubenswrapper[4806]: I1125 15:57:08.654858 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tzsbk" Nov 25 15:57:08 crc kubenswrapper[4806]: I1125 15:57:08.663978 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-wfhhn" Nov 25 15:57:08 crc kubenswrapper[4806]: I1125 15:57:08.685032 4806 scope.go:117] "RemoveContainer" containerID="47ce2dbf6b9fad4dbf4a26373eb1de3e2b150b92b3b8a6d28468cccaa5b03d7b" Nov 25 15:57:08 crc kubenswrapper[4806]: E1125 15:57:08.685421 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=ovn-operator-controller-manager-66cf5c67ff-tzsbk_openstack-operators(9dc1bbe2-49c1-4601-9acf-b1887426fdd0)\"" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tzsbk" podUID="9dc1bbe2-49c1-4601-9acf-b1887426fdd0" Nov 25 15:57:08 crc kubenswrapper[4806]: I1125 15:57:08.694231 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pxx5w" Nov 25 15:57:08 crc kubenswrapper[4806]: I1125 15:57:08.695502 4806 scope.go:117] "RemoveContainer" containerID="2b783fc6a83fa6d426891cb44501d7686eb0660d8f45f02a83e1048e7a280f7a" Nov 25 15:57:08 crc kubenswrapper[4806]: E1125 15:57:08.695816 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=swift-operator-controller-manager-6fdc4fcf86-pxx5w_openstack-operators(1df7970b-bed8-4e27-b04b-66e513683875)\"" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pxx5w" podUID="1df7970b-bed8-4e27-b04b-66e513683875" Nov 25 15:57:08 crc kubenswrapper[4806]: I1125 15:57:08.702397 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-cqwgq" Nov 25 
15:57:08 crc kubenswrapper[4806]: I1125 15:57:08.735445 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-q6z52" Nov 25 15:57:08 crc kubenswrapper[4806]: I1125 15:57:08.755204 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-r8dnj" event={"ID":"fbf78fa8-8b88-454e-a7dc-0e75f463bc45","Type":"ContainerStarted","Data":"5068f7ea323d2eb3e41af7fb9981aacbc59e3750b5ab70fd051f5a0c7a02ed40"} Nov 25 15:57:08 crc kubenswrapper[4806]: I1125 15:57:08.755376 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-r8dnj" Nov 25 15:57:08 crc kubenswrapper[4806]: I1125 15:57:08.766713 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-fxzwv" Nov 25 15:57:08 crc kubenswrapper[4806]: I1125 15:57:08.767595 4806 scope.go:117] "RemoveContainer" containerID="0762b4028bf70ccb9304d2fd00a97a0b41ff1469ec2e50b95125a3b74a4bbe98" Nov 25 15:57:08 crc kubenswrapper[4806]: E1125 15:57:08.767834 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=placement-operator-controller-manager-5db546f9d9-fxzwv_openstack-operators(24cfe3fd-9b1a-4b9a-9b99-1b089fa2124b)\"" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-fxzwv" podUID="24cfe3fd-9b1a-4b9a-9b99-1b089fa2124b" Nov 25 15:57:08 crc kubenswrapper[4806]: I1125 15:57:08.773693 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-cqwgq" event={"ID":"2a080dd6-0904-4756-8b02-39d10465fea2","Type":"ContainerStarted","Data":"2f385426d22149016a8a8f0eb60e7ec1d7a446aa945f4d341154983de4cf6df1"} Nov 25 15:57:08 crc kubenswrapper[4806]: I1125 15:57:08.773962 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-cqwgq" Nov 25 15:57:08 crc kubenswrapper[4806]: I1125 15:57:08.775608 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-687f46fc78-xdmx6" Nov 25 15:57:08 crc kubenswrapper[4806]: I1125 15:57:08.781349 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-qk9m2" Nov 25 15:57:08 crc kubenswrapper[4806]: I1125 15:57:08.790932 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-jcrbm" event={"ID":"8294cfe0-6c14-49bc-bd5b-d614a68893ce","Type":"ContainerStarted","Data":"d5c4a4936135aeb5fab162aa80bb1cd85f57a83218a1d7192a2e5e62b980aff0"} Nov 25 15:57:08 crc kubenswrapper[4806]: I1125 15:57:08.791936 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-774b86978c-jcrbm" Nov 25 15:57:08 crc kubenswrapper[4806]: I1125 15:57:08.799079 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-687f46fc78-xdmx6" Nov 25 15:57:08 crc kubenswrapper[4806]: I1125 15:57:08.809478 4806 scope.go:117] "RemoveContainer" 
containerID="8fef9d6a1cd1a8b83d21f8b18544ad6d89480ecf4bb608db94fbc1369a5cdb56" Nov 25 15:57:08 crc kubenswrapper[4806]: E1125 15:57:08.810013 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=nova-operator-controller-manager-79556f57fc-wfhhn_openstack-operators(63efe3dc-03df-4494-9661-9a23a89c0974)\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-wfhhn" podUID="63efe3dc-03df-4494-9661-9a23a89c0974" Nov 25 15:57:08 crc kubenswrapper[4806]: I1125 15:57:08.813422 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5cb74df96-wnx44" Nov 25 15:57:08 crc kubenswrapper[4806]: I1125 15:57:08.820014 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-h9qg8" Nov 25 15:57:08 crc kubenswrapper[4806]: I1125 15:57:08.828529 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-wfsxk" Nov 25 15:57:08 crc kubenswrapper[4806]: I1125 15:57:08.830739 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5cb74df96-wnx44" Nov 25 15:57:08 crc kubenswrapper[4806]: I1125 15:57:08.836202 4806 scope.go:117] "RemoveContainer" containerID="7e97e1d8a07bf7be533911c49b4fec4cfcd4c18c9108c8cf5d2cae7baf9a4ee6" Nov 25 15:57:08 crc kubenswrapper[4806]: E1125 15:57:08.836716 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=mariadb-operator-controller-manager-cb6c4fdb7-9thxp_openstack-operators(c1159ae9-b734-4012-b746-35d037ee4817)\"" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-9thxp" podUID="c1159ae9-b734-4012-b746-35d037ee4817" Nov 25 15:57:08 crc kubenswrapper[4806]: I1125 15:57:08.842392 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-xlzgr" event={"ID":"e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329","Type":"ContainerStarted","Data":"fed496ee542004e7022f18e409a386bbec2d6b6e5c766055066243555557b699"} Nov 25 15:57:08 crc kubenswrapper[4806]: I1125 15:57:08.843482 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-xlzgr" Nov 25 15:57:08 crc kubenswrapper[4806]: I1125 15:57:08.848865 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-w6686" Nov 25 15:57:08 crc kubenswrapper[4806]: I1125 15:57:08.853753 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c5xhr" Nov 25 15:57:08 crc kubenswrapper[4806]: I1125 15:57:08.858340 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-864885998-b7g79" Nov 25 15:57:08 crc kubenswrapper[4806]: I1125 15:57:08.859083 4806 scope.go:117] "RemoveContainer" containerID="a66a2e8628b3b3e71a98d587732c39b82d714af6cf9a05c19630a45b9be4b894" Nov 25 15:57:08 crc kubenswrapper[4806]: I1125 15:57:08.859119 4806 generic.go:334] "Generic (PLEG): container finished" 
podID="61457634-dc4d-4ad9-9bdc-c95aae5df022" containerID="e201866bdb5574817e7779f284d2af66d967e6ac4db12ded3382816e94a59990" exitCode=1 Nov 25 15:57:08 crc kubenswrapper[4806]: I1125 15:57:08.859226 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-w5r5m" event={"ID":"61457634-dc4d-4ad9-9bdc-c95aae5df022","Type":"ContainerDied","Data":"e201866bdb5574817e7779f284d2af66d967e6ac4db12ded3382816e94a59990"} Nov 25 15:57:08 crc kubenswrapper[4806]: I1125 15:57:08.859339 4806 scope.go:117] "RemoveContainer" containerID="c0a7d9f15f2c0d8cf95a32752e649092e170d008a8a85cd29a613ccbf062a7bb" Nov 25 15:57:08 crc kubenswrapper[4806]: E1125 15:57:08.859399 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=watcher-operator-controller-manager-864885998-b7g79_openstack-operators(023302d1-a345-4f55-9ac1-4a2b674e36aa)\"" pod="openstack-operators/watcher-operator-controller-manager-864885998-b7g79" podUID="023302d1-a345-4f55-9ac1-4a2b674e36aa" Nov 25 15:57:08 crc kubenswrapper[4806]: I1125 15:57:08.859792 4806 scope.go:117] "RemoveContainer" containerID="e201866bdb5574817e7779f284d2af66d967e6ac4db12ded3382816e94a59990" Nov 25 15:57:08 crc kubenswrapper[4806]: I1125 15:57:08.859827 4806 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229" Nov 25 15:57:08 crc kubenswrapper[4806]: I1125 15:57:08.859847 4806 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="833c1bd1-6a94-4c25-b5bf-a9ed3d1b3229" Nov 25 15:57:08 crc kubenswrapper[4806]: E1125 15:57:08.860013 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=keystone-operator-controller-manager-748dc6576f-w5r5m_openstack-operators(61457634-dc4d-4ad9-9bdc-c95aae5df022)\"" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-w5r5m" podUID="61457634-dc4d-4ad9-9bdc-c95aae5df022" Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.523844 4806 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="75e5e6e1-0ef3-46c5-bae8-bcfd2ed9c6ff" Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.875401 4806 generic.go:334] "Generic (PLEG): container finished" podID="d2f4f05a-5ae5-4f49-87f2-a1e642ee0ac7" containerID="09aebda378fd6b0ae2f963e4701ba32e50017dde50d7ed6cff7151dd7deff37d" exitCode=1 Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.875454 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c5xhr" event={"ID":"d2f4f05a-5ae5-4f49-87f2-a1e642ee0ac7","Type":"ContainerDied","Data":"09aebda378fd6b0ae2f963e4701ba32e50017dde50d7ed6cff7151dd7deff37d"} Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.875488 4806 scope.go:117] "RemoveContainer" containerID="d2f0e7fe2e7c7ebb4ac49b1098b7e31338d32690364de37dec7e8ca49dce5f1a" Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.878895 4806 scope.go:117] "RemoveContainer" containerID="09aebda378fd6b0ae2f963e4701ba32e50017dde50d7ed6cff7151dd7deff37d" Nov 25 15:57:09 crc kubenswrapper[4806]: E1125 15:57:09.879608 4806 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=neutron-operator-controller-manager-7c57c8bbc4-c5xhr_openstack-operators(d2f4f05a-5ae5-4f49-87f2-a1e642ee0ac7)\"" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c5xhr" podUID="d2f4f05a-5ae5-4f49-87f2-a1e642ee0ac7" Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.882305 4806 generic.go:334] "Generic (PLEG): container finished" podID="2a080dd6-0904-4756-8b02-39d10465fea2" containerID="2f385426d22149016a8a8f0eb60e7ec1d7a446aa945f4d341154983de4cf6df1" exitCode=1 Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.883509 4806 scope.go:117] "RemoveContainer" containerID="2f385426d22149016a8a8f0eb60e7ec1d7a446aa945f4d341154983de4cf6df1" Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.883726 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-cqwgq" event={"ID":"2a080dd6-0904-4756-8b02-39d10465fea2","Type":"ContainerDied","Data":"2f385426d22149016a8a8f0eb60e7ec1d7a446aa945f4d341154983de4cf6df1"} Nov 25 15:57:09 crc kubenswrapper[4806]: E1125 15:57:09.883842 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=octavia-operator-controller-manager-fd75fd47d-cqwgq_openstack-operators(2a080dd6-0904-4756-8b02-39d10465fea2)\"" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-cqwgq" podUID="2a080dd6-0904-4756-8b02-39d10465fea2" Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.887456 4806 generic.go:334] "Generic (PLEG): container finished" podID="de253966-f7ff-485f-8108-b8ee0fd795bf" containerID="a7270c81b5343dbdd2ead73eb2d56f83232536271072a95208b9300eadfe6b26" exitCode=1 Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.887532 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-wfsxk" event={"ID":"de253966-f7ff-485f-8108-b8ee0fd795bf","Type":"ContainerDied","Data":"a7270c81b5343dbdd2ead73eb2d56f83232536271072a95208b9300eadfe6b26"} Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.888165 4806 scope.go:117] "RemoveContainer" containerID="a7270c81b5343dbdd2ead73eb2d56f83232536271072a95208b9300eadfe6b26" Nov 25 15:57:09 crc kubenswrapper[4806]: E1125 15:57:09.888542 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=designate-operator-controller-manager-7d695c9b56-wfsxk_openstack-operators(de253966-f7ff-485f-8108-b8ee0fd795bf)\"" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-wfsxk" podUID="de253966-f7ff-485f-8108-b8ee0fd795bf" Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.890266 4806 generic.go:334] "Generic (PLEG): container finished" podID="537dc134-0732-4dfc-b0be-9c16d3d191be" containerID="94228597d270f083de50b77776d8c30f33c95195d80f6b3ad22ce1dd2023f5eb" exitCode=1 Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.890342 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-qk9m2" 
event={"ID":"537dc134-0732-4dfc-b0be-9c16d3d191be","Type":"ContainerDied","Data":"94228597d270f083de50b77776d8c30f33c95195d80f6b3ad22ce1dd2023f5eb"} Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.891110 4806 scope.go:117] "RemoveContainer" containerID="94228597d270f083de50b77776d8c30f33c95195d80f6b3ad22ce1dd2023f5eb" Nov 25 15:57:09 crc kubenswrapper[4806]: E1125 15:57:09.891448 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=barbican-operator-controller-manager-86dc4d89c8-qk9m2_openstack-operators(537dc134-0732-4dfc-b0be-9c16d3d191be)\"" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-qk9m2" podUID="537dc134-0732-4dfc-b0be-9c16d3d191be" Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.894440 4806 generic.go:334] "Generic (PLEG): container finished" podID="dbedcc0b-12de-4497-a9f3-a9df6c88a74f" containerID="ee4d08abab1052444c8e7eb2608e0079e952644420ef096d14150cd6b35ec357" exitCode=1 Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.894654 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-687f46fc78-xdmx6" event={"ID":"dbedcc0b-12de-4497-a9f3-a9df6c88a74f","Type":"ContainerDied","Data":"ee4d08abab1052444c8e7eb2608e0079e952644420ef096d14150cd6b35ec357"} Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.895817 4806 scope.go:117] "RemoveContainer" containerID="ee4d08abab1052444c8e7eb2608e0079e952644420ef096d14150cd6b35ec357" Nov 25 15:57:09 crc kubenswrapper[4806]: E1125 15:57:09.896239 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=telemetry-operator-controller-manager-687f46fc78-xdmx6_openstack-operators(dbedcc0b-12de-4497-a9f3-a9df6c88a74f)\"" pod="openstack-operators/telemetry-operator-controller-manager-687f46fc78-xdmx6" podUID="dbedcc0b-12de-4497-a9f3-a9df6c88a74f" Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.898672 4806 generic.go:334] "Generic (PLEG): container finished" podID="461ceb26-b86c-4bb8-9550-131351dfa3e5" containerID="d68b47941f6bbd54640d8dfae0bef09051cb12cfe04ddaf0c35e112599252f9f" exitCode=1 Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.898789 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-h9qg8" event={"ID":"461ceb26-b86c-4bb8-9550-131351dfa3e5","Type":"ContainerDied","Data":"d68b47941f6bbd54640d8dfae0bef09051cb12cfe04ddaf0c35e112599252f9f"} Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.899905 4806 scope.go:117] "RemoveContainer" containerID="d68b47941f6bbd54640d8dfae0bef09051cb12cfe04ddaf0c35e112599252f9f" Nov 25 15:57:09 crc kubenswrapper[4806]: E1125 15:57:09.900244 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=horizon-operator-controller-manager-68c9694994-h9qg8_openstack-operators(461ceb26-b86c-4bb8-9550-131351dfa3e5)\"" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-h9qg8" podUID="461ceb26-b86c-4bb8-9550-131351dfa3e5" Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.903699 4806 generic.go:334] "Generic (PLEG): container finished" podID="40a580de-1093-4adc-a98c-e18202bee9e3" 
containerID="3742c475ebc15a02f48441bd4833229bbd2fd580dce69600e04bf1e60d1f4709" exitCode=1 Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.903766 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-w6686" event={"ID":"40a580de-1093-4adc-a98c-e18202bee9e3","Type":"ContainerDied","Data":"3742c475ebc15a02f48441bd4833229bbd2fd580dce69600e04bf1e60d1f4709"} Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.904558 4806 scope.go:117] "RemoveContainer" containerID="3742c475ebc15a02f48441bd4833229bbd2fd580dce69600e04bf1e60d1f4709" Nov 25 15:57:09 crc kubenswrapper[4806]: E1125 15:57:09.904920 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=cinder-operator-controller-manager-79856dc55c-w6686_openstack-operators(40a580de-1093-4adc-a98c-e18202bee9e3)\"" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-w6686" podUID="40a580de-1093-4adc-a98c-e18202bee9e3" Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.907487 4806 generic.go:334] "Generic (PLEG): container finished" podID="fd7fd3ac-d6f9-4f62-9cbd-e6a28b88be30" containerID="2d0dc6bee41ecdaf4e2ae149c6becb1ef27f42826af6f68b4281004329c220ba" exitCode=1 Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.907596 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2snr9" event={"ID":"fd7fd3ac-d6f9-4f62-9cbd-e6a28b88be30","Type":"ContainerDied","Data":"2d0dc6bee41ecdaf4e2ae149c6becb1ef27f42826af6f68b4281004329c220ba"} Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.908368 4806 scope.go:117] "RemoveContainer" containerID="2d0dc6bee41ecdaf4e2ae149c6becb1ef27f42826af6f68b4281004329c220ba" Nov 25 15:57:09 crc kubenswrapper[4806]: E1125 15:57:09.908668 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=operator pod=rabbitmq-cluster-operator-manager-668c99d594-2snr9_openstack-operators(fd7fd3ac-d6f9-4f62-9cbd-e6a28b88be30)\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2snr9" podUID="fd7fd3ac-d6f9-4f62-9cbd-e6a28b88be30" Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.910685 4806 generic.go:334] "Generic (PLEG): container finished" podID="ec8a3bcc-2127-44bc-8f89-db3ece24a9b9" containerID="8a64d11a5b2d6a0ca8cc5e8c3736b13970f7e38cc4a3e25ed8fd70be2e5b4528" exitCode=1 Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.910786 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-q6z52" event={"ID":"ec8a3bcc-2127-44bc-8f89-db3ece24a9b9","Type":"ContainerDied","Data":"8a64d11a5b2d6a0ca8cc5e8c3736b13970f7e38cc4a3e25ed8fd70be2e5b4528"} Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.911231 4806 scope.go:117] "RemoveContainer" containerID="8a64d11a5b2d6a0ca8cc5e8c3736b13970f7e38cc4a3e25ed8fd70be2e5b4528" Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.911793 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 15:57:09 crc kubenswrapper[4806]: E1125 15:57:09.912139 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s 
restarting failed container=manager pod=ironic-operator-controller-manager-5bfcdc958c-q6z52_openstack-operators(ec8a3bcc-2127-44bc-8f89-db3ece24a9b9)\"" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-q6z52" podUID="ec8a3bcc-2127-44bc-8f89-db3ece24a9b9" Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.914796 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-wnx44" event={"ID":"4877ab9d-8cd3-4270-915f-c73167e93b49","Type":"ContainerStarted","Data":"f229228e6f0ac7585f8dc791aeb1027b7b76acdb5df5d9e211e13489ae15198b"} Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.919473 4806 scope.go:117] "RemoveContainer" containerID="e201866bdb5574817e7779f284d2af66d967e6ac4db12ded3382816e94a59990" Nov 25 15:57:09 crc kubenswrapper[4806]: E1125 15:57:09.919721 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=keystone-operator-controller-manager-748dc6576f-w5r5m_openstack-operators(61457634-dc4d-4ad9-9bdc-c95aae5df022)\"" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-w5r5m" podUID="61457634-dc4d-4ad9-9bdc-c95aae5df022" Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.921614 4806 generic.go:334] "Generic (PLEG): container finished" podID="8294cfe0-6c14-49bc-bd5b-d614a68893ce" containerID="d5c4a4936135aeb5fab162aa80bb1cd85f57a83218a1d7192a2e5e62b980aff0" exitCode=1 Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.921673 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-jcrbm" event={"ID":"8294cfe0-6c14-49bc-bd5b-d614a68893ce","Type":"ContainerDied","Data":"d5c4a4936135aeb5fab162aa80bb1cd85f57a83218a1d7192a2e5e62b980aff0"} Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.922131 4806 scope.go:117] "RemoveContainer" containerID="d5c4a4936135aeb5fab162aa80bb1cd85f57a83218a1d7192a2e5e62b980aff0" Nov 25 15:57:09 crc kubenswrapper[4806]: E1125 15:57:09.922499 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=heat-operator-controller-manager-774b86978c-jcrbm_openstack-operators(8294cfe0-6c14-49bc-bd5b-d614a68893ce)\"" pod="openstack-operators/heat-operator-controller-manager-774b86978c-jcrbm" podUID="8294cfe0-6c14-49bc-bd5b-d614a68893ce" Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.924549 4806 generic.go:334] "Generic (PLEG): container finished" podID="fbf78fa8-8b88-454e-a7dc-0e75f463bc45" containerID="5068f7ea323d2eb3e41af7fb9981aacbc59e3750b5ab70fd051f5a0c7a02ed40" exitCode=1 Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.924600 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-r8dnj" event={"ID":"fbf78fa8-8b88-454e-a7dc-0e75f463bc45","Type":"ContainerDied","Data":"5068f7ea323d2eb3e41af7fb9981aacbc59e3750b5ab70fd051f5a0c7a02ed40"} Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.925007 4806 scope.go:117] "RemoveContainer" containerID="5068f7ea323d2eb3e41af7fb9981aacbc59e3750b5ab70fd051f5a0c7a02ed40" Nov 25 15:57:09 crc kubenswrapper[4806]: E1125 15:57:09.925226 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed 
container=manager pod=glance-operator-controller-manager-68b95954c9-r8dnj_openstack-operators(fbf78fa8-8b88-454e-a7dc-0e75f463bc45)\"" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-r8dnj" podUID="fbf78fa8-8b88-454e-a7dc-0e75f463bc45" Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.927534 4806 generic.go:334] "Generic (PLEG): container finished" podID="e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329" containerID="fed496ee542004e7022f18e409a386bbec2d6b6e5c766055066243555557b699" exitCode=1 Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.927581 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-xlzgr" event={"ID":"e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329","Type":"ContainerDied","Data":"fed496ee542004e7022f18e409a386bbec2d6b6e5c766055066243555557b699"} Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.927949 4806 scope.go:117] "RemoveContainer" containerID="fed496ee542004e7022f18e409a386bbec2d6b6e5c766055066243555557b699" Nov 25 15:57:09 crc kubenswrapper[4806]: E1125 15:57:09.928194 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=infra-operator-controller-manager-d5cc86f4b-xlzgr_openstack-operators(e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329)\"" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-xlzgr" podUID="e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329" Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.939463 4806 generic.go:334] "Generic (PLEG): container finished" podID="9cc0ebc5-e3d4-4bae-8b33-032d950705ff" containerID="c174a8987bd879647aacf86f977e75ea5653757984a42f7c296ee0655e02a9ab" exitCode=1 Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.939551 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-bwwh4" event={"ID":"9cc0ebc5-e3d4-4bae-8b33-032d950705ff","Type":"ContainerDied","Data":"c174a8987bd879647aacf86f977e75ea5653757984a42f7c296ee0655e02a9ab"} Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.942749 4806 scope.go:117] "RemoveContainer" containerID="c174a8987bd879647aacf86f977e75ea5653757984a42f7c296ee0655e02a9ab" Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.947064 4806 generic.go:334] "Generic (PLEG): container finished" podID="8fe87500-5164-48de-a495-f6d74b05b7f9" containerID="ada592e4c2506aff56f8a5b7ebaf6e416e1835db21b9704c36fc651546129603" exitCode=1 Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.947139 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-779bfcf6cb-zxvzf" event={"ID":"8fe87500-5164-48de-a495-f6d74b05b7f9","Type":"ContainerDied","Data":"ada592e4c2506aff56f8a5b7ebaf6e416e1835db21b9704c36fc651546129603"} Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.947991 4806 scope.go:117] "RemoveContainer" containerID="ada592e4c2506aff56f8a5b7ebaf6e416e1835db21b9704c36fc651546129603" Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.949758 4806 generic.go:334] "Generic (PLEG): container finished" podID="b97ff802-8b8f-47d4-bff1-7d6876f780ff" containerID="ded0d9c74e9d3eb143c8fecfdf74a1beb31495c1b8cc25bcdbb8637fb2d4b19f" exitCode=1 Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.949813 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7c468db9ff-2r8gr" 
event={"ID":"b97ff802-8b8f-47d4-bff1-7d6876f780ff","Type":"ContainerDied","Data":"ded0d9c74e9d3eb143c8fecfdf74a1beb31495c1b8cc25bcdbb8637fb2d4b19f"} Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.950576 4806 scope.go:117] "RemoveContainer" containerID="ded0d9c74e9d3eb143c8fecfdf74a1beb31495c1b8cc25bcdbb8637fb2d4b19f" Nov 25 15:57:09 crc kubenswrapper[4806]: E1125 15:57:09.950824 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=openstack-operator-controller-manager-7c468db9ff-2r8gr_openstack-operators(b97ff802-8b8f-47d4-bff1-7d6876f780ff)\"" pod="openstack-operators/openstack-operator-controller-manager-7c468db9ff-2r8gr" podUID="b97ff802-8b8f-47d4-bff1-7d6876f780ff" Nov 25 15:57:09 crc kubenswrapper[4806]: I1125 15:57:09.963962 4806 scope.go:117] "RemoveContainer" containerID="b9b9f8ea6c3f6b55997a111211be457e64faa838a61c15d6c4ebe42531affe52" Nov 25 15:57:10 crc kubenswrapper[4806]: I1125 15:57:10.058538 4806 scope.go:117] "RemoveContainer" containerID="b9cddda6fd0d81afc149bb233d62ea5b2c229d34857d532b34868b2d96f7023e" Nov 25 15:57:10 crc kubenswrapper[4806]: I1125 15:57:10.141745 4806 scope.go:117] "RemoveContainer" containerID="5ba1c65c7e44365e690f6d1f20930029db4c687bbe3e6b21836d6c0a97ec5a92" Nov 25 15:57:10 crc kubenswrapper[4806]: I1125 15:57:10.234094 4806 scope.go:117] "RemoveContainer" containerID="3e97a83745b3a29044d7a8dacb5fc07334fac77cf8d7c8954308be6a7b1fe747" Nov 25 15:57:10 crc kubenswrapper[4806]: I1125 15:57:10.363730 4806 scope.go:117] "RemoveContainer" containerID="b9b740009a808f3b280568bdbca5eb2af6f37caba0f58b2b4c0d1dcc8d4ad842" Nov 25 15:57:10 crc kubenswrapper[4806]: I1125 15:57:10.550355 4806 scope.go:117] "RemoveContainer" containerID="4d484cc4f798791aa2650cd62a25f5acbdfb1760eeb9df81db216a97c7e082c0" Nov 25 15:57:10 crc kubenswrapper[4806]: I1125 15:57:10.727984 4806 scope.go:117] "RemoveContainer" containerID="ba810bf7af63f00f329f5f77fb29ea46e9f889f5b53d88cd25b572b713949905" Nov 25 15:57:10 crc kubenswrapper[4806]: I1125 15:57:10.971980 4806 scope.go:117] "RemoveContainer" containerID="3742c475ebc15a02f48441bd4833229bbd2fd580dce69600e04bf1e60d1f4709" Nov 25 15:57:10 crc kubenswrapper[4806]: E1125 15:57:10.972263 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=cinder-operator-controller-manager-79856dc55c-w6686_openstack-operators(40a580de-1093-4adc-a98c-e18202bee9e3)\"" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-w6686" podUID="40a580de-1093-4adc-a98c-e18202bee9e3" Nov 25 15:57:10 crc kubenswrapper[4806]: I1125 15:57:10.975153 4806 scope.go:117] "RemoveContainer" containerID="94228597d270f083de50b77776d8c30f33c95195d80f6b3ad22ce1dd2023f5eb" Nov 25 15:57:10 crc kubenswrapper[4806]: E1125 15:57:10.975436 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=barbican-operator-controller-manager-86dc4d89c8-qk9m2_openstack-operators(537dc134-0732-4dfc-b0be-9c16d3d191be)\"" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-qk9m2" podUID="537dc134-0732-4dfc-b0be-9c16d3d191be" Nov 25 15:57:10 crc kubenswrapper[4806]: I1125 15:57:10.982189 4806 generic.go:334] "Generic (PLEG): container finished" 
podID="b3220f94-14c9-4820-9d1b-6b4bb1b635fd" containerID="c55e92f5c825e60f50c87d3013e9b535cfd09ba37bdadcf7a992285d5daf3ed2" exitCode=1 Nov 25 15:57:10 crc kubenswrapper[4806]: I1125 15:57:10.982267 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-d7m7g" event={"ID":"b3220f94-14c9-4820-9d1b-6b4bb1b635fd","Type":"ContainerDied","Data":"c55e92f5c825e60f50c87d3013e9b535cfd09ba37bdadcf7a992285d5daf3ed2"} Nov 25 15:57:10 crc kubenswrapper[4806]: I1125 15:57:10.982963 4806 scope.go:117] "RemoveContainer" containerID="c55e92f5c825e60f50c87d3013e9b535cfd09ba37bdadcf7a992285d5daf3ed2" Nov 25 15:57:10 crc kubenswrapper[4806]: I1125 15:57:10.988170 4806 scope.go:117] "RemoveContainer" containerID="d68b47941f6bbd54640d8dfae0bef09051cb12cfe04ddaf0c35e112599252f9f" Nov 25 15:57:10 crc kubenswrapper[4806]: E1125 15:57:10.988509 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=horizon-operator-controller-manager-68c9694994-h9qg8_openstack-operators(461ceb26-b86c-4bb8-9550-131351dfa3e5)\"" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-h9qg8" podUID="461ceb26-b86c-4bb8-9550-131351dfa3e5" Nov 25 15:57:10 crc kubenswrapper[4806]: I1125 15:57:10.992668 4806 scope.go:117] "RemoveContainer" containerID="09aebda378fd6b0ae2f963e4701ba32e50017dde50d7ed6cff7151dd7deff37d" Nov 25 15:57:10 crc kubenswrapper[4806]: E1125 15:57:10.993096 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=neutron-operator-controller-manager-7c57c8bbc4-c5xhr_openstack-operators(d2f4f05a-5ae5-4f49-87f2-a1e642ee0ac7)\"" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c5xhr" podUID="d2f4f05a-5ae5-4f49-87f2-a1e642ee0ac7" Nov 25 15:57:11 crc kubenswrapper[4806]: I1125 15:57:11.001460 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-bwwh4" event={"ID":"9cc0ebc5-e3d4-4bae-8b33-032d950705ff","Type":"ContainerStarted","Data":"de848d22362879624289f9ecee22fcc0b2cb858214ed26668955fd7be3bc2e4d"} Nov 25 15:57:11 crc kubenswrapper[4806]: I1125 15:57:11.001748 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-bwwh4" Nov 25 15:57:11 crc kubenswrapper[4806]: I1125 15:57:11.003953 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-779bfcf6cb-zxvzf" event={"ID":"8fe87500-5164-48de-a495-f6d74b05b7f9","Type":"ContainerStarted","Data":"0688e42cee90a3692e707f9c80f786c83db5535ebd0f8a107a5cff7c41696a66"} Nov 25 15:57:11 crc kubenswrapper[4806]: I1125 15:57:11.004152 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-779bfcf6cb-zxvzf" Nov 25 15:57:11 crc kubenswrapper[4806]: I1125 15:57:11.008261 4806 scope.go:117] "RemoveContainer" containerID="ee4d08abab1052444c8e7eb2608e0079e952644420ef096d14150cd6b35ec357" Nov 25 15:57:11 crc kubenswrapper[4806]: E1125 15:57:11.008601 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager 
pod=telemetry-operator-controller-manager-687f46fc78-xdmx6_openstack-operators(dbedcc0b-12de-4497-a9f3-a9df6c88a74f)\"" pod="openstack-operators/telemetry-operator-controller-manager-687f46fc78-xdmx6" podUID="dbedcc0b-12de-4497-a9f3-a9df6c88a74f" Nov 25 15:57:11 crc kubenswrapper[4806]: I1125 15:57:11.010913 4806 scope.go:117] "RemoveContainer" containerID="2f385426d22149016a8a8f0eb60e7ec1d7a446aa945f4d341154983de4cf6df1" Nov 25 15:57:11 crc kubenswrapper[4806]: E1125 15:57:11.011186 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=octavia-operator-controller-manager-fd75fd47d-cqwgq_openstack-operators(2a080dd6-0904-4756-8b02-39d10465fea2)\"" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-cqwgq" podUID="2a080dd6-0904-4756-8b02-39d10465fea2" Nov 25 15:57:11 crc kubenswrapper[4806]: I1125 15:57:11.013705 4806 scope.go:117] "RemoveContainer" containerID="a7270c81b5343dbdd2ead73eb2d56f83232536271072a95208b9300eadfe6b26" Nov 25 15:57:11 crc kubenswrapper[4806]: I1125 15:57:11.013773 4806 scope.go:117] "RemoveContainer" containerID="8a64d11a5b2d6a0ca8cc5e8c3736b13970f7e38cc4a3e25ed8fd70be2e5b4528" Nov 25 15:57:11 crc kubenswrapper[4806]: I1125 15:57:11.013904 4806 scope.go:117] "RemoveContainer" containerID="d5c4a4936135aeb5fab162aa80bb1cd85f57a83218a1d7192a2e5e62b980aff0" Nov 25 15:57:11 crc kubenswrapper[4806]: E1125 15:57:11.014018 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=designate-operator-controller-manager-7d695c9b56-wfsxk_openstack-operators(de253966-f7ff-485f-8108-b8ee0fd795bf)\"" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-wfsxk" podUID="de253966-f7ff-485f-8108-b8ee0fd795bf" Nov 25 15:57:11 crc kubenswrapper[4806]: E1125 15:57:11.014042 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=ironic-operator-controller-manager-5bfcdc958c-q6z52_openstack-operators(ec8a3bcc-2127-44bc-8f89-db3ece24a9b9)\"" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-q6z52" podUID="ec8a3bcc-2127-44bc-8f89-db3ece24a9b9" Nov 25 15:57:11 crc kubenswrapper[4806]: E1125 15:57:11.014151 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=heat-operator-controller-manager-774b86978c-jcrbm_openstack-operators(8294cfe0-6c14-49bc-bd5b-d614a68893ce)\"" pod="openstack-operators/heat-operator-controller-manager-774b86978c-jcrbm" podUID="8294cfe0-6c14-49bc-bd5b-d614a68893ce" Nov 25 15:57:11 crc kubenswrapper[4806]: I1125 15:57:11.014226 4806 scope.go:117] "RemoveContainer" containerID="fed496ee542004e7022f18e409a386bbec2d6b6e5c766055066243555557b699" Nov 25 15:57:11 crc kubenswrapper[4806]: I1125 15:57:11.014262 4806 scope.go:117] "RemoveContainer" containerID="5068f7ea323d2eb3e41af7fb9981aacbc59e3750b5ab70fd051f5a0c7a02ed40" Nov 25 15:57:11 crc kubenswrapper[4806]: E1125 15:57:11.014448 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager 
pod=infra-operator-controller-manager-d5cc86f4b-xlzgr_openstack-operators(e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329)\"" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-xlzgr" podUID="e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329" Nov 25 15:57:11 crc kubenswrapper[4806]: E1125 15:57:11.014572 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=glance-operator-controller-manager-68b95954c9-r8dnj_openstack-operators(fbf78fa8-8b88-454e-a7dc-0e75f463bc45)\"" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-r8dnj" podUID="fbf78fa8-8b88-454e-a7dc-0e75f463bc45" Nov 25 15:57:11 crc kubenswrapper[4806]: I1125 15:57:11.068741 4806 scope.go:117] "RemoveContainer" containerID="b47c894be1fc9d3ddec4b41e3d12acded7c87aaa42dc9a90df57d7e75bcd8512" Nov 25 15:57:11 crc kubenswrapper[4806]: I1125 15:57:11.259048 4806 scope.go:117] "RemoveContainer" containerID="cf4ab4cc9e2934f3786dce00d83d4d816e1a9646282fcddfdde67f22ced89a1f" Nov 25 15:57:11 crc kubenswrapper[4806]: I1125 15:57:11.344858 4806 scope.go:117] "RemoveContainer" containerID="f5778c542722a20ee02a9a3f06a4bdf25708e7aa27fe27fa79ed522fc527c7a0" Nov 25 15:57:11 crc kubenswrapper[4806]: I1125 15:57:11.488940 4806 scope.go:117] "RemoveContainer" containerID="9c5f142297a42528951601947da3c70b72b4321eda5c6d136413e0d96bc995dd" Nov 25 15:57:11 crc kubenswrapper[4806]: I1125 15:57:11.589523 4806 scope.go:117] "RemoveContainer" containerID="2dff34746d4c23c4f1049058de88d626a301c378228c5e62171235a1b3185e7b" Nov 25 15:57:12 crc kubenswrapper[4806]: I1125 15:57:12.031067 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-d7m7g" event={"ID":"b3220f94-14c9-4820-9d1b-6b4bb1b635fd","Type":"ContainerStarted","Data":"f71f15b6351db16d58b82c24e39e0959335fadf0ee5c8621cca140a995cb80f8"} Nov 25 15:57:12 crc kubenswrapper[4806]: I1125 15:57:12.032174 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-d7m7g" Nov 25 15:57:12 crc kubenswrapper[4806]: I1125 15:57:12.046949 4806 generic.go:334] "Generic (PLEG): container finished" podID="9cc0ebc5-e3d4-4bae-8b33-032d950705ff" containerID="de848d22362879624289f9ecee22fcc0b2cb858214ed26668955fd7be3bc2e4d" exitCode=1 Nov 25 15:57:12 crc kubenswrapper[4806]: I1125 15:57:12.047011 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-bwwh4" event={"ID":"9cc0ebc5-e3d4-4bae-8b33-032d950705ff","Type":"ContainerDied","Data":"de848d22362879624289f9ecee22fcc0b2cb858214ed26668955fd7be3bc2e4d"} Nov 25 15:57:12 crc kubenswrapper[4806]: I1125 15:57:12.047078 4806 scope.go:117] "RemoveContainer" containerID="c174a8987bd879647aacf86f977e75ea5653757984a42f7c296ee0655e02a9ab" Nov 25 15:57:12 crc kubenswrapper[4806]: I1125 15:57:12.047688 4806 scope.go:117] "RemoveContainer" containerID="de848d22362879624289f9ecee22fcc0b2cb858214ed26668955fd7be3bc2e4d" Nov 25 15:57:12 crc kubenswrapper[4806]: E1125 15:57:12.048152 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=manila-operator-controller-manager-58bb8d67cc-bwwh4_openstack-operators(9cc0ebc5-e3d4-4bae-8b33-032d950705ff)\"" 
pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-bwwh4" podUID="9cc0ebc5-e3d4-4bae-8b33-032d950705ff" Nov 25 15:57:12 crc kubenswrapper[4806]: I1125 15:57:12.194065 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="9c050b95-eb84-4171-a52c-ee1e4614c301" containerName="kube-state-metrics" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 25 15:57:12 crc kubenswrapper[4806]: I1125 15:57:12.194387 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/kube-state-metrics-0" Nov 25 15:57:12 crc kubenswrapper[4806]: I1125 15:57:12.195303 4806 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-state-metrics" containerStatusID={"Type":"cri-o","ID":"cf644d795bc915975201c7fec89c55d56e0f456a484dc74bdd31850914009ad9"} pod="openstack/kube-state-metrics-0" containerMessage="Container kube-state-metrics failed liveness probe, will be restarted" Nov 25 15:57:12 crc kubenswrapper[4806]: I1125 15:57:12.195430 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="9c050b95-eb84-4171-a52c-ee1e4614c301" containerName="kube-state-metrics" containerID="cri-o://cf644d795bc915975201c7fec89c55d56e0f456a484dc74bdd31850914009ad9" gracePeriod=30 Nov 25 15:57:12 crc kubenswrapper[4806]: I1125 15:57:12.502069 4806 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 15:57:12 crc kubenswrapper[4806]: I1125 15:57:12.703212 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7c468db9ff-2r8gr" Nov 25 15:57:12 crc kubenswrapper[4806]: I1125 15:57:12.704512 4806 scope.go:117] "RemoveContainer" containerID="ded0d9c74e9d3eb143c8fecfdf74a1beb31495c1b8cc25bcdbb8637fb2d4b19f" Nov 25 15:57:12 crc kubenswrapper[4806]: E1125 15:57:12.705221 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=openstack-operator-controller-manager-7c468db9ff-2r8gr_openstack-operators(b97ff802-8b8f-47d4-bff1-7d6876f780ff)\"" pod="openstack-operators/openstack-operator-controller-manager-7c468db9ff-2r8gr" podUID="b97ff802-8b8f-47d4-bff1-7d6876f780ff" Nov 25 15:57:13 crc kubenswrapper[4806]: I1125 15:57:13.077866 4806 scope.go:117] "RemoveContainer" containerID="de848d22362879624289f9ecee22fcc0b2cb858214ed26668955fd7be3bc2e4d" Nov 25 15:57:13 crc kubenswrapper[4806]: E1125 15:57:13.078181 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=manila-operator-controller-manager-58bb8d67cc-bwwh4_openstack-operators(9cc0ebc5-e3d4-4bae-8b33-032d950705ff)\"" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-bwwh4" podUID="9cc0ebc5-e3d4-4bae-8b33-032d950705ff" Nov 25 15:57:13 crc kubenswrapper[4806]: I1125 15:57:13.078776 4806 generic.go:334] "Generic (PLEG): container finished" podID="9c050b95-eb84-4171-a52c-ee1e4614c301" containerID="cf644d795bc915975201c7fec89c55d56e0f456a484dc74bdd31850914009ad9" exitCode=2 Nov 25 15:57:13 crc kubenswrapper[4806]: I1125 15:57:13.078914 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"9c050b95-eb84-4171-a52c-ee1e4614c301","Type":"ContainerDied","Data":"cf644d795bc915975201c7fec89c55d56e0f456a484dc74bdd31850914009ad9"} Nov 25 15:57:14 crc kubenswrapper[4806]: I1125 15:57:14.097450 4806 generic.go:334] "Generic (PLEG): container finished" podID="9c050b95-eb84-4171-a52c-ee1e4614c301" containerID="278dbc08168029043c8641e691c58edb8d189ed3015ea7febf2fc5ff9d5866ca" exitCode=1 Nov 25 15:57:14 crc kubenswrapper[4806]: I1125 15:57:14.098360 4806 scope.go:117] "RemoveContainer" containerID="278dbc08168029043c8641e691c58edb8d189ed3015ea7febf2fc5ff9d5866ca" Nov 25 15:57:14 crc kubenswrapper[4806]: I1125 15:57:14.101301 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9c050b95-eb84-4171-a52c-ee1e4614c301","Type":"ContainerDied","Data":"278dbc08168029043c8641e691c58edb8d189ed3015ea7febf2fc5ff9d5866ca"} Nov 25 15:57:14 crc kubenswrapper[4806]: I1125 15:57:14.101383 4806 scope.go:117] "RemoveContainer" containerID="cf644d795bc915975201c7fec89c55d56e0f456a484dc74bdd31850914009ad9" Nov 25 15:57:14 crc kubenswrapper[4806]: E1125 15:57:14.787505 4806 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-d070ff8a0e078f9372ecb12bac3ec19cc5d72391f9bc0097b42da7a739859c2a\": RecentStats: unable to find data in memory cache]" Nov 25 15:57:16 crc kubenswrapper[4806]: I1125 15:57:16.900597 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-kcgrl" Nov 25 15:57:17 crc kubenswrapper[4806]: I1125 15:57:17.089852 4806 scope.go:117] "RemoveContainer" containerID="f0496ed5afb902b2ce05d99889f62b33b20df43a83471fddf4e019c1461cfdb9" Nov 25 15:57:17 crc kubenswrapper[4806]: I1125 15:57:17.144646 4806 generic.go:334] "Generic (PLEG): container finished" podID="9c050b95-eb84-4171-a52c-ee1e4614c301" containerID="709e33fd89647016ae3b26ded0666c8ac5171b08b8ba93b79e4d63126b281706" exitCode=1 Nov 25 15:57:17 crc kubenswrapper[4806]: I1125 15:57:17.144698 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9c050b95-eb84-4171-a52c-ee1e4614c301","Type":"ContainerDied","Data":"709e33fd89647016ae3b26ded0666c8ac5171b08b8ba93b79e4d63126b281706"} Nov 25 15:57:17 crc kubenswrapper[4806]: I1125 15:57:17.144741 4806 scope.go:117] "RemoveContainer" containerID="278dbc08168029043c8641e691c58edb8d189ed3015ea7febf2fc5ff9d5866ca" Nov 25 15:57:17 crc kubenswrapper[4806]: I1125 15:57:17.145634 4806 scope.go:117] "RemoveContainer" containerID="709e33fd89647016ae3b26ded0666c8ac5171b08b8ba93b79e4d63126b281706" Nov 25 15:57:17 crc kubenswrapper[4806]: E1125 15:57:17.145935 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-state-metrics pod=kube-state-metrics-0_openstack(9c050b95-eb84-4171-a52c-ee1e4614c301)\"" pod="openstack/kube-state-metrics-0" podUID="9c050b95-eb84-4171-a52c-ee1e4614c301" Nov 25 15:57:17 crc kubenswrapper[4806]: I1125 15:57:17.370430 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Nov 25 15:57:17 
crc kubenswrapper[4806]: I1125 15:57:17.485790 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 25 15:57:17 crc kubenswrapper[4806]: I1125 15:57:17.522377 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 25 15:57:17 crc kubenswrapper[4806]: I1125 15:57:17.536629 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 25 15:57:17 crc kubenswrapper[4806]: I1125 15:57:17.570336 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-qx59x" Nov 25 15:57:17 crc kubenswrapper[4806]: I1125 15:57:17.647464 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Nov 25 15:57:17 crc kubenswrapper[4806]: I1125 15:57:17.943109 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-w6686" Nov 25 15:57:17 crc kubenswrapper[4806]: I1125 15:57:17.944003 4806 scope.go:117] "RemoveContainer" containerID="3742c475ebc15a02f48441bd4833229bbd2fd580dce69600e04bf1e60d1f4709" Nov 25 15:57:17 crc kubenswrapper[4806]: E1125 15:57:17.944404 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=cinder-operator-controller-manager-79856dc55c-w6686_openstack-operators(40a580de-1093-4adc-a98c-e18202bee9e3)\"" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-w6686" podUID="40a580de-1093-4adc-a98c-e18202bee9e3" Nov 25 15:57:17 crc kubenswrapper[4806]: I1125 15:57:17.945091 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-qk9m2" Nov 25 15:57:17 crc kubenswrapper[4806]: I1125 15:57:17.945579 4806 scope.go:117] "RemoveContainer" containerID="94228597d270f083de50b77776d8c30f33c95195d80f6b3ad22ce1dd2023f5eb" Nov 25 15:57:17 crc kubenswrapper[4806]: E1125 15:57:17.945834 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=barbican-operator-controller-manager-86dc4d89c8-qk9m2_openstack-operators(537dc134-0732-4dfc-b0be-9c16d3d191be)\"" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-qk9m2" podUID="537dc134-0732-4dfc-b0be-9c16d3d191be" Nov 25 15:57:17 crc kubenswrapper[4806]: I1125 15:57:17.955834 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 25 15:57:17 crc kubenswrapper[4806]: I1125 15:57:17.965451 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-wfsxk" Nov 25 15:57:17 crc kubenswrapper[4806]: I1125 15:57:17.966344 4806 scope.go:117] "RemoveContainer" containerID="a7270c81b5343dbdd2ead73eb2d56f83232536271072a95208b9300eadfe6b26" Nov 25 15:57:17 crc kubenswrapper[4806]: E1125 15:57:17.966629 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed 
container=manager pod=designate-operator-controller-manager-7d695c9b56-wfsxk_openstack-operators(de253966-f7ff-485f-8108-b8ee0fd795bf)\"" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-wfsxk" podUID="de253966-f7ff-485f-8108-b8ee0fd795bf" Nov 25 15:57:17 crc kubenswrapper[4806]: I1125 15:57:17.998229 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.063559 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.080569 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-r8dnj" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.081389 4806 scope.go:117] "RemoveContainer" containerID="5068f7ea323d2eb3e41af7fb9981aacbc59e3750b5ab70fd051f5a0c7a02ed40" Nov 25 15:57:18 crc kubenswrapper[4806]: E1125 15:57:18.081672 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=glance-operator-controller-manager-68b95954c9-r8dnj_openstack-operators(fbf78fa8-8b88-454e-a7dc-0e75f463bc45)\"" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-r8dnj" podUID="fbf78fa8-8b88-454e-a7dc-0e75f463bc45" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.105169 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.143610 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.155887 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.162394 4806 scope.go:117] "RemoveContainer" containerID="709e33fd89647016ae3b26ded0666c8ac5171b08b8ba93b79e4d63126b281706" Nov 25 15:57:18 crc kubenswrapper[4806]: E1125 15:57:18.162620 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-state-metrics pod=kube-state-metrics-0_openstack(9c050b95-eb84-4171-a52c-ee1e4614c301)\"" pod="openstack/kube-state-metrics-0" podUID="9c050b95-eb84-4171-a52c-ee1e4614c301" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.164199 4806 generic.go:334] "Generic (PLEG): container finished" podID="55283d70-ea30-4f51-8583-6d1adc92cfcb" containerID="33b945f9bd82c80b96ff33763e1c7a4f84a186f6b6be3f7f0dd016e16773b89f" exitCode=1 Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.164227 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-769f4c6fc-r7k57" event={"ID":"55283d70-ea30-4f51-8583-6d1adc92cfcb","Type":"ContainerDied","Data":"33b945f9bd82c80b96ff33763e1c7a4f84a186f6b6be3f7f0dd016e16773b89f"} Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.164251 4806 scope.go:117] "RemoveContainer" containerID="f0496ed5afb902b2ce05d99889f62b33b20df43a83471fddf4e019c1461cfdb9" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.164610 4806 scope.go:117] "RemoveContainer" 
containerID="33b945f9bd82c80b96ff33763e1c7a4f84a186f6b6be3f7f0dd016e16773b89f" Nov 25 15:57:18 crc kubenswrapper[4806]: E1125 15:57:18.164814 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=metallb-operator-controller-manager-769f4c6fc-r7k57_metallb-system(55283d70-ea30-4f51-8583-6d1adc92cfcb)\"" pod="metallb-system/metallb-operator-controller-manager-769f4c6fc-r7k57" podUID="55283d70-ea30-4f51-8583-6d1adc92cfcb" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.174294 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.213608 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.247994 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-q6z52" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.248890 4806 scope.go:117] "RemoveContainer" containerID="8a64d11a5b2d6a0ca8cc5e8c3736b13970f7e38cc4a3e25ed8fd70be2e5b4528" Nov 25 15:57:18 crc kubenswrapper[4806]: E1125 15:57:18.249363 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=ironic-operator-controller-manager-5bfcdc958c-q6z52_openstack-operators(ec8a3bcc-2127-44bc-8f89-db3ece24a9b9)\"" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-q6z52" podUID="ec8a3bcc-2127-44bc-8f89-db3ece24a9b9" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.264266 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-w5r5m" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.264336 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-w5r5m" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.265056 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-mjvjq" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.265174 4806 scope.go:117] "RemoveContainer" containerID="e201866bdb5574817e7779f284d2af66d967e6ac4db12ded3382816e94a59990" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.377455 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/heat-operator-controller-manager-774b86978c-jcrbm" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.383294 4806 scope.go:117] "RemoveContainer" containerID="d5c4a4936135aeb5fab162aa80bb1cd85f57a83218a1d7192a2e5e62b980aff0" Nov 25 15:57:18 crc kubenswrapper[4806]: E1125 15:57:18.383618 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=heat-operator-controller-manager-774b86978c-jcrbm_openstack-operators(8294cfe0-6c14-49bc-bd5b-d614a68893ce)\"" pod="openstack-operators/heat-operator-controller-manager-774b86978c-jcrbm" podUID="8294cfe0-6c14-49bc-bd5b-d614a68893ce" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.390562 4806 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-h9qg8" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.390694 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.391301 4806 scope.go:117] "RemoveContainer" containerID="d68b47941f6bbd54640d8dfae0bef09051cb12cfe04ddaf0c35e112599252f9f" Nov 25 15:57:18 crc kubenswrapper[4806]: E1125 15:57:18.391567 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=horizon-operator-controller-manager-68c9694994-h9qg8_openstack-operators(461ceb26-b86c-4bb8-9550-131351dfa3e5)\"" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-h9qg8" podUID="461ceb26-b86c-4bb8-9550-131351dfa3e5" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.445380 4806 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.460667 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.571961 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-bwwh4" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.573005 4806 scope.go:117] "RemoveContainer" containerID="de848d22362879624289f9ecee22fcc0b2cb858214ed26668955fd7be3bc2e4d" Nov 25 15:57:18 crc kubenswrapper[4806]: E1125 15:57:18.573458 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=manila-operator-controller-manager-58bb8d67cc-bwwh4_openstack-operators(9cc0ebc5-e3d4-4bae-8b33-032d950705ff)\"" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-bwwh4" podUID="9cc0ebc5-e3d4-4bae-8b33-032d950705ff" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.615937 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-9thxp" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.617167 4806 scope.go:117] "RemoveContainer" containerID="7e97e1d8a07bf7be533911c49b4fec4cfcd4c18c9108c8cf5d2cae7baf9a4ee6" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.640369 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c5xhr" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.641237 4806 scope.go:117] "RemoveContainer" containerID="09aebda378fd6b0ae2f963e4701ba32e50017dde50d7ed6cff7151dd7deff37d" Nov 25 15:57:18 crc kubenswrapper[4806]: E1125 15:57:18.641568 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=neutron-operator-controller-manager-7c57c8bbc4-c5xhr_openstack-operators(d2f4f05a-5ae5-4f49-87f2-a1e642ee0ac7)\"" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c5xhr" podUID="d2f4f05a-5ae5-4f49-87f2-a1e642ee0ac7" Nov 25 15:57:18 crc 
kubenswrapper[4806]: I1125 15:57:18.654564 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tzsbk" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.655760 4806 scope.go:117] "RemoveContainer" containerID="47ce2dbf6b9fad4dbf4a26373eb1de3e2b150b92b3b8a6d28468cccaa5b03d7b" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.671544 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-wfhhn" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.672664 4806 scope.go:117] "RemoveContainer" containerID="8fef9d6a1cd1a8b83d21f8b18544ad6d89480ecf4bb608db94fbc1369a5cdb56" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.692630 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pxx5w" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.693544 4806 scope.go:117] "RemoveContainer" containerID="2b783fc6a83fa6d426891cb44501d7686eb0660d8f45f02a83e1048e7a280f7a" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.702454 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-cqwgq" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.702859 4806 scope.go:117] "RemoveContainer" containerID="2f385426d22149016a8a8f0eb60e7ec1d7a446aa945f4d341154983de4cf6df1" Nov 25 15:57:18 crc kubenswrapper[4806]: E1125 15:57:18.703065 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=octavia-operator-controller-manager-fd75fd47d-cqwgq_openstack-operators(2a080dd6-0904-4756-8b02-39d10465fea2)\"" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-cqwgq" podUID="2a080dd6-0904-4756-8b02-39d10465fea2" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.758483 4806 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.766941 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-fxzwv" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.767731 4806 scope.go:117] "RemoveContainer" containerID="0762b4028bf70ccb9304d2fd00a97a0b41ff1469ec2e50b95125a3b74a4bbe98" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.770491 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.777511 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/telemetry-operator-controller-manager-687f46fc78-xdmx6" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.778240 4806 scope.go:117] "RemoveContainer" containerID="ee4d08abab1052444c8e7eb2608e0079e952644420ef096d14150cd6b35ec357" Nov 25 15:57:18 crc kubenswrapper[4806]: E1125 15:57:18.778672 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager 
pod=telemetry-operator-controller-manager-687f46fc78-xdmx6_openstack-operators(dbedcc0b-12de-4497-a9f3-a9df6c88a74f)\"" pod="openstack-operators/telemetry-operator-controller-manager-687f46fc78-xdmx6" podUID="dbedcc0b-12de-4497-a9f3-a9df6c88a74f" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.803385 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.819871 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5cb74df96-wnx44" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.820392 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.824622 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.834167 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-6xrjb" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.859006 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/watcher-operator-controller-manager-864885998-b7g79" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.860130 4806 scope.go:117] "RemoveContainer" containerID="a66a2e8628b3b3e71a98d587732c39b82d714af6cf9a05c19630a45b9be4b894" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.898184 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.908616 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.935264 4806 patch_prober.go:28] interesting pod/machine-config-daemon-kclf8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.935342 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.935393 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.936157 4806 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"05b6ee2a51d7372338008820486d422e9a505c74a3f4cee7ce748e653b9075de"} pod="openshift-machine-config-operator/machine-config-daemon-kclf8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.936210 4806 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" containerID="cri-o://05b6ee2a51d7372338008820486d422e9a505c74a3f4cee7ce748e653b9075de" gracePeriod=600 Nov 25 15:57:18 crc kubenswrapper[4806]: I1125 15:57:18.966341 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 25 15:57:19 crc kubenswrapper[4806]: I1125 15:57:19.055907 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Nov 25 15:57:19 crc kubenswrapper[4806]: I1125 15:57:19.108794 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Nov 25 15:57:19 crc kubenswrapper[4806]: I1125 15:57:19.190413 4806 generic.go:334] "Generic (PLEG): container finished" podID="61457634-dc4d-4ad9-9bdc-c95aae5df022" containerID="ed3192fe8dae586b4225b175147205d19a5fc67eaf6b7c7c195445f6ea2359b7" exitCode=1 Nov 25 15:57:19 crc kubenswrapper[4806]: I1125 15:57:19.190491 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-w5r5m" event={"ID":"61457634-dc4d-4ad9-9bdc-c95aae5df022","Type":"ContainerDied","Data":"ed3192fe8dae586b4225b175147205d19a5fc67eaf6b7c7c195445f6ea2359b7"} Nov 25 15:57:19 crc kubenswrapper[4806]: I1125 15:57:19.190533 4806 scope.go:117] "RemoveContainer" containerID="e201866bdb5574817e7779f284d2af66d967e6ac4db12ded3382816e94a59990" Nov 25 15:57:19 crc kubenswrapper[4806]: I1125 15:57:19.191271 4806 scope.go:117] "RemoveContainer" containerID="ed3192fe8dae586b4225b175147205d19a5fc67eaf6b7c7c195445f6ea2359b7" Nov 25 15:57:19 crc kubenswrapper[4806]: E1125 15:57:19.191686 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=keystone-operator-controller-manager-748dc6576f-w5r5m_openstack-operators(61457634-dc4d-4ad9-9bdc-c95aae5df022)\"" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-w5r5m" podUID="61457634-dc4d-4ad9-9bdc-c95aae5df022" Nov 25 15:57:19 crc kubenswrapper[4806]: I1125 15:57:19.195962 4806 generic.go:334] "Generic (PLEG): container finished" podID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerID="05b6ee2a51d7372338008820486d422e9a505c74a3f4cee7ce748e653b9075de" exitCode=0 Nov 25 15:57:19 crc kubenswrapper[4806]: I1125 15:57:19.196065 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" event={"ID":"39baff20-1e9a-48b1-8872-155c5ad5931d","Type":"ContainerDied","Data":"05b6ee2a51d7372338008820486d422e9a505c74a3f4cee7ce748e653b9075de"} Nov 25 15:57:19 crc kubenswrapper[4806]: I1125 15:57:19.201110 4806 generic.go:334] "Generic (PLEG): container finished" podID="c1159ae9-b734-4012-b746-35d037ee4817" containerID="b8a5cb5a7384bd7de4b4cc412c81a4c19158208223718a909cb442eacff59e33" exitCode=1 Nov 25 15:57:19 crc kubenswrapper[4806]: I1125 15:57:19.201196 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-9thxp" event={"ID":"c1159ae9-b734-4012-b746-35d037ee4817","Type":"ContainerDied","Data":"b8a5cb5a7384bd7de4b4cc412c81a4c19158208223718a909cb442eacff59e33"} Nov 25 15:57:19 crc kubenswrapper[4806]: I1125 15:57:19.201911 4806 
scope.go:117] "RemoveContainer" containerID="b8a5cb5a7384bd7de4b4cc412c81a4c19158208223718a909cb442eacff59e33" Nov 25 15:57:19 crc kubenswrapper[4806]: E1125 15:57:19.202163 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=mariadb-operator-controller-manager-cb6c4fdb7-9thxp_openstack-operators(c1159ae9-b734-4012-b746-35d037ee4817)\"" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-9thxp" podUID="c1159ae9-b734-4012-b746-35d037ee4817" Nov 25 15:57:19 crc kubenswrapper[4806]: I1125 15:57:19.217662 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 25 15:57:19 crc kubenswrapper[4806]: I1125 15:57:19.261397 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 25 15:57:19 crc kubenswrapper[4806]: I1125 15:57:19.315627 4806 scope.go:117] "RemoveContainer" containerID="879ed2685760d893a00db6f9136d22093b915cafa45b3789e7c9724bba0ce08e" Nov 25 15:57:19 crc kubenswrapper[4806]: I1125 15:57:19.343812 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-zkvbv" Nov 25 15:57:19 crc kubenswrapper[4806]: I1125 15:57:19.348637 4806 scope.go:117] "RemoveContainer" containerID="7e97e1d8a07bf7be533911c49b4fec4cfcd4c18c9108c8cf5d2cae7baf9a4ee6" Nov 25 15:57:19 crc kubenswrapper[4806]: I1125 15:57:19.433434 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 25 15:57:19 crc kubenswrapper[4806]: E1125 15:57:19.567793 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:57:19 crc kubenswrapper[4806]: I1125 15:57:19.614287 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-cvks2" Nov 25 15:57:19 crc kubenswrapper[4806]: I1125 15:57:19.662046 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 25 15:57:19 crc kubenswrapper[4806]: I1125 15:57:19.694433 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 25 15:57:19 crc kubenswrapper[4806]: I1125 15:57:19.777127 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-rz9fn" Nov 25 15:57:19 crc kubenswrapper[4806]: I1125 15:57:19.921998 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-xlzgr" Nov 25 15:57:19 crc kubenswrapper[4806]: I1125 15:57:19.922905 4806 scope.go:117] "RemoveContainer" containerID="fed496ee542004e7022f18e409a386bbec2d6b6e5c766055066243555557b699" Nov 25 15:57:19 crc kubenswrapper[4806]: I1125 15:57:19.971276 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 25 15:57:19 crc kubenswrapper[4806]: I1125 15:57:19.976841 4806 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 25 15:57:20 crc kubenswrapper[4806]: I1125 15:57:20.145126 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 25 15:57:20 crc kubenswrapper[4806]: I1125 15:57:20.213624 4806 generic.go:334] "Generic (PLEG): container finished" podID="9dc1bbe2-49c1-4601-9acf-b1887426fdd0" containerID="7bfef2d3a9b2307db35876e528eb5fcea5b7cfe83531bdfb4dc3191a571884f0" exitCode=1 Nov 25 15:57:20 crc kubenswrapper[4806]: I1125 15:57:20.214151 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tzsbk" event={"ID":"9dc1bbe2-49c1-4601-9acf-b1887426fdd0","Type":"ContainerDied","Data":"7bfef2d3a9b2307db35876e528eb5fcea5b7cfe83531bdfb4dc3191a571884f0"} Nov 25 15:57:20 crc kubenswrapper[4806]: I1125 15:57:20.214252 4806 scope.go:117] "RemoveContainer" containerID="47ce2dbf6b9fad4dbf4a26373eb1de3e2b150b92b3b8a6d28468cccaa5b03d7b" Nov 25 15:57:20 crc kubenswrapper[4806]: I1125 15:57:20.215533 4806 scope.go:117] "RemoveContainer" containerID="7bfef2d3a9b2307db35876e528eb5fcea5b7cfe83531bdfb4dc3191a571884f0" Nov 25 15:57:20 crc kubenswrapper[4806]: E1125 15:57:20.215919 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=ovn-operator-controller-manager-66cf5c67ff-tzsbk_openstack-operators(9dc1bbe2-49c1-4601-9acf-b1887426fdd0)\"" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tzsbk" podUID="9dc1bbe2-49c1-4601-9acf-b1887426fdd0" Nov 25 15:57:20 crc kubenswrapper[4806]: I1125 15:57:20.217074 4806 generic.go:334] "Generic (PLEG): container finished" podID="023302d1-a345-4f55-9ac1-4a2b674e36aa" containerID="e054749440016b0130fa13fa97a9513b41218942d1d35c7fa5b402139316ef7b" exitCode=1 Nov 25 15:57:20 crc kubenswrapper[4806]: I1125 15:57:20.217212 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-b7g79" event={"ID":"023302d1-a345-4f55-9ac1-4a2b674e36aa","Type":"ContainerDied","Data":"e054749440016b0130fa13fa97a9513b41218942d1d35c7fa5b402139316ef7b"} Nov 25 15:57:20 crc kubenswrapper[4806]: I1125 15:57:20.218402 4806 scope.go:117] "RemoveContainer" containerID="e054749440016b0130fa13fa97a9513b41218942d1d35c7fa5b402139316ef7b" Nov 25 15:57:20 crc kubenswrapper[4806]: E1125 15:57:20.218805 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=watcher-operator-controller-manager-864885998-b7g79_openstack-operators(023302d1-a345-4f55-9ac1-4a2b674e36aa)\"" pod="openstack-operators/watcher-operator-controller-manager-864885998-b7g79" podUID="023302d1-a345-4f55-9ac1-4a2b674e36aa" Nov 25 15:57:20 crc kubenswrapper[4806]: I1125 15:57:20.220130 4806 generic.go:334] "Generic (PLEG): container finished" podID="63efe3dc-03df-4494-9661-9a23a89c0974" containerID="40d27c0276ee0546fc9e2d8a81ad157af601fb3113c6a43ad5c7099cfcb507d6" exitCode=1 Nov 25 15:57:20 crc kubenswrapper[4806]: I1125 15:57:20.220272 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-wfhhn" 
event={"ID":"63efe3dc-03df-4494-9661-9a23a89c0974","Type":"ContainerDied","Data":"40d27c0276ee0546fc9e2d8a81ad157af601fb3113c6a43ad5c7099cfcb507d6"} Nov 25 15:57:20 crc kubenswrapper[4806]: I1125 15:57:20.220772 4806 scope.go:117] "RemoveContainer" containerID="40d27c0276ee0546fc9e2d8a81ad157af601fb3113c6a43ad5c7099cfcb507d6" Nov 25 15:57:20 crc kubenswrapper[4806]: E1125 15:57:20.221113 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=nova-operator-controller-manager-79556f57fc-wfhhn_openstack-operators(63efe3dc-03df-4494-9661-9a23a89c0974)\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-wfhhn" podUID="63efe3dc-03df-4494-9661-9a23a89c0974" Nov 25 15:57:20 crc kubenswrapper[4806]: I1125 15:57:20.223629 4806 scope.go:117] "RemoveContainer" containerID="05b6ee2a51d7372338008820486d422e9a505c74a3f4cee7ce748e653b9075de" Nov 25 15:57:20 crc kubenswrapper[4806]: E1125 15:57:20.224012 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:57:20 crc kubenswrapper[4806]: I1125 15:57:20.232006 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-xlzgr" event={"ID":"e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329","Type":"ContainerStarted","Data":"dcc4fb86ad4fec7ffef987e0fed0a10219b36cb0845cca1697c877bad435209d"} Nov 25 15:57:20 crc kubenswrapper[4806]: I1125 15:57:20.236058 4806 generic.go:334] "Generic (PLEG): container finished" podID="1df7970b-bed8-4e27-b04b-66e513683875" containerID="f193385bc8a4e67262485f2ee1db74c473e18b8f7008bda48f8817b0e2277403" exitCode=1 Nov 25 15:57:20 crc kubenswrapper[4806]: I1125 15:57:20.236266 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pxx5w" event={"ID":"1df7970b-bed8-4e27-b04b-66e513683875","Type":"ContainerDied","Data":"f193385bc8a4e67262485f2ee1db74c473e18b8f7008bda48f8817b0e2277403"} Nov 25 15:57:20 crc kubenswrapper[4806]: I1125 15:57:20.237111 4806 scope.go:117] "RemoveContainer" containerID="f193385bc8a4e67262485f2ee1db74c473e18b8f7008bda48f8817b0e2277403" Nov 25 15:57:20 crc kubenswrapper[4806]: E1125 15:57:20.237554 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=swift-operator-controller-manager-6fdc4fcf86-pxx5w_openstack-operators(1df7970b-bed8-4e27-b04b-66e513683875)\"" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pxx5w" podUID="1df7970b-bed8-4e27-b04b-66e513683875" Nov 25 15:57:20 crc kubenswrapper[4806]: I1125 15:57:20.248909 4806 generic.go:334] "Generic (PLEG): container finished" podID="24cfe3fd-9b1a-4b9a-9b99-1b089fa2124b" containerID="b40c6f0cfa00c9968f7e69e6b1142e1074ed645b77bd580961742b505609ab3b" exitCode=1 Nov 25 15:57:20 crc kubenswrapper[4806]: I1125 15:57:20.248965 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-fxzwv" 
event={"ID":"24cfe3fd-9b1a-4b9a-9b99-1b089fa2124b","Type":"ContainerDied","Data":"b40c6f0cfa00c9968f7e69e6b1142e1074ed645b77bd580961742b505609ab3b"} Nov 25 15:57:20 crc kubenswrapper[4806]: I1125 15:57:20.249847 4806 scope.go:117] "RemoveContainer" containerID="b40c6f0cfa00c9968f7e69e6b1142e1074ed645b77bd580961742b505609ab3b" Nov 25 15:57:20 crc kubenswrapper[4806]: E1125 15:57:20.250198 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=placement-operator-controller-manager-5db546f9d9-fxzwv_openstack-operators(24cfe3fd-9b1a-4b9a-9b99-1b089fa2124b)\"" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-fxzwv" podUID="24cfe3fd-9b1a-4b9a-9b99-1b089fa2124b" Nov 25 15:57:20 crc kubenswrapper[4806]: I1125 15:57:20.298192 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 25 15:57:20 crc kubenswrapper[4806]: I1125 15:57:20.305184 4806 scope.go:117] "RemoveContainer" containerID="a66a2e8628b3b3e71a98d587732c39b82d714af6cf9a05c19630a45b9be4b894" Nov 25 15:57:20 crc kubenswrapper[4806]: I1125 15:57:20.305217 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 25 15:57:20 crc kubenswrapper[4806]: I1125 15:57:20.327774 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Nov 25 15:57:20 crc kubenswrapper[4806]: I1125 15:57:20.341435 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 25 15:57:20 crc kubenswrapper[4806]: I1125 15:57:20.392908 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 25 15:57:20 crc kubenswrapper[4806]: I1125 15:57:20.397971 4806 scope.go:117] "RemoveContainer" containerID="8fef9d6a1cd1a8b83d21f8b18544ad6d89480ecf4bb608db94fbc1369a5cdb56" Nov 25 15:57:20 crc kubenswrapper[4806]: I1125 15:57:20.437660 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 25 15:57:20 crc kubenswrapper[4806]: I1125 15:57:20.523228 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 25 15:57:20 crc kubenswrapper[4806]: I1125 15:57:20.532602 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 25 15:57:20 crc kubenswrapper[4806]: I1125 15:57:20.538502 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 25 15:57:20 crc kubenswrapper[4806]: I1125 15:57:20.590650 4806 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-zj8g8" Nov 25 15:57:20 crc kubenswrapper[4806]: I1125 15:57:20.599276 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-g2qnn" Nov 25 15:57:20 crc kubenswrapper[4806]: I1125 15:57:20.599949 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 25 15:57:20 crc kubenswrapper[4806]: I1125 15:57:20.614112 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 25 15:57:20 crc kubenswrapper[4806]: I1125 15:57:20.635255 4806 
scope.go:117] "RemoveContainer" containerID="2b783fc6a83fa6d426891cb44501d7686eb0660d8f45f02a83e1048e7a280f7a" Nov 25 15:57:20 crc kubenswrapper[4806]: I1125 15:57:20.697010 4806 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 25 15:57:20 crc kubenswrapper[4806]: I1125 15:57:20.710244 4806 scope.go:117] "RemoveContainer" containerID="0762b4028bf70ccb9304d2fd00a97a0b41ff1469ec2e50b95125a3b74a4bbe98" Nov 25 15:57:20 crc kubenswrapper[4806]: I1125 15:57:20.768065 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 25 15:57:20 crc kubenswrapper[4806]: I1125 15:57:20.812218 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 25 15:57:20 crc kubenswrapper[4806]: I1125 15:57:20.812277 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 25 15:57:20 crc kubenswrapper[4806]: I1125 15:57:20.822282 4806 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 25 15:57:20 crc kubenswrapper[4806]: I1125 15:57:20.844134 4806 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 25 15:57:20 crc kubenswrapper[4806]: I1125 15:57:20.908498 4806 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 25 15:57:20 crc kubenswrapper[4806]: I1125 15:57:20.963036 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 25 15:57:20 crc kubenswrapper[4806]: I1125 15:57:20.979512 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 25 15:57:21 crc kubenswrapper[4806]: I1125 15:57:21.011496 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 25 15:57:21 crc kubenswrapper[4806]: I1125 15:57:21.012116 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 25 15:57:21 crc kubenswrapper[4806]: I1125 15:57:21.069434 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-hwn8l" Nov 25 15:57:21 crc kubenswrapper[4806]: I1125 15:57:21.073820 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 25 15:57:21 crc kubenswrapper[4806]: I1125 15:57:21.106934 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 25 15:57:21 crc kubenswrapper[4806]: I1125 15:57:21.135852 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 25 15:57:21 crc kubenswrapper[4806]: I1125 15:57:21.154052 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-dockercfg-x5qq4" Nov 25 15:57:21 crc kubenswrapper[4806]: I1125 15:57:21.203498 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 25 15:57:21 crc kubenswrapper[4806]: I1125 15:57:21.221025 4806 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openstack"/"openstack-scripts" Nov 25 15:57:21 crc kubenswrapper[4806]: I1125 15:57:21.269887 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 25 15:57:21 crc kubenswrapper[4806]: I1125 15:57:21.275018 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 25 15:57:21 crc kubenswrapper[4806]: I1125 15:57:21.277243 4806 scope.go:117] "RemoveContainer" containerID="f193385bc8a4e67262485f2ee1db74c473e18b8f7008bda48f8817b0e2277403" Nov 25 15:57:21 crc kubenswrapper[4806]: E1125 15:57:21.277649 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=swift-operator-controller-manager-6fdc4fcf86-pxx5w_openstack-operators(1df7970b-bed8-4e27-b04b-66e513683875)\"" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pxx5w" podUID="1df7970b-bed8-4e27-b04b-66e513683875" Nov 25 15:57:21 crc kubenswrapper[4806]: I1125 15:57:21.288605 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 25 15:57:21 crc kubenswrapper[4806]: I1125 15:57:21.289051 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 25 15:57:21 crc kubenswrapper[4806]: I1125 15:57:21.291962 4806 generic.go:334] "Generic (PLEG): container finished" podID="e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329" containerID="dcc4fb86ad4fec7ffef987e0fed0a10219b36cb0845cca1697c877bad435209d" exitCode=1 Nov 25 15:57:21 crc kubenswrapper[4806]: I1125 15:57:21.292044 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-xlzgr" event={"ID":"e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329","Type":"ContainerDied","Data":"dcc4fb86ad4fec7ffef987e0fed0a10219b36cb0845cca1697c877bad435209d"} Nov 25 15:57:21 crc kubenswrapper[4806]: I1125 15:57:21.292095 4806 scope.go:117] "RemoveContainer" containerID="fed496ee542004e7022f18e409a386bbec2d6b6e5c766055066243555557b699" Nov 25 15:57:21 crc kubenswrapper[4806]: I1125 15:57:21.292733 4806 scope.go:117] "RemoveContainer" containerID="dcc4fb86ad4fec7ffef987e0fed0a10219b36cb0845cca1697c877bad435209d" Nov 25 15:57:21 crc kubenswrapper[4806]: E1125 15:57:21.293140 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=infra-operator-controller-manager-d5cc86f4b-xlzgr_openstack-operators(e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329)\"" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-xlzgr" podUID="e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329" Nov 25 15:57:21 crc kubenswrapper[4806]: I1125 15:57:21.314880 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 25 15:57:21 crc kubenswrapper[4806]: I1125 15:57:21.474464 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 25 15:57:21 crc kubenswrapper[4806]: I1125 15:57:21.494380 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 25 15:57:21 crc kubenswrapper[4806]: I1125 15:57:21.550694 4806 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"cloudkitty-lokistack-distributor-http" Nov 25 15:57:21 crc kubenswrapper[4806]: I1125 15:57:21.565351 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 25 15:57:21 crc kubenswrapper[4806]: I1125 15:57:21.571934 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Nov 25 15:57:21 crc kubenswrapper[4806]: I1125 15:57:21.627690 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 25 15:57:21 crc kubenswrapper[4806]: I1125 15:57:21.628187 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 25 15:57:21 crc kubenswrapper[4806]: I1125 15:57:21.633851 4806 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 25 15:57:21 crc kubenswrapper[4806]: I1125 15:57:21.652779 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 25 15:57:21 crc kubenswrapper[4806]: I1125 15:57:21.663415 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-fsfg7" Nov 25 15:57:21 crc kubenswrapper[4806]: I1125 15:57:21.678648 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 25 15:57:21 crc kubenswrapper[4806]: I1125 15:57:21.731779 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 25 15:57:21 crc kubenswrapper[4806]: I1125 15:57:21.738757 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 25 15:57:21 crc kubenswrapper[4806]: I1125 15:57:21.750460 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Nov 25 15:57:21 crc kubenswrapper[4806]: I1125 15:57:21.782863 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 25 15:57:21 crc kubenswrapper[4806]: I1125 15:57:21.845155 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 25 15:57:21 crc kubenswrapper[4806]: I1125 15:57:21.901078 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 25 15:57:21 crc kubenswrapper[4806]: I1125 15:57:21.920021 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 25 15:57:21 crc kubenswrapper[4806]: I1125 15:57:21.964541 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-loki-s3" Nov 25 15:57:21 crc kubenswrapper[4806]: I1125 15:57:21.991060 4806 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 25 15:57:22 crc kubenswrapper[4806]: I1125 15:57:22.028153 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-snqq2" Nov 25 15:57:22 crc kubenswrapper[4806]: I1125 15:57:22.099900 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-ljrhl" Nov 25 
Nov 25 15:57:22 crc kubenswrapper[4806]: I1125 15:57:22.116084 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Nov 25 15:57:22 crc kubenswrapper[4806]: I1125 15:57:22.119153 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Nov 25 15:57:22 crc kubenswrapper[4806]: I1125 15:57:22.175660 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Nov 25 15:57:22 crc kubenswrapper[4806]: I1125 15:57:22.187587 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/kube-state-metrics-0"
Nov 25 15:57:22 crc kubenswrapper[4806]: I1125 15:57:22.187644 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Nov 25 15:57:22 crc kubenswrapper[4806]: I1125 15:57:22.188388 4806 scope.go:117] "RemoveContainer" containerID="709e33fd89647016ae3b26ded0666c8ac5171b08b8ba93b79e4d63126b281706"
Nov 25 15:57:22 crc kubenswrapper[4806]: E1125 15:57:22.188632 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-state-metrics pod=kube-state-metrics-0_openstack(9c050b95-eb84-4171-a52c-ee1e4614c301)\"" pod="openstack/kube-state-metrics-0" podUID="9c050b95-eb84-4171-a52c-ee1e4614c301"
Nov 25 15:57:22 crc kubenswrapper[4806]: I1125 15:57:22.198084 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Nov 25 15:57:22 crc kubenswrapper[4806]: I1125 15:57:22.217696 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Nov 25 15:57:22 crc kubenswrapper[4806]: I1125 15:57:22.235347 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-d7m7g"
Nov 25 15:57:22 crc kubenswrapper[4806]: I1125 15:57:22.246106 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-z2r2q"/"openshift-service-ca.crt"
Nov 25 15:57:22 crc kubenswrapper[4806]: I1125 15:57:22.254559 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Nov 25 15:57:22 crc kubenswrapper[4806]: I1125 15:57:22.268051 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-dockercfg-sblq5"
Nov 25 15:57:22 crc kubenswrapper[4806]: I1125 15:57:22.308291 4806 scope.go:117] "RemoveContainer" containerID="dcc4fb86ad4fec7ffef987e0fed0a10219b36cb0845cca1697c877bad435209d"
Nov 25 15:57:22 crc kubenswrapper[4806]: E1125 15:57:22.308564 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=infra-operator-controller-manager-d5cc86f4b-xlzgr_openstack-operators(e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329)\"" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-xlzgr" podUID="e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329"
Nov 25 15:57:22 crc kubenswrapper[4806]: I1125 15:57:22.325293 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Nov 25 15:57:22 crc kubenswrapper[4806]: I1125 15:57:22.339931 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-769f4c6fc-r7k57"
Nov 25 15:57:22 crc kubenswrapper[4806]: I1125 15:57:22.340817 4806 scope.go:117] "RemoveContainer" containerID="33b945f9bd82c80b96ff33763e1c7a4f84a186f6b6be3f7f0dd016e16773b89f"
Nov 25 15:57:22 crc kubenswrapper[4806]: E1125 15:57:22.341108 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=metallb-operator-controller-manager-769f4c6fc-r7k57_metallb-system(55283d70-ea30-4f51-8583-6d1adc92cfcb)\"" pod="metallb-system/metallb-operator-controller-manager-769f4c6fc-r7k57" podUID="55283d70-ea30-4f51-8583-6d1adc92cfcb"
Nov 25 15:57:22 crc kubenswrapper[4806]: I1125 15:57:22.344031 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Nov 25 15:57:22 crc kubenswrapper[4806]: I1125 15:57:22.357364 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Nov 25 15:57:22 crc kubenswrapper[4806]: I1125 15:57:22.360590 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Nov 25 15:57:22 crc kubenswrapper[4806]: I1125 15:57:22.403432 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Nov 25 15:57:22 crc kubenswrapper[4806]: I1125 15:57:22.470659 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Nov 25 15:57:22 crc kubenswrapper[4806]: I1125 15:57:22.486503 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Nov 25 15:57:22 crc kubenswrapper[4806]: I1125 15:57:22.509528 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Nov 25 15:57:22 crc kubenswrapper[4806]: I1125 15:57:22.510492 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Nov 25 15:57:22 crc kubenswrapper[4806]: I1125 15:57:22.537659 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Nov 25 15:57:22 crc kubenswrapper[4806]: I1125 15:57:22.538260 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0"
Nov 25 15:57:22 crc kubenswrapper[4806]: I1125 15:57:22.588072 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-fjrfv"
Nov 25 15:57:22 crc kubenswrapper[4806]: I1125 15:57:22.624476 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Nov 25 15:57:22 crc kubenswrapper[4806]: I1125 15:57:22.655940 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Nov 25 15:57:22 crc kubenswrapper[4806]: I1125 15:57:22.668306 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Nov 25 15:57:22 crc kubenswrapper[4806]: I1125 15:57:22.703191 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-controller-manager-7c468db9ff-2r8gr"
Nov 25 15:57:22 crc kubenswrapper[4806]: I1125 15:57:22.704230 4806 scope.go:117] "RemoveContainer" containerID="ded0d9c74e9d3eb143c8fecfdf74a1beb31495c1b8cc25bcdbb8637fb2d4b19f"
Nov 25 15:57:22 crc kubenswrapper[4806]: I1125 15:57:22.712143 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Nov 25 15:57:22 crc kubenswrapper[4806]: I1125 15:57:22.831030 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-grpc"
Nov 25 15:57:22 crc kubenswrapper[4806]: I1125 15:57:22.853851 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Nov 25 15:57:22 crc kubenswrapper[4806]: I1125 15:57:22.861247 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Nov 25 15:57:22 crc kubenswrapper[4806]: I1125 15:57:22.900927 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Nov 25 15:57:22 crc kubenswrapper[4806]: I1125 15:57:22.913555 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Nov 25 15:57:22 crc kubenswrapper[4806]: I1125 15:57:22.918832 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Nov 25 15:57:22 crc kubenswrapper[4806]: I1125 15:57:22.942165 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Nov 25 15:57:22 crc kubenswrapper[4806]: I1125 15:57:22.981122 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Nov 25 15:57:22 crc kubenswrapper[4806]: I1125 15:57:22.989511 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Nov 25 15:57:23 crc kubenswrapper[4806]: I1125 15:57:23.000090 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Nov 25 15:57:23 crc kubenswrapper[4806]: I1125 15:57:23.009493 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Nov 25 15:57:23 crc kubenswrapper[4806]: I1125 15:57:23.016142 4806 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Nov 25 15:57:23 crc kubenswrapper[4806]: I1125 15:57:23.033128 4806 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Nov 25 15:57:23 crc kubenswrapper[4806]: I1125 15:57:23.120885 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Nov 25 15:57:23 crc kubenswrapper[4806]: I1125 15:57:23.128457 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Nov 25 15:57:23 crc kubenswrapper[4806]: I1125 15:57:23.135029 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Nov 25 15:57:23 crc kubenswrapper[4806]: I1125 15:57:23.178526 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Nov 25 15:57:23 crc kubenswrapper[4806]: I1125 15:57:23.191553 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert"
Nov 25 15:57:23 crc kubenswrapper[4806]: I1125 15:57:23.204185 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-q54pm"
Nov 25 15:57:23 crc kubenswrapper[4806]: I1125 15:57:23.219122 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Nov 25 15:57:23 crc kubenswrapper[4806]: I1125 15:57:23.234531 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Nov 25 15:57:23 crc kubenswrapper[4806]: I1125 15:57:23.250257 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Nov 25 15:57:23 crc kubenswrapper[4806]: I1125 15:57:23.286292 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-grpc"
Nov 25 15:57:23 crc kubenswrapper[4806]: I1125 15:57:23.286602 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Nov 25 15:57:23 crc kubenswrapper[4806]: I1125 15:57:23.287489 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-ppbgp"
Nov 25 15:57:23 crc kubenswrapper[4806]: I1125 15:57:23.288888 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Nov 25 15:57:23 crc kubenswrapper[4806]: I1125 15:57:23.320965 4806 generic.go:334] "Generic (PLEG): container finished" podID="b97ff802-8b8f-47d4-bff1-7d6876f780ff" containerID="f55117797073ae1c749ef1582c04bf1b2df6ce5c0ca4d89a2bd2589e08a8a2a6" exitCode=1
Nov 25 15:57:23 crc kubenswrapper[4806]: I1125 15:57:23.321011 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7c468db9ff-2r8gr" event={"ID":"b97ff802-8b8f-47d4-bff1-7d6876f780ff","Type":"ContainerDied","Data":"f55117797073ae1c749ef1582c04bf1b2df6ce5c0ca4d89a2bd2589e08a8a2a6"}
Nov 25 15:57:23 crc kubenswrapper[4806]: I1125 15:57:23.321065 4806 scope.go:117] "RemoveContainer" containerID="ded0d9c74e9d3eb143c8fecfdf74a1beb31495c1b8cc25bcdbb8637fb2d4b19f"
Nov 25 15:57:23 crc kubenswrapper[4806]: I1125 15:57:23.321882 4806 scope.go:117] "RemoveContainer" containerID="f55117797073ae1c749ef1582c04bf1b2df6ce5c0ca4d89a2bd2589e08a8a2a6"
Nov 25 15:57:23 crc kubenswrapper[4806]: E1125 15:57:23.322613 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=openstack-operator-controller-manager-7c468db9ff-2r8gr_openstack-operators(b97ff802-8b8f-47d4-bff1-7d6876f780ff)\"" pod="openstack-operators/openstack-operator-controller-manager-7c468db9ff-2r8gr" podUID="b97ff802-8b8f-47d4-bff1-7d6876f780ff"
Nov 25 15:57:23 crc kubenswrapper[4806]: I1125 15:57:23.330467 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
object-"openshift-dns"/"openshift-service-ca.crt" Nov 25 15:57:23 crc kubenswrapper[4806]: I1125 15:57:23.380757 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 25 15:57:23 crc kubenswrapper[4806]: I1125 15:57:23.398430 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 25 15:57:23 crc kubenswrapper[4806]: I1125 15:57:23.412899 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 25 15:57:23 crc kubenswrapper[4806]: I1125 15:57:23.424131 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 25 15:57:23 crc kubenswrapper[4806]: I1125 15:57:23.447153 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 25 15:57:23 crc kubenswrapper[4806]: I1125 15:57:23.516176 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 25 15:57:23 crc kubenswrapper[4806]: I1125 15:57:23.545701 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 25 15:57:23 crc kubenswrapper[4806]: I1125 15:57:23.550237 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-q4nb7" Nov 25 15:57:23 crc kubenswrapper[4806]: I1125 15:57:23.564995 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 25 15:57:23 crc kubenswrapper[4806]: I1125 15:57:23.589156 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 25 15:57:23 crc kubenswrapper[4806]: I1125 15:57:23.589438 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 25 15:57:23 crc kubenswrapper[4806]: I1125 15:57:23.592583 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 25 15:57:23 crc kubenswrapper[4806]: I1125 15:57:23.615649 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 25 15:57:23 crc kubenswrapper[4806]: I1125 15:57:23.632905 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 25 15:57:23 crc kubenswrapper[4806]: I1125 15:57:23.672851 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 25 15:57:23 crc kubenswrapper[4806]: I1125 15:57:23.678874 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 25 15:57:23 crc kubenswrapper[4806]: I1125 15:57:23.733646 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Nov 25 15:57:23 crc kubenswrapper[4806]: I1125 15:57:23.784460 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 25 15:57:23 crc kubenswrapper[4806]: I1125 15:57:23.799665 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 25 15:57:23 crc 
kubenswrapper[4806]: I1125 15:57:23.817229 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-dcx9r" Nov 25 15:57:23 crc kubenswrapper[4806]: I1125 15:57:23.851909 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 25 15:57:23 crc kubenswrapper[4806]: I1125 15:57:23.908173 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-95vcl" Nov 25 15:57:23 crc kubenswrapper[4806]: I1125 15:57:23.908459 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Nov 25 15:57:23 crc kubenswrapper[4806]: I1125 15:57:23.923713 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-grpc" Nov 25 15:57:23 crc kubenswrapper[4806]: I1125 15:57:23.939182 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 25 15:57:23 crc kubenswrapper[4806]: I1125 15:57:23.951162 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca" Nov 25 15:57:23 crc kubenswrapper[4806]: I1125 15:57:23.978393 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Nov 25 15:57:24 crc kubenswrapper[4806]: I1125 15:57:24.009890 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 25 15:57:24 crc kubenswrapper[4806]: I1125 15:57:24.010371 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 25 15:57:24 crc kubenswrapper[4806]: I1125 15:57:24.022529 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-8vrnm" Nov 25 15:57:24 crc kubenswrapper[4806]: I1125 15:57:24.085177 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 25 15:57:24 crc kubenswrapper[4806]: I1125 15:57:24.147650 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 25 15:57:24 crc kubenswrapper[4806]: I1125 15:57:24.194099 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 25 15:57:24 crc kubenswrapper[4806]: I1125 15:57:24.194755 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 25 15:57:24 crc kubenswrapper[4806]: I1125 15:57:24.203306 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 25 15:57:24 crc kubenswrapper[4806]: I1125 15:57:24.229434 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-hhdgn" Nov 25 15:57:24 crc kubenswrapper[4806]: I1125 15:57:24.278234 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 25 15:57:24 crc kubenswrapper[4806]: I1125 15:57:24.347880 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 25 15:57:24 crc kubenswrapper[4806]: I1125 
Nov 25 15:57:24 crc kubenswrapper[4806]: I1125 15:57:24.402630 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Nov 25 15:57:24 crc kubenswrapper[4806]: I1125 15:57:24.409711 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Nov 25 15:57:24 crc kubenswrapper[4806]: I1125 15:57:24.427585 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-779bfcf6cb-zxvzf"
Nov 25 15:57:24 crc kubenswrapper[4806]: I1125 15:57:24.436114 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Nov 25 15:57:24 crc kubenswrapper[4806]: I1125 15:57:24.458767 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Nov 25 15:57:24 crc kubenswrapper[4806]: I1125 15:57:24.460913 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Nov 25 15:57:24 crc kubenswrapper[4806]: I1125 15:57:24.492497 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Nov 25 15:57:24 crc kubenswrapper[4806]: I1125 15:57:24.521490 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Nov 25 15:57:24 crc kubenswrapper[4806]: I1125 15:57:24.523835 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Nov 25 15:57:24 crc kubenswrapper[4806]: I1125 15:57:24.550677 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Nov 25 15:57:24 crc kubenswrapper[4806]: I1125 15:57:24.555194 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Nov 25 15:57:24 crc kubenswrapper[4806]: I1125 15:57:24.591199 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Nov 25 15:57:24 crc kubenswrapper[4806]: I1125 15:57:24.608406 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-grpc"
Nov 25 15:57:24 crc kubenswrapper[4806]: I1125 15:57:24.630285 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-h78l8"
Nov 25 15:57:24 crc kubenswrapper[4806]: I1125 15:57:24.637253 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Nov 25 15:57:24 crc kubenswrapper[4806]: I1125 15:57:24.673548 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Nov 25 15:57:24 crc kubenswrapper[4806]: I1125 15:57:24.680456 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Nov 25 15:57:24 crc kubenswrapper[4806]: I1125 15:57:24.715068 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Nov 25 15:57:24 crc kubenswrapper[4806]: I1125 15:57:24.732551 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Nov 25 15:57:24 crc kubenswrapper[4806]: I1125 15:57:24.761868 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Nov 25 15:57:24 crc kubenswrapper[4806]: I1125 15:57:24.797481 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Nov 25 15:57:24 crc kubenswrapper[4806]: I1125 15:57:24.797583 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Nov 25 15:57:24 crc kubenswrapper[4806]: I1125 15:57:24.801875 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Nov 25 15:57:24 crc kubenswrapper[4806]: I1125 15:57:24.808585 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-api-config-data"
Nov 25 15:57:24 crc kubenswrapper[4806]: I1125 15:57:24.909253 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Nov 25 15:57:24 crc kubenswrapper[4806]: I1125 15:57:24.945251 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-http"
Nov 25 15:57:25 crc kubenswrapper[4806]: I1125 15:57:25.037210 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Nov 25 15:57:25 crc kubenswrapper[4806]: I1125 15:57:25.052436 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-hflzm"
Nov 25 15:57:25 crc kubenswrapper[4806]: E1125 15:57:25.053243 4806 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-d070ff8a0e078f9372ecb12bac3ec19cc5d72391f9bc0097b42da7a739859c2a\": RecentStats: unable to find data in memory cache]"
Nov 25 15:57:25 crc kubenswrapper[4806]: I1125 15:57:25.063551 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Nov 25 15:57:25 crc kubenswrapper[4806]: I1125 15:57:25.072632 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Nov 25 15:57:25 crc kubenswrapper[4806]: I1125 15:57:25.085446 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Nov 25 15:57:25 crc kubenswrapper[4806]: I1125 15:57:25.089341 4806 scope.go:117] "RemoveContainer" containerID="2d0dc6bee41ecdaf4e2ae149c6becb1ef27f42826af6f68b4281004329c220ba"
Nov 25 15:57:25 crc kubenswrapper[4806]: I1125 15:57:25.118409 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Nov 25 15:57:25 crc kubenswrapper[4806]: I1125 15:57:25.130348 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Nov 25 15:57:25 crc kubenswrapper[4806]: I1125 15:57:25.141969 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 25 15:57:25 crc kubenswrapper[4806]: I1125 15:57:25.166007 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 25 15:57:25 crc kubenswrapper[4806]: I1125 15:57:25.187972 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 25 15:57:25 crc kubenswrapper[4806]: I1125 15:57:25.194622 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 25 15:57:25 crc kubenswrapper[4806]: I1125 15:57:25.246924 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-client-http" Nov 25 15:57:25 crc kubenswrapper[4806]: I1125 15:57:25.247768 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 25 15:57:25 crc kubenswrapper[4806]: I1125 15:57:25.249374 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 25 15:57:25 crc kubenswrapper[4806]: I1125 15:57:25.301936 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-8vvgg" Nov 25 15:57:25 crc kubenswrapper[4806]: I1125 15:57:25.311834 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 25 15:57:25 crc kubenswrapper[4806]: I1125 15:57:25.347775 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 25 15:57:25 crc kubenswrapper[4806]: I1125 15:57:25.349107 4806 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-k6wwk" Nov 25 15:57:25 crc kubenswrapper[4806]: I1125 15:57:25.368689 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 25 15:57:25 crc kubenswrapper[4806]: I1125 15:57:25.402058 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Nov 25 15:57:25 crc kubenswrapper[4806]: I1125 15:57:25.440485 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 25 15:57:25 crc kubenswrapper[4806]: I1125 15:57:25.461107 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 25 15:57:25 crc kubenswrapper[4806]: I1125 15:57:25.467190 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 25 15:57:25 crc kubenswrapper[4806]: I1125 15:57:25.492509 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-z2r2q"/"kube-root-ca.crt" Nov 25 15:57:25 crc kubenswrapper[4806]: I1125 15:57:25.558748 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 25 15:57:25 crc kubenswrapper[4806]: I1125 15:57:25.590672 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-http" Nov 25 15:57:25 crc kubenswrapper[4806]: I1125 15:57:25.590929 4806 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 25 15:57:25 crc kubenswrapper[4806]: I1125 15:57:25.599633 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-44twg" Nov 25 15:57:25 crc kubenswrapper[4806]: I1125 15:57:25.621429 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 25 15:57:25 crc kubenswrapper[4806]: I1125 15:57:25.626879 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 25 15:57:25 crc kubenswrapper[4806]: I1125 15:57:25.642654 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 25 15:57:25 crc kubenswrapper[4806]: I1125 15:57:25.651150 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 25 15:57:25 crc kubenswrapper[4806]: I1125 15:57:25.659025 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-mfsqf" Nov 25 15:57:25 crc kubenswrapper[4806]: I1125 15:57:25.668271 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 25 15:57:25 crc kubenswrapper[4806]: I1125 15:57:25.688030 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 25 15:57:25 crc kubenswrapper[4806]: I1125 15:57:25.697414 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 25 15:57:25 crc kubenswrapper[4806]: I1125 15:57:25.708146 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 25 15:57:25 crc kubenswrapper[4806]: I1125 15:57:25.708203 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 25 15:57:25 crc kubenswrapper[4806]: I1125 15:57:25.713397 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-gateway" Nov 25 15:57:25 crc kubenswrapper[4806]: I1125 15:57:25.747356 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 25 15:57:25 crc kubenswrapper[4806]: I1125 15:57:25.755669 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 25 15:57:25 crc kubenswrapper[4806]: I1125 15:57:25.759501 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 25 15:57:25 crc kubenswrapper[4806]: I1125 15:57:25.801707 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 25 15:57:25 crc kubenswrapper[4806]: I1125 15:57:25.850599 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 25 15:57:25 crc kubenswrapper[4806]: I1125 15:57:25.859571 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 25 15:57:25 crc kubenswrapper[4806]: I1125 15:57:25.887806 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 25 15:57:25 crc kubenswrapper[4806]: 
I1125 15:57:25.923398 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 25 15:57:25 crc kubenswrapper[4806]: I1125 15:57:25.924989 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-5b446d88c5-2nhx4" podUID="95b3b0c2-b552-4f25-803e-f2ae9d53add8" containerName="cert-manager-controller" probeResult="failure" output="Get \"http://10.217.0.44:9403/livez\": dial tcp 10.217.0.44:9403: connect: connection refused" Nov 25 15:57:25 crc kubenswrapper[4806]: I1125 15:57:25.971279 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 25 15:57:26 crc kubenswrapper[4806]: I1125 15:57:26.026564 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 25 15:57:26 crc kubenswrapper[4806]: I1125 15:57:26.066120 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 25 15:57:26 crc kubenswrapper[4806]: I1125 15:57:26.175424 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 25 15:57:26 crc kubenswrapper[4806]: I1125 15:57:26.183217 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 25 15:57:26 crc kubenswrapper[4806]: I1125 15:57:26.213550 4806 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 25 15:57:26 crc kubenswrapper[4806]: I1125 15:57:26.231512 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 25 15:57:26 crc kubenswrapper[4806]: I1125 15:57:26.235848 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Nov 25 15:57:26 crc kubenswrapper[4806]: I1125 15:57:26.240089 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 25 15:57:26 crc kubenswrapper[4806]: I1125 15:57:26.264526 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 25 15:57:26 crc kubenswrapper[4806]: I1125 15:57:26.354275 4806 generic.go:334] "Generic (PLEG): container finished" podID="95b3b0c2-b552-4f25-803e-f2ae9d53add8" containerID="bdbdfaca6eed81ac9dce8cb120b37747934ca2b43695af56c882cbcdf9ee0b96" exitCode=1 Nov 25 15:57:26 crc kubenswrapper[4806]: I1125 15:57:26.354349 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-2nhx4" event={"ID":"95b3b0c2-b552-4f25-803e-f2ae9d53add8","Type":"ContainerDied","Data":"bdbdfaca6eed81ac9dce8cb120b37747934ca2b43695af56c882cbcdf9ee0b96"} Nov 25 15:57:26 crc kubenswrapper[4806]: I1125 15:57:26.355149 4806 scope.go:117] "RemoveContainer" containerID="bdbdfaca6eed81ac9dce8cb120b37747934ca2b43695af56c882cbcdf9ee0b96" Nov 25 15:57:26 crc kubenswrapper[4806]: I1125 15:57:26.365076 4806 generic.go:334] "Generic (PLEG): container finished" podID="fd7fd3ac-d6f9-4f62-9cbd-e6a28b88be30" containerID="2ab8865e8b8127237d87e6953ccdd7156812fdafbb899703e150b3e01fa7955e" exitCode=1 Nov 25 15:57:26 crc kubenswrapper[4806]: I1125 15:57:26.365139 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2snr9" 
event={"ID":"fd7fd3ac-d6f9-4f62-9cbd-e6a28b88be30","Type":"ContainerDied","Data":"2ab8865e8b8127237d87e6953ccdd7156812fdafbb899703e150b3e01fa7955e"} Nov 25 15:57:26 crc kubenswrapper[4806]: I1125 15:57:26.365175 4806 scope.go:117] "RemoveContainer" containerID="2d0dc6bee41ecdaf4e2ae149c6becb1ef27f42826af6f68b4281004329c220ba" Nov 25 15:57:26 crc kubenswrapper[4806]: I1125 15:57:26.365982 4806 scope.go:117] "RemoveContainer" containerID="2ab8865e8b8127237d87e6953ccdd7156812fdafbb899703e150b3e01fa7955e" Nov 25 15:57:26 crc kubenswrapper[4806]: E1125 15:57:26.366258 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=operator pod=rabbitmq-cluster-operator-manager-668c99d594-2snr9_openstack-operators(fd7fd3ac-d6f9-4f62-9cbd-e6a28b88be30)\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2snr9" podUID="fd7fd3ac-d6f9-4f62-9cbd-e6a28b88be30" Nov 25 15:57:26 crc kubenswrapper[4806]: I1125 15:57:26.377791 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-config" Nov 25 15:57:26 crc kubenswrapper[4806]: I1125 15:57:26.379577 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 25 15:57:26 crc kubenswrapper[4806]: I1125 15:57:26.379735 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 25 15:57:26 crc kubenswrapper[4806]: I1125 15:57:26.418958 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 25 15:57:26 crc kubenswrapper[4806]: I1125 15:57:26.419967 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 25 15:57:26 crc kubenswrapper[4806]: I1125 15:57:26.437937 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 25 15:57:26 crc kubenswrapper[4806]: I1125 15:57:26.446206 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 25 15:57:26 crc kubenswrapper[4806]: I1125 15:57:26.460608 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 25 15:57:26 crc kubenswrapper[4806]: I1125 15:57:26.525890 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 25 15:57:26 crc kubenswrapper[4806]: I1125 15:57:26.601522 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 25 15:57:26 crc kubenswrapper[4806]: I1125 15:57:26.648931 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 25 15:57:26 crc kubenswrapper[4806]: I1125 15:57:26.650970 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 25 15:57:26 crc kubenswrapper[4806]: I1125 15:57:26.659436 4806 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-ztcj7" Nov 25 15:57:26 crc kubenswrapper[4806]: I1125 15:57:26.661570 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" 
Nov 25 15:57:26 crc kubenswrapper[4806]: I1125 15:57:26.791830 4806 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-d678x" Nov 25 15:57:26 crc kubenswrapper[4806]: I1125 15:57:26.804289 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 25 15:57:26 crc kubenswrapper[4806]: I1125 15:57:26.864608 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 25 15:57:26 crc kubenswrapper[4806]: I1125 15:57:26.867097 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-dqwtc" Nov 25 15:57:26 crc kubenswrapper[4806]: I1125 15:57:26.876865 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 25 15:57:26 crc kubenswrapper[4806]: I1125 15:57:26.892189 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 25 15:57:26 crc kubenswrapper[4806]: I1125 15:57:26.930715 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-s7t8r" Nov 25 15:57:26 crc kubenswrapper[4806]: I1125 15:57:26.957876 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway" Nov 25 15:57:26 crc kubenswrapper[4806]: I1125 15:57:26.958200 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 25 15:57:26 crc kubenswrapper[4806]: I1125 15:57:26.996892 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Nov 25 15:57:27 crc kubenswrapper[4806]: I1125 15:57:27.017783 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 25 15:57:27 crc kubenswrapper[4806]: I1125 15:57:27.019435 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 25 15:57:27 crc kubenswrapper[4806]: I1125 15:57:27.052553 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-gateway-ca-bundle" Nov 25 15:57:27 crc kubenswrapper[4806]: I1125 15:57:27.069149 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 25 15:57:27 crc kubenswrapper[4806]: I1125 15:57:27.072404 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 25 15:57:27 crc kubenswrapper[4806]: I1125 15:57:27.083361 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-7rs57" Nov 25 15:57:27 crc kubenswrapper[4806]: I1125 15:57:27.148986 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 25 15:57:27 crc kubenswrapper[4806]: I1125 15:57:27.154609 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 25 15:57:27 crc kubenswrapper[4806]: I1125 15:57:27.178826 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-qlfgw" Nov 25 15:57:27 crc kubenswrapper[4806]: 
I1125 15:57:27.208797 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 25 15:57:27 crc kubenswrapper[4806]: I1125 15:57:27.266903 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-4tj5m" Nov 25 15:57:27 crc kubenswrapper[4806]: I1125 15:57:27.314653 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 25 15:57:27 crc kubenswrapper[4806]: I1125 15:57:27.327241 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 25 15:57:27 crc kubenswrapper[4806]: I1125 15:57:27.384062 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-2nhx4" event={"ID":"95b3b0c2-b552-4f25-803e-f2ae9d53add8","Type":"ContainerStarted","Data":"c0467606991a1c2bd331a7817cb58f64042b2581726d046a5a5e7b2a1223b91d"} Nov 25 15:57:27 crc kubenswrapper[4806]: I1125 15:57:27.397981 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 25 15:57:27 crc kubenswrapper[4806]: I1125 15:57:27.420616 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 25 15:57:27 crc kubenswrapper[4806]: I1125 15:57:27.421301 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 25 15:57:27 crc kubenswrapper[4806]: I1125 15:57:27.436723 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-http" Nov 25 15:57:27 crc kubenswrapper[4806]: I1125 15:57:27.460835 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-66pxf" Nov 25 15:57:27 crc kubenswrapper[4806]: I1125 15:57:27.469501 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 25 15:57:27 crc kubenswrapper[4806]: I1125 15:57:27.481410 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 25 15:57:27 crc kubenswrapper[4806]: I1125 15:57:27.491071 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-grpc" Nov 25 15:57:27 crc kubenswrapper[4806]: I1125 15:57:27.503457 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 25 15:57:27 crc kubenswrapper[4806]: I1125 15:57:27.516204 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 25 15:57:27 crc kubenswrapper[4806]: I1125 15:57:27.526023 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Nov 25 15:57:27 crc kubenswrapper[4806]: I1125 15:57:27.615395 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-vf5g4" Nov 25 15:57:27 crc kubenswrapper[4806]: I1125 15:57:27.655038 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 25 15:57:27 crc kubenswrapper[4806]: I1125 15:57:27.682285 4806 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"cloudkitty-proc-config-data" Nov 25 15:57:27 crc kubenswrapper[4806]: I1125 15:57:27.694588 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 25 15:57:27 crc kubenswrapper[4806]: I1125 15:57:27.736668 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 25 15:57:27 crc kubenswrapper[4806]: I1125 15:57:27.736672 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 25 15:57:27 crc kubenswrapper[4806]: I1125 15:57:27.749048 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 25 15:57:27 crc kubenswrapper[4806]: I1125 15:57:27.779632 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Nov 25 15:57:27 crc kubenswrapper[4806]: I1125 15:57:27.811891 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 25 15:57:27 crc kubenswrapper[4806]: I1125 15:57:27.832571 4806 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-fd226" Nov 25 15:57:27 crc kubenswrapper[4806]: I1125 15:57:27.848806 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 25 15:57:27 crc kubenswrapper[4806]: I1125 15:57:27.862814 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-http" Nov 25 15:57:27 crc kubenswrapper[4806]: I1125 15:57:27.864363 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 25 15:57:27 crc kubenswrapper[4806]: I1125 15:57:27.891632 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 25 15:57:27 crc kubenswrapper[4806]: I1125 15:57:27.903982 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 25 15:57:27 crc kubenswrapper[4806]: I1125 15:57:27.970725 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-internal-svc" Nov 25 15:57:27 crc kubenswrapper[4806]: I1125 15:57:27.987490 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 25 15:57:27 crc kubenswrapper[4806]: I1125 15:57:27.991968 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.018053 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.023071 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.039415 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-9rljq" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.145488 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 
15:57:28.173670 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.187119 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.211788 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.234095 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.256924 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.265153 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-w5r5m" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.266157 4806 scope.go:117] "RemoveContainer" containerID="ed3192fe8dae586b4225b175147205d19a5fc67eaf6b7c7c195445f6ea2359b7" Nov 25 15:57:28 crc kubenswrapper[4806]: E1125 15:57:28.266512 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=keystone-operator-controller-manager-748dc6576f-w5r5m_openstack-operators(61457634-dc4d-4ad9-9bdc-c95aae5df022)\"" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-w5r5m" podUID="61457634-dc4d-4ad9-9bdc-c95aae5df022" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.266991 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.339116 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.346688 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-8x9zw" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.367342 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.436844 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.447605 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.452616 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.469019 4806 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-2rdn6" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.479230 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.494419 4806 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"keystone-keystone-dockercfg-nmg8l" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.496577 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.506484 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-68694" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.541425 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.556096 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-zrrhc" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.559977 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.566472 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.571152 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-mjhmx" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.596853 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.597635 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.611208 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.613991 4806 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-s7h4x" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.615961 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-9thxp" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.617283 4806 scope.go:117] "RemoveContainer" containerID="b8a5cb5a7384bd7de4b4cc412c81a4c19158208223718a909cb442eacff59e33" Nov 25 15:57:28 crc kubenswrapper[4806]: E1125 15:57:28.617572 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=mariadb-operator-controller-manager-cb6c4fdb7-9thxp_openstack-operators(c1159ae9-b734-4012-b746-35d037ee4817)\"" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-9thxp" podUID="c1159ae9-b734-4012-b746-35d037ee4817" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.654610 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tzsbk" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.655681 4806 scope.go:117] "RemoveContainer" containerID="7bfef2d3a9b2307db35876e528eb5fcea5b7cfe83531bdfb4dc3191a571884f0" Nov 25 15:57:28 crc kubenswrapper[4806]: E1125 15:57:28.656021 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s 
restarting failed container=manager pod=ovn-operator-controller-manager-66cf5c67ff-tzsbk_openstack-operators(9dc1bbe2-49c1-4601-9acf-b1887426fdd0)\"" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tzsbk" podUID="9dc1bbe2-49c1-4601-9acf-b1887426fdd0" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.664034 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-wfhhn" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.664922 4806 scope.go:117] "RemoveContainer" containerID="40d27c0276ee0546fc9e2d8a81ad157af601fb3113c6a43ad5c7099cfcb507d6" Nov 25 15:57:28 crc kubenswrapper[4806]: E1125 15:57:28.665211 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=nova-operator-controller-manager-79556f57fc-wfhhn_openstack-operators(63efe3dc-03df-4494-9661-9a23a89c0974)\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-wfhhn" podUID="63efe3dc-03df-4494-9661-9a23a89c0974" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.694424 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pxx5w" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.695239 4806 scope.go:117] "RemoveContainer" containerID="f193385bc8a4e67262485f2ee1db74c473e18b8f7008bda48f8817b0e2277403" Nov 25 15:57:28 crc kubenswrapper[4806]: E1125 15:57:28.695573 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=swift-operator-controller-manager-6fdc4fcf86-pxx5w_openstack-operators(1df7970b-bed8-4e27-b04b-66e513683875)\"" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pxx5w" podUID="1df7970b-bed8-4e27-b04b-66e513683875" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.698804 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.745683 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.767516 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-fxzwv" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.769293 4806 scope.go:117] "RemoveContainer" containerID="b40c6f0cfa00c9968f7e69e6b1142e1074ed645b77bd580961742b505609ab3b" Nov 25 15:57:28 crc kubenswrapper[4806]: E1125 15:57:28.769753 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=placement-operator-controller-manager-5db546f9d9-fxzwv_openstack-operators(24cfe3fd-9b1a-4b9a-9b99-1b089fa2124b)\"" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-fxzwv" podUID="24cfe3fd-9b1a-4b9a-9b99-1b089fa2124b" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.834856 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.850927 4806 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"cert-placement-internal-svc" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.858807 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-864885998-b7g79" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.859849 4806 scope.go:117] "RemoveContainer" containerID="e054749440016b0130fa13fa97a9513b41218942d1d35c7fa5b402139316ef7b" Nov 25 15:57:28 crc kubenswrapper[4806]: E1125 15:57:28.860138 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=watcher-operator-controller-manager-864885998-b7g79_openstack-operators(023302d1-a345-4f55-9ac1-4a2b674e36aa)\"" pod="openstack-operators/watcher-operator-controller-manager-864885998-b7g79" podUID="023302d1-a345-4f55-9ac1-4a2b674e36aa" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.887589 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.920935 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.937674 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.956732 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.971948 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 25 15:57:28 crc kubenswrapper[4806]: I1125 15:57:28.993059 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-nrvl8" Nov 25 15:57:29 crc kubenswrapper[4806]: I1125 15:57:29.002375 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Nov 25 15:57:29 crc kubenswrapper[4806]: I1125 15:57:29.055125 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 25 15:57:29 crc kubenswrapper[4806]: I1125 15:57:29.081606 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 25 15:57:29 crc kubenswrapper[4806]: I1125 15:57:29.089353 4806 scope.go:117] "RemoveContainer" containerID="94228597d270f083de50b77776d8c30f33c95195d80f6b3ad22ce1dd2023f5eb" Nov 25 15:57:29 crc kubenswrapper[4806]: I1125 15:57:29.089781 4806 scope.go:117] "RemoveContainer" containerID="d68b47941f6bbd54640d8dfae0bef09051cb12cfe04ddaf0c35e112599252f9f" Nov 25 15:57:29 crc kubenswrapper[4806]: I1125 15:57:29.124466 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 25 15:57:29 crc kubenswrapper[4806]: I1125 15:57:29.156260 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 25 15:57:29 crc kubenswrapper[4806]: I1125 15:57:29.206454 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 25 15:57:29 crc kubenswrapper[4806]: I1125 15:57:29.225777 4806 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"cinder-api-config-data" Nov 25 15:57:29 crc kubenswrapper[4806]: I1125 15:57:29.243703 4806 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 25 15:57:29 crc kubenswrapper[4806]: I1125 15:57:29.265570 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Nov 25 15:57:29 crc kubenswrapper[4806]: I1125 15:57:29.268961 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 25 15:57:29 crc kubenswrapper[4806]: I1125 15:57:29.289973 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 25 15:57:29 crc kubenswrapper[4806]: I1125 15:57:29.323746 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 25 15:57:29 crc kubenswrapper[4806]: I1125 15:57:29.360808 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Nov 25 15:57:29 crc kubenswrapper[4806]: I1125 15:57:29.423234 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-6cfdz" Nov 25 15:57:29 crc kubenswrapper[4806]: I1125 15:57:29.424473 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 25 15:57:29 crc kubenswrapper[4806]: I1125 15:57:29.446460 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-797w2" Nov 25 15:57:29 crc kubenswrapper[4806]: I1125 15:57:29.452667 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 25 15:57:29 crc kubenswrapper[4806]: I1125 15:57:29.457700 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 25 15:57:29 crc kubenswrapper[4806]: I1125 15:57:29.464974 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 25 15:57:29 crc kubenswrapper[4806]: I1125 15:57:29.490428 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 25 15:57:29 crc kubenswrapper[4806]: I1125 15:57:29.498819 4806 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 25 15:57:29 crc kubenswrapper[4806]: I1125 15:57:29.514435 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 25 15:57:29 crc kubenswrapper[4806]: I1125 15:57:29.518275 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 25 15:57:29 crc kubenswrapper[4806]: I1125 15:57:29.518352 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 25 15:57:29 crc kubenswrapper[4806]: I1125 15:57:29.520714 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Nov 25 15:57:29 crc kubenswrapper[4806]: I1125 15:57:29.522656 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 15:57:29 crc kubenswrapper[4806]: I1125 15:57:29.542701 4806 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=22.542681219 podStartE2EDuration="22.542681219s" podCreationTimestamp="2025-11-25 15:57:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:57:29.538868091 +0000 UTC m=+3882.191010502" watchObservedRunningTime="2025-11-25 15:57:29.542681219 +0000 UTC m=+3882.194823630" Nov 25 15:57:29 crc kubenswrapper[4806]: I1125 15:57:29.565183 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 15:57:29 crc kubenswrapper[4806]: I1125 15:57:29.606860 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 25 15:57:29 crc kubenswrapper[4806]: I1125 15:57:29.684306 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-public-svc" Nov 25 15:57:29 crc kubenswrapper[4806]: I1125 15:57:29.700454 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 25 15:57:29 crc kubenswrapper[4806]: I1125 15:57:29.787253 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-rzc8k" Nov 25 15:57:29 crc kubenswrapper[4806]: I1125 15:57:29.789074 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 25 15:57:29 crc kubenswrapper[4806]: I1125 15:57:29.798895 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 25 15:57:29 crc kubenswrapper[4806]: I1125 15:57:29.848015 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 25 15:57:29 crc kubenswrapper[4806]: I1125 15:57:29.886752 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-th9t9" Nov 25 15:57:29 crc kubenswrapper[4806]: I1125 15:57:29.896604 4806 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 25 15:57:29 crc kubenswrapper[4806]: I1125 15:57:29.897038 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://b7103ed585d99e3a327b47baf2230d1b0d88e79840534538d4a427f89b92c797" gracePeriod=5 Nov 25 15:57:29 crc kubenswrapper[4806]: I1125 15:57:29.922083 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-xlzgr" Nov 25 15:57:29 crc kubenswrapper[4806]: I1125 15:57:29.922853 4806 scope.go:117] "RemoveContainer" containerID="dcc4fb86ad4fec7ffef987e0fed0a10219b36cb0845cca1697c877bad435209d" Nov 25 15:57:29 crc kubenswrapper[4806]: E1125 15:57:29.923128 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=infra-operator-controller-manager-d5cc86f4b-xlzgr_openstack-operators(e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329)\"" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-xlzgr" podUID="e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329" Nov 25 15:57:29 crc kubenswrapper[4806]: I1125 
15:57:29.926680 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 25 15:57:29 crc kubenswrapper[4806]: I1125 15:57:29.936924 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Nov 25 15:57:29 crc kubenswrapper[4806]: I1125 15:57:29.970246 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 25 15:57:30 crc kubenswrapper[4806]: I1125 15:57:30.012449 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-ql5mb" Nov 25 15:57:30 crc kubenswrapper[4806]: I1125 15:57:30.088996 4806 scope.go:117] "RemoveContainer" containerID="3742c475ebc15a02f48441bd4833229bbd2fd580dce69600e04bf1e60d1f4709" Nov 25 15:57:30 crc kubenswrapper[4806]: I1125 15:57:30.089286 4806 scope.go:117] "RemoveContainer" containerID="09aebda378fd6b0ae2f963e4701ba32e50017dde50d7ed6cff7151dd7deff37d" Nov 25 15:57:30 crc kubenswrapper[4806]: I1125 15:57:30.089352 4806 scope.go:117] "RemoveContainer" containerID="5068f7ea323d2eb3e41af7fb9981aacbc59e3750b5ab70fd051f5a0c7a02ed40" Nov 25 15:57:30 crc kubenswrapper[4806]: I1125 15:57:30.112151 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-jmnzj" Nov 25 15:57:30 crc kubenswrapper[4806]: I1125 15:57:30.134881 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 25 15:57:30 crc kubenswrapper[4806]: I1125 15:57:30.175861 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-grpc" Nov 25 15:57:30 crc kubenswrapper[4806]: I1125 15:57:30.188522 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 25 15:57:30 crc kubenswrapper[4806]: I1125 15:57:30.189479 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 25 15:57:30 crc kubenswrapper[4806]: I1125 15:57:30.222392 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 25 15:57:30 crc kubenswrapper[4806]: I1125 15:57:30.241141 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 25 15:57:30 crc kubenswrapper[4806]: I1125 15:57:30.315233 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 25 15:57:30 crc kubenswrapper[4806]: I1125 15:57:30.320861 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-5p6jp" Nov 25 15:57:30 crc kubenswrapper[4806]: I1125 15:57:30.336507 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Nov 25 15:57:30 crc kubenswrapper[4806]: I1125 15:57:30.378943 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 25 15:57:30 crc kubenswrapper[4806]: I1125 15:57:30.384847 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 25 15:57:30 crc kubenswrapper[4806]: I1125 15:57:30.388105 4806 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-t9sgb" Nov 25 15:57:30 crc kubenswrapper[4806]: I1125 15:57:30.417003 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-r8dnj" event={"ID":"fbf78fa8-8b88-454e-a7dc-0e75f463bc45","Type":"ContainerStarted","Data":"ced054e4bf4ac65a41e3cad652556305782b64de86f8904c83e9424cef5bc548"} Nov 25 15:57:30 crc kubenswrapper[4806]: I1125 15:57:30.417227 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-r8dnj" Nov 25 15:57:30 crc kubenswrapper[4806]: I1125 15:57:30.423583 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-h9qg8" event={"ID":"461ceb26-b86c-4bb8-9550-131351dfa3e5","Type":"ContainerStarted","Data":"1ffd9a52d9bee9dcb054a8f12a50dac8f6f09d1cdd49879838c65476b869ff29"} Nov 25 15:57:30 crc kubenswrapper[4806]: I1125 15:57:30.423811 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-h9qg8" Nov 25 15:57:30 crc kubenswrapper[4806]: I1125 15:57:30.441028 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c5xhr" event={"ID":"d2f4f05a-5ae5-4f49-87f2-a1e642ee0ac7","Type":"ContainerStarted","Data":"ab7946cafee31064f099c67d86197c39a592c59b8af6f183ccbbb510263597e1"} Nov 25 15:57:30 crc kubenswrapper[4806]: I1125 15:57:30.442238 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c5xhr" Nov 25 15:57:30 crc kubenswrapper[4806]: I1125 15:57:30.449282 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-w6686" event={"ID":"40a580de-1093-4adc-a98c-e18202bee9e3","Type":"ContainerStarted","Data":"3238332c31e59a7e9c0e8f5b1e096da88d5bc2c77d0e8ea7516abd56dbe2ea00"} Nov 25 15:57:30 crc kubenswrapper[4806]: I1125 15:57:30.450167 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-w6686" Nov 25 15:57:30 crc kubenswrapper[4806]: I1125 15:57:30.457190 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-qk9m2" event={"ID":"537dc134-0732-4dfc-b0be-9c16d3d191be","Type":"ContainerStarted","Data":"570da94ee5eeef278a9b25128ef99c246f06eb8a38381efe6859ba44ad109d01"} Nov 25 15:57:30 crc kubenswrapper[4806]: I1125 15:57:30.458403 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-qk9m2" Nov 25 15:57:30 crc kubenswrapper[4806]: I1125 15:57:30.500516 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 25 15:57:30 crc kubenswrapper[4806]: I1125 15:57:30.513755 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 25 15:57:30 crc kubenswrapper[4806]: I1125 15:57:30.529070 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 25 15:57:30 crc kubenswrapper[4806]: I1125 15:57:30.546609 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 
25 15:57:30 crc kubenswrapper[4806]: I1125 15:57:30.557425 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-http" Nov 25 15:57:30 crc kubenswrapper[4806]: I1125 15:57:30.580816 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 25 15:57:30 crc kubenswrapper[4806]: I1125 15:57:30.593793 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 25 15:57:30 crc kubenswrapper[4806]: I1125 15:57:30.619569 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 25 15:57:30 crc kubenswrapper[4806]: I1125 15:57:30.619821 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 25 15:57:30 crc kubenswrapper[4806]: I1125 15:57:30.622059 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-bqsxx" Nov 25 15:57:30 crc kubenswrapper[4806]: I1125 15:57:30.629105 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 25 15:57:30 crc kubenswrapper[4806]: I1125 15:57:30.657721 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 25 15:57:30 crc kubenswrapper[4806]: I1125 15:57:30.698935 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 25 15:57:30 crc kubenswrapper[4806]: I1125 15:57:30.699810 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 25 15:57:30 crc kubenswrapper[4806]: I1125 15:57:30.733747 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 25 15:57:30 crc kubenswrapper[4806]: I1125 15:57:30.819762 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 25 15:57:30 crc kubenswrapper[4806]: I1125 15:57:30.829745 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 25 15:57:30 crc kubenswrapper[4806]: I1125 15:57:30.868532 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 25 15:57:30 crc kubenswrapper[4806]: I1125 15:57:30.966391 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 25 15:57:31 crc kubenswrapper[4806]: I1125 15:57:31.088779 4806 scope.go:117] "RemoveContainer" containerID="8a64d11a5b2d6a0ca8cc5e8c3736b13970f7e38cc4a3e25ed8fd70be2e5b4528" Nov 25 15:57:31 crc kubenswrapper[4806]: I1125 15:57:31.089507 4806 scope.go:117] "RemoveContainer" containerID="2f385426d22149016a8a8f0eb60e7ec1d7a446aa945f4d341154983de4cf6df1" Nov 25 15:57:31 crc kubenswrapper[4806]: I1125 15:57:31.137867 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 25 15:57:31 crc kubenswrapper[4806]: I1125 15:57:31.231899 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 25 15:57:31 crc kubenswrapper[4806]: I1125 15:57:31.366722 4806 reflector.go:368] Caches populated for *v1.Secret 
from object-"openstack"/"cinder-scripts" Nov 25 15:57:31 crc kubenswrapper[4806]: I1125 15:57:31.387729 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-74svh" Nov 25 15:57:31 crc kubenswrapper[4806]: I1125 15:57:31.459166 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 25 15:57:31 crc kubenswrapper[4806]: I1125 15:57:31.480166 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-q6z52" event={"ID":"ec8a3bcc-2127-44bc-8f89-db3ece24a9b9","Type":"ContainerStarted","Data":"5e4fd9b4360a0f54457c969e60b267051649dac216d4ced2fc1873db197d9e58"} Nov 25 15:57:31 crc kubenswrapper[4806]: I1125 15:57:31.480447 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-q6z52" Nov 25 15:57:31 crc kubenswrapper[4806]: I1125 15:57:31.484421 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-cqwgq" event={"ID":"2a080dd6-0904-4756-8b02-39d10465fea2","Type":"ContainerStarted","Data":"c60fccf971f7627589b8afce003f6d89fd565b1ebb8335aeb50aa730da035605"} Nov 25 15:57:31 crc kubenswrapper[4806]: I1125 15:57:31.485175 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-cqwgq" Nov 25 15:57:31 crc kubenswrapper[4806]: I1125 15:57:31.505081 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 15:57:31 crc kubenswrapper[4806]: I1125 15:57:31.521074 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 25 15:57:31 crc kubenswrapper[4806]: I1125 15:57:31.524030 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-4t4gc" Nov 25 15:57:31 crc kubenswrapper[4806]: I1125 15:57:31.526604 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 25 15:57:31 crc kubenswrapper[4806]: I1125 15:57:31.532977 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 25 15:57:31 crc kubenswrapper[4806]: I1125 15:57:31.542596 4806 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 25 15:57:31 crc kubenswrapper[4806]: I1125 15:57:31.584507 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 25 15:57:31 crc kubenswrapper[4806]: I1125 15:57:31.620883 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-n99d9" Nov 25 15:57:31 crc kubenswrapper[4806]: I1125 15:57:31.649760 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 25 15:57:31 crc kubenswrapper[4806]: I1125 15:57:31.663282 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-ckwfj" Nov 25 15:57:31 crc kubenswrapper[4806]: I1125 15:57:31.748801 4806 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"cert-swift-internal-svc" Nov 25 15:57:31 crc kubenswrapper[4806]: I1125 15:57:31.782855 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 25 15:57:31 crc kubenswrapper[4806]: I1125 15:57:31.818411 4806 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-t7ffj" Nov 25 15:57:32 crc kubenswrapper[4806]: I1125 15:57:32.005177 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 25 15:57:32 crc kubenswrapper[4806]: I1125 15:57:32.046767 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-http" Nov 25 15:57:32 crc kubenswrapper[4806]: I1125 15:57:32.099989 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 25 15:57:32 crc kubenswrapper[4806]: I1125 15:57:32.123876 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca-bundle" Nov 25 15:57:32 crc kubenswrapper[4806]: I1125 15:57:32.142447 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 25 15:57:32 crc kubenswrapper[4806]: I1125 15:57:32.170700 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 25 15:57:32 crc kubenswrapper[4806]: I1125 15:57:32.185888 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 25 15:57:32 crc kubenswrapper[4806]: I1125 15:57:32.294712 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 25 15:57:32 crc kubenswrapper[4806]: I1125 15:57:32.319302 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-2w5hq" Nov 25 15:57:32 crc kubenswrapper[4806]: I1125 15:57:32.333305 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-662zk" Nov 25 15:57:32 crc kubenswrapper[4806]: I1125 15:57:32.431094 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 25 15:57:32 crc kubenswrapper[4806]: I1125 15:57:32.495605 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-trp2w" Nov 25 15:57:32 crc kubenswrapper[4806]: I1125 15:57:32.505054 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 25 15:57:32 crc kubenswrapper[4806]: I1125 15:57:32.512018 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 25 15:57:32 crc kubenswrapper[4806]: I1125 15:57:32.558336 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 25 15:57:32 crc kubenswrapper[4806]: I1125 15:57:32.696536 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-vtbm8" Nov 25 15:57:32 crc kubenswrapper[4806]: I1125 15:57:32.707508 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7c468db9ff-2r8gr" Nov 25 15:57:32 crc kubenswrapper[4806]: I1125 
15:57:32.708250 4806 scope.go:117] "RemoveContainer" containerID="f55117797073ae1c749ef1582c04bf1b2df6ce5c0ca4d89a2bd2589e08a8a2a6" Nov 25 15:57:32 crc kubenswrapper[4806]: E1125 15:57:32.708613 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=openstack-operator-controller-manager-7c468db9ff-2r8gr_openstack-operators(b97ff802-8b8f-47d4-bff1-7d6876f780ff)\"" pod="openstack-operators/openstack-operator-controller-manager-7c468db9ff-2r8gr" podUID="b97ff802-8b8f-47d4-bff1-7d6876f780ff" Nov 25 15:57:32 crc kubenswrapper[4806]: I1125 15:57:32.743822 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 25 15:57:32 crc kubenswrapper[4806]: I1125 15:57:32.837862 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 25 15:57:32 crc kubenswrapper[4806]: I1125 15:57:32.862647 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 25 15:57:32 crc kubenswrapper[4806]: I1125 15:57:32.888543 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-jhnjx" Nov 25 15:57:33 crc kubenswrapper[4806]: I1125 15:57:33.045145 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 25 15:57:33 crc kubenswrapper[4806]: I1125 15:57:33.089570 4806 scope.go:117] "RemoveContainer" containerID="d5c4a4936135aeb5fab162aa80bb1cd85f57a83218a1d7192a2e5e62b980aff0" Nov 25 15:57:33 crc kubenswrapper[4806]: I1125 15:57:33.089886 4806 scope.go:117] "RemoveContainer" containerID="ee4d08abab1052444c8e7eb2608e0079e952644420ef096d14150cd6b35ec357" Nov 25 15:57:33 crc kubenswrapper[4806]: I1125 15:57:33.090265 4806 scope.go:117] "RemoveContainer" containerID="a7270c81b5343dbdd2ead73eb2d56f83232536271072a95208b9300eadfe6b26" Nov 25 15:57:33 crc kubenswrapper[4806]: I1125 15:57:33.504452 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-jcrbm" event={"ID":"8294cfe0-6c14-49bc-bd5b-d614a68893ce","Type":"ContainerStarted","Data":"e842ee4f4f0f2ce0c165983a7d3340589c48968fc025e076979f5d5ba508bf63"} Nov 25 15:57:33 crc kubenswrapper[4806]: I1125 15:57:33.504715 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-774b86978c-jcrbm" Nov 25 15:57:33 crc kubenswrapper[4806]: I1125 15:57:33.507926 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-687f46fc78-xdmx6" event={"ID":"dbedcc0b-12de-4497-a9f3-a9df6c88a74f","Type":"ContainerStarted","Data":"9071a74372c1d9c727de36a978f613c4576c7eecb84e51262ee50630f34b664f"} Nov 25 15:57:33 crc kubenswrapper[4806]: I1125 15:57:33.508264 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-687f46fc78-xdmx6" Nov 25 15:57:33 crc kubenswrapper[4806]: I1125 15:57:33.510279 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-wfsxk" event={"ID":"de253966-f7ff-485f-8108-b8ee0fd795bf","Type":"ContainerStarted","Data":"d13df960385336eb54c3674e079026213fabf605cfbe89d1ceac42a2654a8dba"} Nov 25 15:57:33 crc kubenswrapper[4806]: I1125 
15:57:33.510935 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-wfsxk" Nov 25 15:57:33 crc kubenswrapper[4806]: I1125 15:57:33.888236 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 25 15:57:33 crc kubenswrapper[4806]: I1125 15:57:33.959905 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 25 15:57:34 crc kubenswrapper[4806]: I1125 15:57:34.015733 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 25 15:57:34 crc kubenswrapper[4806]: I1125 15:57:34.089967 4806 scope.go:117] "RemoveContainer" containerID="05b6ee2a51d7372338008820486d422e9a505c74a3f4cee7ce748e653b9075de" Nov 25 15:57:34 crc kubenswrapper[4806]: I1125 15:57:34.090059 4806 scope.go:117] "RemoveContainer" containerID="de848d22362879624289f9ecee22fcc0b2cb858214ed26668955fd7be3bc2e4d" Nov 25 15:57:34 crc kubenswrapper[4806]: E1125 15:57:34.090290 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:57:34 crc kubenswrapper[4806]: I1125 15:57:34.119719 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 25 15:57:34 crc kubenswrapper[4806]: I1125 15:57:34.391354 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 25 15:57:34 crc kubenswrapper[4806]: I1125 15:57:34.534554 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-bwwh4" event={"ID":"9cc0ebc5-e3d4-4bae-8b33-032d950705ff","Type":"ContainerStarted","Data":"1f9620d8d83a919888d7149dffe6567ed06dad9d4759f76fd1afda471e7b9ebb"} Nov 25 15:57:34 crc kubenswrapper[4806]: I1125 15:57:34.535180 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-bwwh4" Nov 25 15:57:35 crc kubenswrapper[4806]: E1125 15:57:35.336731 4806 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-d070ff8a0e078f9372ecb12bac3ec19cc5d72391f9bc0097b42da7a739859c2a\": RecentStats: unable to find data in memory cache]" Nov 25 15:57:35 crc kubenswrapper[4806]: I1125 15:57:35.560805 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 25 15:57:35 crc kubenswrapper[4806]: I1125 15:57:35.560859 4806 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="b7103ed585d99e3a327b47baf2230d1b0d88e79840534538d4a427f89b92c797" 
exitCode=137 Nov 25 15:57:35 crc kubenswrapper[4806]: I1125 15:57:35.560974 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbe87b149d8581b10b61d3ea03b7e6fc824b87877b688450416cbf28d0a7cb12" Nov 25 15:57:35 crc kubenswrapper[4806]: I1125 15:57:35.607278 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 25 15:57:35 crc kubenswrapper[4806]: I1125 15:57:35.607388 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 15:57:35 crc kubenswrapper[4806]: I1125 15:57:35.763124 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 15:57:35 crc kubenswrapper[4806]: I1125 15:57:35.763464 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 15:57:35 crc kubenswrapper[4806]: I1125 15:57:35.763249 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:57:35 crc kubenswrapper[4806]: I1125 15:57:35.763586 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 15:57:35 crc kubenswrapper[4806]: I1125 15:57:35.763616 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 15:57:35 crc kubenswrapper[4806]: I1125 15:57:35.763624 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:57:35 crc kubenswrapper[4806]: I1125 15:57:35.763649 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 15:57:35 crc kubenswrapper[4806]: I1125 15:57:35.763666 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:57:35 crc kubenswrapper[4806]: I1125 15:57:35.763673 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:57:35 crc kubenswrapper[4806]: I1125 15:57:35.764290 4806 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Nov 25 15:57:35 crc kubenswrapper[4806]: I1125 15:57:35.764333 4806 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 25 15:57:35 crc kubenswrapper[4806]: I1125 15:57:35.764347 4806 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Nov 25 15:57:35 crc kubenswrapper[4806]: I1125 15:57:35.764358 4806 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Nov 25 15:57:35 crc kubenswrapper[4806]: I1125 15:57:35.771359 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:57:35 crc kubenswrapper[4806]: I1125 15:57:35.866140 4806 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 25 15:57:36 crc kubenswrapper[4806]: I1125 15:57:36.089786 4806 scope.go:117] "RemoveContainer" containerID="709e33fd89647016ae3b26ded0666c8ac5171b08b8ba93b79e4d63126b281706" Nov 25 15:57:36 crc kubenswrapper[4806]: I1125 15:57:36.108460 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Nov 25 15:57:36 crc kubenswrapper[4806]: I1125 15:57:36.108742 4806 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Nov 25 15:57:36 crc kubenswrapper[4806]: I1125 15:57:36.127613 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 25 15:57:36 crc kubenswrapper[4806]: I1125 15:57:36.127650 4806 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="c7e1c5f2-c919-457b-b510-2c6bca345fc3" Nov 25 15:57:36 crc kubenswrapper[4806]: I1125 15:57:36.138974 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 25 15:57:36 crc kubenswrapper[4806]: I1125 15:57:36.139220 4806 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="c7e1c5f2-c919-457b-b510-2c6bca345fc3" Nov 25 15:57:36 crc kubenswrapper[4806]: I1125 15:57:36.575134 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 15:57:37 crc kubenswrapper[4806]: I1125 15:57:37.089530 4806 scope.go:117] "RemoveContainer" containerID="33b945f9bd82c80b96ff33763e1c7a4f84a186f6b6be3f7f0dd016e16773b89f" Nov 25 15:57:37 crc kubenswrapper[4806]: E1125 15:57:37.089864 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=metallb-operator-controller-manager-769f4c6fc-r7k57_metallb-system(55283d70-ea30-4f51-8583-6d1adc92cfcb)\"" pod="metallb-system/metallb-operator-controller-manager-769f4c6fc-r7k57" podUID="55283d70-ea30-4f51-8583-6d1adc92cfcb" Nov 25 15:57:37 crc kubenswrapper[4806]: I1125 15:57:37.585889 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9c050b95-eb84-4171-a52c-ee1e4614c301","Type":"ContainerStarted","Data":"abefab7789eeacaf1f17472b4c15ef0a5d408f04413a4381add47f86055a4a26"} Nov 25 15:57:37 crc kubenswrapper[4806]: I1125 15:57:37.587556 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 25 15:57:37 crc kubenswrapper[4806]: I1125 15:57:37.946801 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-w6686" Nov 25 15:57:37 crc kubenswrapper[4806]: I1125 15:57:37.948468 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-qk9m2" Nov 25 15:57:38 crc kubenswrapper[4806]: I1125 15:57:38.118064 4806 scope.go:117] "RemoveContainer" containerID="2ab8865e8b8127237d87e6953ccdd7156812fdafbb899703e150b3e01fa7955e" Nov 25 15:57:38 crc kubenswrapper[4806]: E1125 15:57:38.118880 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=operator pod=rabbitmq-cluster-operator-manager-668c99d594-2snr9_openstack-operators(fd7fd3ac-d6f9-4f62-9cbd-e6a28b88be30)\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2snr9" podUID="fd7fd3ac-d6f9-4f62-9cbd-e6a28b88be30" Nov 25 15:57:38 crc kubenswrapper[4806]: I1125 15:57:38.124703 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-r8dnj" Nov 25 15:57:38 crc kubenswrapper[4806]: I1125 15:57:38.249777 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-q6z52" Nov 25 15:57:38 crc kubenswrapper[4806]: I1125 15:57:38.264272 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-w5r5m" Nov 25 15:57:38 crc kubenswrapper[4806]: I1125 15:57:38.268970 4806 scope.go:117] "RemoveContainer" containerID="ed3192fe8dae586b4225b175147205d19a5fc67eaf6b7c7c195445f6ea2359b7" Nov 25 15:57:38 crc kubenswrapper[4806]: E1125 15:57:38.269277 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=keystone-operator-controller-manager-748dc6576f-w5r5m_openstack-operators(61457634-dc4d-4ad9-9bdc-c95aae5df022)\"" 
pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-w5r5m" podUID="61457634-dc4d-4ad9-9bdc-c95aae5df022" Nov 25 15:57:38 crc kubenswrapper[4806]: I1125 15:57:38.378668 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-774b86978c-jcrbm" Nov 25 15:57:38 crc kubenswrapper[4806]: I1125 15:57:38.392871 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-h9qg8" Nov 25 15:57:38 crc kubenswrapper[4806]: I1125 15:57:38.615733 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-9thxp" Nov 25 15:57:38 crc kubenswrapper[4806]: I1125 15:57:38.616774 4806 scope.go:117] "RemoveContainer" containerID="b8a5cb5a7384bd7de4b4cc412c81a4c19158208223718a909cb442eacff59e33" Nov 25 15:57:38 crc kubenswrapper[4806]: E1125 15:57:38.617485 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=mariadb-operator-controller-manager-cb6c4fdb7-9thxp_openstack-operators(c1159ae9-b734-4012-b746-35d037ee4817)\"" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-9thxp" podUID="c1159ae9-b734-4012-b746-35d037ee4817" Nov 25 15:57:38 crc kubenswrapper[4806]: I1125 15:57:38.644669 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-c5xhr" Nov 25 15:57:38 crc kubenswrapper[4806]: I1125 15:57:38.654031 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tzsbk" Nov 25 15:57:38 crc kubenswrapper[4806]: I1125 15:57:38.655521 4806 scope.go:117] "RemoveContainer" containerID="7bfef2d3a9b2307db35876e528eb5fcea5b7cfe83531bdfb4dc3191a571884f0" Nov 25 15:57:38 crc kubenswrapper[4806]: E1125 15:57:38.656091 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=ovn-operator-controller-manager-66cf5c67ff-tzsbk_openstack-operators(9dc1bbe2-49c1-4601-9acf-b1887426fdd0)\"" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tzsbk" podUID="9dc1bbe2-49c1-4601-9acf-b1887426fdd0" Nov 25 15:57:38 crc kubenswrapper[4806]: I1125 15:57:38.663668 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-wfhhn" Nov 25 15:57:38 crc kubenswrapper[4806]: I1125 15:57:38.664629 4806 scope.go:117] "RemoveContainer" containerID="40d27c0276ee0546fc9e2d8a81ad157af601fb3113c6a43ad5c7099cfcb507d6" Nov 25 15:57:38 crc kubenswrapper[4806]: E1125 15:57:38.664961 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=nova-operator-controller-manager-79556f57fc-wfhhn_openstack-operators(63efe3dc-03df-4494-9661-9a23a89c0974)\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-wfhhn" podUID="63efe3dc-03df-4494-9661-9a23a89c0974" Nov 25 15:57:38 crc kubenswrapper[4806]: I1125 15:57:38.692718 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pxx5w" Nov 25 15:57:38 crc kubenswrapper[4806]: I1125 15:57:38.693576 4806 scope.go:117] "RemoveContainer" containerID="f193385bc8a4e67262485f2ee1db74c473e18b8f7008bda48f8817b0e2277403" Nov 25 15:57:38 crc kubenswrapper[4806]: E1125 15:57:38.693898 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=swift-operator-controller-manager-6fdc4fcf86-pxx5w_openstack-operators(1df7970b-bed8-4e27-b04b-66e513683875)\"" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pxx5w" podUID="1df7970b-bed8-4e27-b04b-66e513683875" Nov 25 15:57:38 crc kubenswrapper[4806]: I1125 15:57:38.707782 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-cqwgq" Nov 25 15:57:38 crc kubenswrapper[4806]: I1125 15:57:38.766695 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-fxzwv" Nov 25 15:57:38 crc kubenswrapper[4806]: I1125 15:57:38.767479 4806 scope.go:117] "RemoveContainer" containerID="b40c6f0cfa00c9968f7e69e6b1142e1074ed645b77bd580961742b505609ab3b" Nov 25 15:57:38 crc kubenswrapper[4806]: E1125 15:57:38.767827 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=placement-operator-controller-manager-5db546f9d9-fxzwv_openstack-operators(24cfe3fd-9b1a-4b9a-9b99-1b089fa2124b)\"" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-fxzwv" podUID="24cfe3fd-9b1a-4b9a-9b99-1b089fa2124b" Nov 25 15:57:38 crc kubenswrapper[4806]: I1125 15:57:38.779930 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-687f46fc78-xdmx6" Nov 25 15:57:38 crc kubenswrapper[4806]: I1125 15:57:38.859351 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/watcher-operator-controller-manager-864885998-b7g79" Nov 25 15:57:38 crc kubenswrapper[4806]: I1125 15:57:38.860031 4806 scope.go:117] "RemoveContainer" containerID="e054749440016b0130fa13fa97a9513b41218942d1d35c7fa5b402139316ef7b" Nov 25 15:57:38 crc kubenswrapper[4806]: E1125 15:57:38.860327 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=watcher-operator-controller-manager-864885998-b7g79_openstack-operators(023302d1-a345-4f55-9ac1-4a2b674e36aa)\"" pod="openstack-operators/watcher-operator-controller-manager-864885998-b7g79" podUID="023302d1-a345-4f55-9ac1-4a2b674e36aa" Nov 25 15:57:39 crc kubenswrapper[4806]: I1125 15:57:39.922708 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-xlzgr" Nov 25 15:57:39 crc kubenswrapper[4806]: I1125 15:57:39.924041 4806 scope.go:117] "RemoveContainer" containerID="dcc4fb86ad4fec7ffef987e0fed0a10219b36cb0845cca1697c877bad435209d" Nov 25 15:57:39 crc kubenswrapper[4806]: E1125 15:57:39.924452 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting 
failed container=manager pod=infra-operator-controller-manager-d5cc86f4b-xlzgr_openstack-operators(e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329)\"" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-xlzgr" podUID="e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329" Nov 25 15:57:42 crc kubenswrapper[4806]: I1125 15:57:42.194224 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 25 15:57:42 crc kubenswrapper[4806]: I1125 15:57:42.703297 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-controller-manager-7c468db9ff-2r8gr" Nov 25 15:57:42 crc kubenswrapper[4806]: I1125 15:57:42.704394 4806 scope.go:117] "RemoveContainer" containerID="f55117797073ae1c749ef1582c04bf1b2df6ce5c0ca4d89a2bd2589e08a8a2a6" Nov 25 15:57:42 crc kubenswrapper[4806]: E1125 15:57:42.704769 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=manager pod=openstack-operator-controller-manager-7c468db9ff-2r8gr_openstack-operators(b97ff802-8b8f-47d4-bff1-7d6876f780ff)\"" pod="openstack-operators/openstack-operator-controller-manager-7c468db9ff-2r8gr" podUID="b97ff802-8b8f-47d4-bff1-7d6876f780ff" Nov 25 15:57:45 crc kubenswrapper[4806]: E1125 15:57:45.641037 4806 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-d070ff8a0e078f9372ecb12bac3ec19cc5d72391f9bc0097b42da7a739859c2a\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice\": RecentStats: unable to find data in memory cache]" Nov 25 15:57:46 crc kubenswrapper[4806]: I1125 15:57:46.090100 4806 scope.go:117] "RemoveContainer" containerID="05b6ee2a51d7372338008820486d422e9a505c74a3f4cee7ce748e653b9075de" Nov 25 15:57:46 crc kubenswrapper[4806]: E1125 15:57:46.090521 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:57:47 crc kubenswrapper[4806]: I1125 15:57:47.979391 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-wfsxk" Nov 25 15:57:48 crc kubenswrapper[4806]: I1125 15:57:48.573179 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-bwwh4" Nov 25 15:57:49 crc kubenswrapper[4806]: I1125 15:57:49.089771 4806 scope.go:117] "RemoveContainer" containerID="2ab8865e8b8127237d87e6953ccdd7156812fdafbb899703e150b3e01fa7955e" Nov 25 15:57:49 crc kubenswrapper[4806]: I1125 15:57:49.090066 4806 scope.go:117] "RemoveContainer" containerID="33b945f9bd82c80b96ff33763e1c7a4f84a186f6b6be3f7f0dd016e16773b89f" Nov 25 15:57:49 crc kubenswrapper[4806]: I1125 15:57:49.090211 4806 scope.go:117] "RemoveContainer" containerID="f193385bc8a4e67262485f2ee1db74c473e18b8f7008bda48f8817b0e2277403" Nov 25 15:57:49 crc 
kubenswrapper[4806]: I1125 15:57:49.737653 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pxx5w" event={"ID":"1df7970b-bed8-4e27-b04b-66e513683875","Type":"ContainerStarted","Data":"773b71e4e0f2ea076e2aff2e607efaef68ca7d654ad06df246155646f6853bc0"} Nov 25 15:57:49 crc kubenswrapper[4806]: I1125 15:57:49.741059 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pxx5w" Nov 25 15:57:49 crc kubenswrapper[4806]: I1125 15:57:49.745853 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2snr9" event={"ID":"fd7fd3ac-d6f9-4f62-9cbd-e6a28b88be30","Type":"ContainerStarted","Data":"1ab4c2d31a4736a89fa291e2e285f29151b8b2d1839ca4927d383d3fae2eea22"} Nov 25 15:57:49 crc kubenswrapper[4806]: I1125 15:57:49.747995 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-769f4c6fc-r7k57" event={"ID":"55283d70-ea30-4f51-8583-6d1adc92cfcb","Type":"ContainerStarted","Data":"7ac7cee5770bc965ed50c0a818b287503f8cabf4207401e1f4c880b0b51b5a63"} Nov 25 15:57:49 crc kubenswrapper[4806]: I1125 15:57:49.749114 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-769f4c6fc-r7k57" Nov 25 15:57:50 crc kubenswrapper[4806]: I1125 15:57:50.090011 4806 scope.go:117] "RemoveContainer" containerID="7bfef2d3a9b2307db35876e528eb5fcea5b7cfe83531bdfb4dc3191a571884f0" Nov 25 15:57:50 crc kubenswrapper[4806]: I1125 15:57:50.763206 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tzsbk" event={"ID":"9dc1bbe2-49c1-4601-9acf-b1887426fdd0","Type":"ContainerStarted","Data":"5e06378c260dec49ca24b6bafd07f4da82aabb6eb9a82b713c49f38fd2dadcfe"} Nov 25 15:57:52 crc kubenswrapper[4806]: I1125 15:57:52.089620 4806 scope.go:117] "RemoveContainer" containerID="dcc4fb86ad4fec7ffef987e0fed0a10219b36cb0845cca1697c877bad435209d" Nov 25 15:57:52 crc kubenswrapper[4806]: I1125 15:57:52.091063 4806 scope.go:117] "RemoveContainer" containerID="b8a5cb5a7384bd7de4b4cc412c81a4c19158208223718a909cb442eacff59e33" Nov 25 15:57:52 crc kubenswrapper[4806]: I1125 15:57:52.472431 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 25 15:57:52 crc kubenswrapper[4806]: I1125 15:57:52.804113 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-9thxp" event={"ID":"c1159ae9-b734-4012-b746-35d037ee4817","Type":"ContainerStarted","Data":"c05a8fb9511074f090f2694c24066c894d40c0717b3aa78454dc9663447189b1"} Nov 25 15:57:52 crc kubenswrapper[4806]: I1125 15:57:52.805385 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-9thxp" Nov 25 15:57:52 crc kubenswrapper[4806]: I1125 15:57:52.807844 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-xlzgr" event={"ID":"e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329","Type":"ContainerStarted","Data":"523e5eeac944be9a2e6ddbec06b2d5042736ea100a71e75882a4aa0ac438cb1e"} Nov 25 15:57:52 crc kubenswrapper[4806]: I1125 15:57:52.808099 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-xlzgr" Nov 25 15:57:53 crc kubenswrapper[4806]: I1125 15:57:53.089174 4806 scope.go:117] "RemoveContainer" containerID="ed3192fe8dae586b4225b175147205d19a5fc67eaf6b7c7c195445f6ea2359b7" Nov 25 15:57:53 crc kubenswrapper[4806]: I1125 15:57:53.089576 4806 scope.go:117] "RemoveContainer" containerID="40d27c0276ee0546fc9e2d8a81ad157af601fb3113c6a43ad5c7099cfcb507d6" Nov 25 15:57:53 crc kubenswrapper[4806]: I1125 15:57:53.090026 4806 scope.go:117] "RemoveContainer" containerID="e054749440016b0130fa13fa97a9513b41218942d1d35c7fa5b402139316ef7b" Nov 25 15:57:53 crc kubenswrapper[4806]: I1125 15:57:53.090266 4806 scope.go:117] "RemoveContainer" containerID="b40c6f0cfa00c9968f7e69e6b1142e1074ed645b77bd580961742b505609ab3b" Nov 25 15:57:53 crc kubenswrapper[4806]: I1125 15:57:53.819140 4806 generic.go:334] "Generic (PLEG): container finished" podID="f39c0ac4-6fb1-4d27-adfc-230efd634178" containerID="f6d36f284fa1650c5094c401a8d483a728514124d1813bdc4bef6113669223ec" exitCode=0 Nov 25 15:57:53 crc kubenswrapper[4806]: I1125 15:57:53.819183 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-z2r2q/must-gather-gfgnl" event={"ID":"f39c0ac4-6fb1-4d27-adfc-230efd634178","Type":"ContainerDied","Data":"f6d36f284fa1650c5094c401a8d483a728514124d1813bdc4bef6113669223ec"} Nov 25 15:57:53 crc kubenswrapper[4806]: I1125 15:57:53.820095 4806 scope.go:117] "RemoveContainer" containerID="f6d36f284fa1650c5094c401a8d483a728514124d1813bdc4bef6113669223ec" Nov 25 15:57:53 crc kubenswrapper[4806]: I1125 15:57:53.822148 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-fxzwv" event={"ID":"24cfe3fd-9b1a-4b9a-9b99-1b089fa2124b","Type":"ContainerStarted","Data":"95ae79b568de2660c55a638253f9fff4d726f1aed52ae0ae3dbb215dea47029b"} Nov 25 15:57:53 crc kubenswrapper[4806]: I1125 15:57:53.822466 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-fxzwv" Nov 25 15:57:53 crc kubenswrapper[4806]: I1125 15:57:53.824391 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-w5r5m" event={"ID":"61457634-dc4d-4ad9-9bdc-c95aae5df022","Type":"ContainerStarted","Data":"a4fd718ad3c9903398750f86cd0cb91a6cab5ce6c1116a7ec80c93cd52b2c4b7"} Nov 25 15:57:53 crc kubenswrapper[4806]: I1125 15:57:53.824689 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-w5r5m" Nov 25 15:57:53 crc kubenswrapper[4806]: I1125 15:57:53.826974 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-b7g79" event={"ID":"023302d1-a345-4f55-9ac1-4a2b674e36aa","Type":"ContainerStarted","Data":"6e36a196abeeb9a815d978f12a901599c8f65230d8a31bc370901663c025074a"} Nov 25 15:57:53 crc kubenswrapper[4806]: I1125 15:57:53.827330 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-864885998-b7g79" Nov 25 15:57:53 crc kubenswrapper[4806]: I1125 15:57:53.829055 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-wfhhn" 
event={"ID":"63efe3dc-03df-4494-9661-9a23a89c0974","Type":"ContainerStarted","Data":"5d28f4b27bd878da45c4e45c6891b7f62afb869edb749e2356be4acc28762dc5"} Nov 25 15:57:56 crc kubenswrapper[4806]: I1125 15:57:56.089947 4806 scope.go:117] "RemoveContainer" containerID="f55117797073ae1c749ef1582c04bf1b2df6ce5c0ca4d89a2bd2589e08a8a2a6" Nov 25 15:57:56 crc kubenswrapper[4806]: I1125 15:57:56.857047 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7c468db9ff-2r8gr" event={"ID":"b97ff802-8b8f-47d4-bff1-7d6876f780ff","Type":"ContainerStarted","Data":"96bb76b9b4504f6bbe52b0739a5b0f4d0bc2eb313712a9cf381039def8a1de8d"} Nov 25 15:57:56 crc kubenswrapper[4806]: I1125 15:57:56.857710 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7c468db9ff-2r8gr" Nov 25 15:57:57 crc kubenswrapper[4806]: I1125 15:57:57.089942 4806 scope.go:117] "RemoveContainer" containerID="05b6ee2a51d7372338008820486d422e9a505c74a3f4cee7ce748e653b9075de" Nov 25 15:57:57 crc kubenswrapper[4806]: E1125 15:57:57.090432 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:57:57 crc kubenswrapper[4806]: I1125 15:57:57.516260 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-z2r2q_must-gather-gfgnl_f39c0ac4-6fb1-4d27-adfc-230efd634178/gather/0.log" Nov 25 15:57:58 crc kubenswrapper[4806]: I1125 15:57:58.268450 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-w5r5m" Nov 25 15:57:58 crc kubenswrapper[4806]: I1125 15:57:58.540965 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Nov 25 15:57:58 crc kubenswrapper[4806]: I1125 15:57:58.618702 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-9thxp" Nov 25 15:57:58 crc kubenswrapper[4806]: I1125 15:57:58.654410 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tzsbk" Nov 25 15:57:58 crc kubenswrapper[4806]: I1125 15:57:58.668582 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-tzsbk" Nov 25 15:57:58 crc kubenswrapper[4806]: I1125 15:57:58.669007 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-wfhhn" Nov 25 15:57:58 crc kubenswrapper[4806]: I1125 15:57:58.672615 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-wfhhn" Nov 25 15:57:58 crc kubenswrapper[4806]: I1125 15:57:58.707659 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-pxx5w" Nov 25 15:57:58 crc kubenswrapper[4806]: I1125 15:57:58.769350 4806 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-fxzwv" Nov 25 15:57:58 crc kubenswrapper[4806]: I1125 15:57:58.860893 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-864885998-b7g79" Nov 25 15:57:59 crc kubenswrapper[4806]: I1125 15:57:59.928144 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-xlzgr" Nov 25 15:58:01 crc kubenswrapper[4806]: I1125 15:58:01.191260 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gcgvj"] Nov 25 15:58:01 crc kubenswrapper[4806]: E1125 15:58:01.192769 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ae73747-62e9-4046-99b6-3ed9145be32b" containerName="installer" Nov 25 15:58:01 crc kubenswrapper[4806]: I1125 15:58:01.192866 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ae73747-62e9-4046-99b6-3ed9145be32b" containerName="installer" Nov 25 15:58:01 crc kubenswrapper[4806]: E1125 15:58:01.192954 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 25 15:58:01 crc kubenswrapper[4806]: I1125 15:58:01.193005 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 25 15:58:01 crc kubenswrapper[4806]: I1125 15:58:01.193278 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ae73747-62e9-4046-99b6-3ed9145be32b" containerName="installer" Nov 25 15:58:01 crc kubenswrapper[4806]: I1125 15:58:01.193386 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 25 15:58:01 crc kubenswrapper[4806]: I1125 15:58:01.195125 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gcgvj" Nov 25 15:58:01 crc kubenswrapper[4806]: I1125 15:58:01.208117 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gcgvj"] Nov 25 15:58:01 crc kubenswrapper[4806]: I1125 15:58:01.256683 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd97ae10-7d33-41d0-b3a2-a7acfbccff77-utilities\") pod \"community-operators-gcgvj\" (UID: \"bd97ae10-7d33-41d0-b3a2-a7acfbccff77\") " pod="openshift-marketplace/community-operators-gcgvj" Nov 25 15:58:01 crc kubenswrapper[4806]: I1125 15:58:01.256758 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfkkx\" (UniqueName: \"kubernetes.io/projected/bd97ae10-7d33-41d0-b3a2-a7acfbccff77-kube-api-access-pfkkx\") pod \"community-operators-gcgvj\" (UID: \"bd97ae10-7d33-41d0-b3a2-a7acfbccff77\") " pod="openshift-marketplace/community-operators-gcgvj" Nov 25 15:58:01 crc kubenswrapper[4806]: I1125 15:58:01.256901 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd97ae10-7d33-41d0-b3a2-a7acfbccff77-catalog-content\") pod \"community-operators-gcgvj\" (UID: \"bd97ae10-7d33-41d0-b3a2-a7acfbccff77\") " pod="openshift-marketplace/community-operators-gcgvj" Nov 25 15:58:01 crc kubenswrapper[4806]: I1125 15:58:01.358222 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd97ae10-7d33-41d0-b3a2-a7acfbccff77-catalog-content\") pod \"community-operators-gcgvj\" (UID: \"bd97ae10-7d33-41d0-b3a2-a7acfbccff77\") " pod="openshift-marketplace/community-operators-gcgvj" Nov 25 15:58:01 crc kubenswrapper[4806]: I1125 15:58:01.358392 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd97ae10-7d33-41d0-b3a2-a7acfbccff77-utilities\") pod \"community-operators-gcgvj\" (UID: \"bd97ae10-7d33-41d0-b3a2-a7acfbccff77\") " pod="openshift-marketplace/community-operators-gcgvj" Nov 25 15:58:01 crc kubenswrapper[4806]: I1125 15:58:01.358486 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfkkx\" (UniqueName: \"kubernetes.io/projected/bd97ae10-7d33-41d0-b3a2-a7acfbccff77-kube-api-access-pfkkx\") pod \"community-operators-gcgvj\" (UID: \"bd97ae10-7d33-41d0-b3a2-a7acfbccff77\") " pod="openshift-marketplace/community-operators-gcgvj" Nov 25 15:58:01 crc kubenswrapper[4806]: I1125 15:58:01.359157 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd97ae10-7d33-41d0-b3a2-a7acfbccff77-catalog-content\") pod \"community-operators-gcgvj\" (UID: \"bd97ae10-7d33-41d0-b3a2-a7acfbccff77\") " pod="openshift-marketplace/community-operators-gcgvj" Nov 25 15:58:01 crc kubenswrapper[4806]: I1125 15:58:01.359238 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd97ae10-7d33-41d0-b3a2-a7acfbccff77-utilities\") pod \"community-operators-gcgvj\" (UID: \"bd97ae10-7d33-41d0-b3a2-a7acfbccff77\") " pod="openshift-marketplace/community-operators-gcgvj" Nov 25 15:58:01 crc kubenswrapper[4806]: I1125 15:58:01.380173 4806 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-pfkkx\" (UniqueName: \"kubernetes.io/projected/bd97ae10-7d33-41d0-b3a2-a7acfbccff77-kube-api-access-pfkkx\") pod \"community-operators-gcgvj\" (UID: \"bd97ae10-7d33-41d0-b3a2-a7acfbccff77\") " pod="openshift-marketplace/community-operators-gcgvj" Nov 25 15:58:01 crc kubenswrapper[4806]: I1125 15:58:01.532775 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gcgvj" Nov 25 15:58:02 crc kubenswrapper[4806]: I1125 15:58:02.055746 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gcgvj"] Nov 25 15:58:02 crc kubenswrapper[4806]: I1125 15:58:02.707750 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7c468db9ff-2r8gr" Nov 25 15:58:02 crc kubenswrapper[4806]: I1125 15:58:02.924724 4806 generic.go:334] "Generic (PLEG): container finished" podID="bd97ae10-7d33-41d0-b3a2-a7acfbccff77" containerID="4f1133acdbe67f5d58a2779b2cdf624c6726c9207960c7329063d179f58212e3" exitCode=0 Nov 25 15:58:02 crc kubenswrapper[4806]: I1125 15:58:02.924977 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gcgvj" event={"ID":"bd97ae10-7d33-41d0-b3a2-a7acfbccff77","Type":"ContainerDied","Data":"4f1133acdbe67f5d58a2779b2cdf624c6726c9207960c7329063d179f58212e3"} Nov 25 15:58:02 crc kubenswrapper[4806]: I1125 15:58:02.925003 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gcgvj" event={"ID":"bd97ae10-7d33-41d0-b3a2-a7acfbccff77","Type":"ContainerStarted","Data":"6273713c123c55a7cfd99f45c55b09746b66a05154bcd7815b3df818965a17f7"} Nov 25 15:58:03 crc kubenswrapper[4806]: I1125 15:58:03.936837 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gcgvj" event={"ID":"bd97ae10-7d33-41d0-b3a2-a7acfbccff77","Type":"ContainerStarted","Data":"b1509cb2df3dba49a28e5d0f95ccf17647b7c0ccda924e48c7d21cbcaef4f513"} Nov 25 15:58:05 crc kubenswrapper[4806]: I1125 15:58:05.461356 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-z2r2q/must-gather-gfgnl"] Nov 25 15:58:05 crc kubenswrapper[4806]: I1125 15:58:05.461994 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-z2r2q/must-gather-gfgnl" podUID="f39c0ac4-6fb1-4d27-adfc-230efd634178" containerName="copy" containerID="cri-o://a23f99ad27cbfa6738f7222776577df1f2bb2049d08451204af04f2d3a37a243" gracePeriod=2 Nov 25 15:58:05 crc kubenswrapper[4806]: I1125 15:58:05.474587 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-z2r2q/must-gather-gfgnl"] Nov 25 15:58:05 crc kubenswrapper[4806]: I1125 15:58:05.990132 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-z2r2q_must-gather-gfgnl_f39c0ac4-6fb1-4d27-adfc-230efd634178/copy/0.log" Nov 25 15:58:05 crc kubenswrapper[4806]: I1125 15:58:05.990757 4806 generic.go:334] "Generic (PLEG): container finished" podID="f39c0ac4-6fb1-4d27-adfc-230efd634178" containerID="a23f99ad27cbfa6738f7222776577df1f2bb2049d08451204af04f2d3a37a243" exitCode=143 Nov 25 15:58:05 crc kubenswrapper[4806]: I1125 15:58:05.993974 4806 generic.go:334] "Generic (PLEG): container finished" podID="bd97ae10-7d33-41d0-b3a2-a7acfbccff77" containerID="b1509cb2df3dba49a28e5d0f95ccf17647b7c0ccda924e48c7d21cbcaef4f513" exitCode=0 Nov 
25 15:58:05 crc kubenswrapper[4806]: I1125 15:58:05.994014 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gcgvj" event={"ID":"bd97ae10-7d33-41d0-b3a2-a7acfbccff77","Type":"ContainerDied","Data":"b1509cb2df3dba49a28e5d0f95ccf17647b7c0ccda924e48c7d21cbcaef4f513"} Nov 25 15:58:06 crc kubenswrapper[4806]: I1125 15:58:06.149775 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-z2r2q_must-gather-gfgnl_f39c0ac4-6fb1-4d27-adfc-230efd634178/copy/0.log" Nov 25 15:58:06 crc kubenswrapper[4806]: I1125 15:58:06.150459 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z2r2q/must-gather-gfgnl" Nov 25 15:58:06 crc kubenswrapper[4806]: I1125 15:58:06.176009 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f39c0ac4-6fb1-4d27-adfc-230efd634178-must-gather-output\") pod \"f39c0ac4-6fb1-4d27-adfc-230efd634178\" (UID: \"f39c0ac4-6fb1-4d27-adfc-230efd634178\") " Nov 25 15:58:06 crc kubenswrapper[4806]: I1125 15:58:06.278309 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgqzd\" (UniqueName: \"kubernetes.io/projected/f39c0ac4-6fb1-4d27-adfc-230efd634178-kube-api-access-xgqzd\") pod \"f39c0ac4-6fb1-4d27-adfc-230efd634178\" (UID: \"f39c0ac4-6fb1-4d27-adfc-230efd634178\") " Nov 25 15:58:06 crc kubenswrapper[4806]: I1125 15:58:06.285482 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f39c0ac4-6fb1-4d27-adfc-230efd634178-kube-api-access-xgqzd" (OuterVolumeSpecName: "kube-api-access-xgqzd") pod "f39c0ac4-6fb1-4d27-adfc-230efd634178" (UID: "f39c0ac4-6fb1-4d27-adfc-230efd634178"). InnerVolumeSpecName "kube-api-access-xgqzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:58:06 crc kubenswrapper[4806]: I1125 15:58:06.374064 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f39c0ac4-6fb1-4d27-adfc-230efd634178-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "f39c0ac4-6fb1-4d27-adfc-230efd634178" (UID: "f39c0ac4-6fb1-4d27-adfc-230efd634178"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:58:06 crc kubenswrapper[4806]: I1125 15:58:06.380869 4806 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/f39c0ac4-6fb1-4d27-adfc-230efd634178-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 25 15:58:06 crc kubenswrapper[4806]: I1125 15:58:06.380912 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgqzd\" (UniqueName: \"kubernetes.io/projected/f39c0ac4-6fb1-4d27-adfc-230efd634178-kube-api-access-xgqzd\") on node \"crc\" DevicePath \"\"" Nov 25 15:58:07 crc kubenswrapper[4806]: I1125 15:58:07.007208 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gcgvj" event={"ID":"bd97ae10-7d33-41d0-b3a2-a7acfbccff77","Type":"ContainerStarted","Data":"4107fb37279ddc2adca8968e9a96d981e49120412bac389a19ec93361f435876"} Nov 25 15:58:07 crc kubenswrapper[4806]: I1125 15:58:07.010404 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-z2r2q_must-gather-gfgnl_f39c0ac4-6fb1-4d27-adfc-230efd634178/copy/0.log" Nov 25 15:58:07 crc kubenswrapper[4806]: I1125 15:58:07.010803 4806 scope.go:117] "RemoveContainer" containerID="a23f99ad27cbfa6738f7222776577df1f2bb2049d08451204af04f2d3a37a243" Nov 25 15:58:07 crc kubenswrapper[4806]: I1125 15:58:07.010844 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-z2r2q/must-gather-gfgnl" Nov 25 15:58:07 crc kubenswrapper[4806]: I1125 15:58:07.034417 4806 scope.go:117] "RemoveContainer" containerID="f6d36f284fa1650c5094c401a8d483a728514124d1813bdc4bef6113669223ec" Nov 25 15:58:07 crc kubenswrapper[4806]: I1125 15:58:07.035762 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gcgvj" podStartSLOduration=2.419097614 podStartE2EDuration="6.035742782s" podCreationTimestamp="2025-11-25 15:58:01 +0000 UTC" firstStartedPulling="2025-11-25 15:58:02.927895497 +0000 UTC m=+3915.580037908" lastFinishedPulling="2025-11-25 15:58:06.544540665 +0000 UTC m=+3919.196683076" observedRunningTime="2025-11-25 15:58:07.03216111 +0000 UTC m=+3919.684303551" watchObservedRunningTime="2025-11-25 15:58:07.035742782 +0000 UTC m=+3919.687885183" Nov 25 15:58:08 crc kubenswrapper[4806]: I1125 15:58:08.119862 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f39c0ac4-6fb1-4d27-adfc-230efd634178" path="/var/lib/kubelet/pods/f39c0ac4-6fb1-4d27-adfc-230efd634178/volumes" Nov 25 15:58:10 crc kubenswrapper[4806]: I1125 15:58:10.090241 4806 scope.go:117] "RemoveContainer" containerID="05b6ee2a51d7372338008820486d422e9a505c74a3f4cee7ce748e653b9075de" Nov 25 15:58:10 crc kubenswrapper[4806]: E1125 15:58:10.091167 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:58:11 crc kubenswrapper[4806]: I1125 15:58:11.533064 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gcgvj" Nov 25 15:58:11 crc kubenswrapper[4806]: I1125 15:58:11.533393 4806 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gcgvj" Nov 25 15:58:11 crc kubenswrapper[4806]: I1125 15:58:11.581434 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gcgvj" Nov 25 15:58:12 crc kubenswrapper[4806]: I1125 15:58:12.700942 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gcgvj" Nov 25 15:58:13 crc kubenswrapper[4806]: I1125 15:58:13.582926 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gcgvj"] Nov 25 15:58:14 crc kubenswrapper[4806]: I1125 15:58:14.081187 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gcgvj" podUID="bd97ae10-7d33-41d0-b3a2-a7acfbccff77" containerName="registry-server" containerID="cri-o://4107fb37279ddc2adca8968e9a96d981e49120412bac389a19ec93361f435876" gracePeriod=2 Nov 25 15:58:14 crc kubenswrapper[4806]: I1125 15:58:14.667028 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gcgvj" Nov 25 15:58:14 crc kubenswrapper[4806]: I1125 15:58:14.772145 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfkkx\" (UniqueName: \"kubernetes.io/projected/bd97ae10-7d33-41d0-b3a2-a7acfbccff77-kube-api-access-pfkkx\") pod \"bd97ae10-7d33-41d0-b3a2-a7acfbccff77\" (UID: \"bd97ae10-7d33-41d0-b3a2-a7acfbccff77\") " Nov 25 15:58:14 crc kubenswrapper[4806]: I1125 15:58:14.772783 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd97ae10-7d33-41d0-b3a2-a7acfbccff77-utilities\") pod \"bd97ae10-7d33-41d0-b3a2-a7acfbccff77\" (UID: \"bd97ae10-7d33-41d0-b3a2-a7acfbccff77\") " Nov 25 15:58:14 crc kubenswrapper[4806]: I1125 15:58:14.772831 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd97ae10-7d33-41d0-b3a2-a7acfbccff77-catalog-content\") pod \"bd97ae10-7d33-41d0-b3a2-a7acfbccff77\" (UID: \"bd97ae10-7d33-41d0-b3a2-a7acfbccff77\") " Nov 25 15:58:14 crc kubenswrapper[4806]: I1125 15:58:14.773717 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd97ae10-7d33-41d0-b3a2-a7acfbccff77-utilities" (OuterVolumeSpecName: "utilities") pod "bd97ae10-7d33-41d0-b3a2-a7acfbccff77" (UID: "bd97ae10-7d33-41d0-b3a2-a7acfbccff77"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:58:14 crc kubenswrapper[4806]: I1125 15:58:14.780343 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd97ae10-7d33-41d0-b3a2-a7acfbccff77-kube-api-access-pfkkx" (OuterVolumeSpecName: "kube-api-access-pfkkx") pod "bd97ae10-7d33-41d0-b3a2-a7acfbccff77" (UID: "bd97ae10-7d33-41d0-b3a2-a7acfbccff77"). InnerVolumeSpecName "kube-api-access-pfkkx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:58:14 crc kubenswrapper[4806]: I1125 15:58:14.876794 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd97ae10-7d33-41d0-b3a2-a7acfbccff77-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 15:58:14 crc kubenswrapper[4806]: I1125 15:58:14.876839 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfkkx\" (UniqueName: \"kubernetes.io/projected/bd97ae10-7d33-41d0-b3a2-a7acfbccff77-kube-api-access-pfkkx\") on node \"crc\" DevicePath \"\"" Nov 25 15:58:15 crc kubenswrapper[4806]: I1125 15:58:15.094449 4806 generic.go:334] "Generic (PLEG): container finished" podID="bd97ae10-7d33-41d0-b3a2-a7acfbccff77" containerID="4107fb37279ddc2adca8968e9a96d981e49120412bac389a19ec93361f435876" exitCode=0 Nov 25 15:58:15 crc kubenswrapper[4806]: I1125 15:58:15.094498 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gcgvj" event={"ID":"bd97ae10-7d33-41d0-b3a2-a7acfbccff77","Type":"ContainerDied","Data":"4107fb37279ddc2adca8968e9a96d981e49120412bac389a19ec93361f435876"} Nov 25 15:58:15 crc kubenswrapper[4806]: I1125 15:58:15.094538 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gcgvj" Nov 25 15:58:15 crc kubenswrapper[4806]: I1125 15:58:15.094560 4806 scope.go:117] "RemoveContainer" containerID="4107fb37279ddc2adca8968e9a96d981e49120412bac389a19ec93361f435876" Nov 25 15:58:15 crc kubenswrapper[4806]: I1125 15:58:15.094544 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gcgvj" event={"ID":"bd97ae10-7d33-41d0-b3a2-a7acfbccff77","Type":"ContainerDied","Data":"6273713c123c55a7cfd99f45c55b09746b66a05154bcd7815b3df818965a17f7"} Nov 25 15:58:15 crc kubenswrapper[4806]: I1125 15:58:15.116782 4806 scope.go:117] "RemoveContainer" containerID="b1509cb2df3dba49a28e5d0f95ccf17647b7c0ccda924e48c7d21cbcaef4f513" Nov 25 15:58:15 crc kubenswrapper[4806]: I1125 15:58:15.144223 4806 scope.go:117] "RemoveContainer" containerID="4f1133acdbe67f5d58a2779b2cdf624c6726c9207960c7329063d179f58212e3" Nov 25 15:58:15 crc kubenswrapper[4806]: I1125 15:58:15.186977 4806 scope.go:117] "RemoveContainer" containerID="4107fb37279ddc2adca8968e9a96d981e49120412bac389a19ec93361f435876" Nov 25 15:58:15 crc kubenswrapper[4806]: E1125 15:58:15.187345 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4107fb37279ddc2adca8968e9a96d981e49120412bac389a19ec93361f435876\": container with ID starting with 4107fb37279ddc2adca8968e9a96d981e49120412bac389a19ec93361f435876 not found: ID does not exist" containerID="4107fb37279ddc2adca8968e9a96d981e49120412bac389a19ec93361f435876" Nov 25 15:58:15 crc kubenswrapper[4806]: I1125 15:58:15.187385 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4107fb37279ddc2adca8968e9a96d981e49120412bac389a19ec93361f435876"} err="failed to get container status \"4107fb37279ddc2adca8968e9a96d981e49120412bac389a19ec93361f435876\": rpc error: code = NotFound desc = could not find container \"4107fb37279ddc2adca8968e9a96d981e49120412bac389a19ec93361f435876\": container with ID starting with 4107fb37279ddc2adca8968e9a96d981e49120412bac389a19ec93361f435876 not found: ID does not exist" Nov 25 15:58:15 crc kubenswrapper[4806]: I1125 15:58:15.187412 4806 scope.go:117] 
"RemoveContainer" containerID="b1509cb2df3dba49a28e5d0f95ccf17647b7c0ccda924e48c7d21cbcaef4f513" Nov 25 15:58:15 crc kubenswrapper[4806]: E1125 15:58:15.188370 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1509cb2df3dba49a28e5d0f95ccf17647b7c0ccda924e48c7d21cbcaef4f513\": container with ID starting with b1509cb2df3dba49a28e5d0f95ccf17647b7c0ccda924e48c7d21cbcaef4f513 not found: ID does not exist" containerID="b1509cb2df3dba49a28e5d0f95ccf17647b7c0ccda924e48c7d21cbcaef4f513" Nov 25 15:58:15 crc kubenswrapper[4806]: I1125 15:58:15.188397 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1509cb2df3dba49a28e5d0f95ccf17647b7c0ccda924e48c7d21cbcaef4f513"} err="failed to get container status \"b1509cb2df3dba49a28e5d0f95ccf17647b7c0ccda924e48c7d21cbcaef4f513\": rpc error: code = NotFound desc = could not find container \"b1509cb2df3dba49a28e5d0f95ccf17647b7c0ccda924e48c7d21cbcaef4f513\": container with ID starting with b1509cb2df3dba49a28e5d0f95ccf17647b7c0ccda924e48c7d21cbcaef4f513 not found: ID does not exist" Nov 25 15:58:15 crc kubenswrapper[4806]: I1125 15:58:15.188415 4806 scope.go:117] "RemoveContainer" containerID="4f1133acdbe67f5d58a2779b2cdf624c6726c9207960c7329063d179f58212e3" Nov 25 15:58:15 crc kubenswrapper[4806]: E1125 15:58:15.189340 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f1133acdbe67f5d58a2779b2cdf624c6726c9207960c7329063d179f58212e3\": container with ID starting with 4f1133acdbe67f5d58a2779b2cdf624c6726c9207960c7329063d179f58212e3 not found: ID does not exist" containerID="4f1133acdbe67f5d58a2779b2cdf624c6726c9207960c7329063d179f58212e3" Nov 25 15:58:15 crc kubenswrapper[4806]: I1125 15:58:15.189371 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f1133acdbe67f5d58a2779b2cdf624c6726c9207960c7329063d179f58212e3"} err="failed to get container status \"4f1133acdbe67f5d58a2779b2cdf624c6726c9207960c7329063d179f58212e3\": rpc error: code = NotFound desc = could not find container \"4f1133acdbe67f5d58a2779b2cdf624c6726c9207960c7329063d179f58212e3\": container with ID starting with 4f1133acdbe67f5d58a2779b2cdf624c6726c9207960c7329063d179f58212e3 not found: ID does not exist" Nov 25 15:58:15 crc kubenswrapper[4806]: I1125 15:58:15.341621 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd97ae10-7d33-41d0-b3a2-a7acfbccff77-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bd97ae10-7d33-41d0-b3a2-a7acfbccff77" (UID: "bd97ae10-7d33-41d0-b3a2-a7acfbccff77"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:58:15 crc kubenswrapper[4806]: I1125 15:58:15.391181 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd97ae10-7d33-41d0-b3a2-a7acfbccff77-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 15:58:15 crc kubenswrapper[4806]: I1125 15:58:15.441116 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gcgvj"] Nov 25 15:58:15 crc kubenswrapper[4806]: I1125 15:58:15.452600 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gcgvj"] Nov 25 15:58:15 crc kubenswrapper[4806]: I1125 15:58:15.996444 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4j785"] Nov 25 15:58:15 crc kubenswrapper[4806]: E1125 15:58:15.996888 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd97ae10-7d33-41d0-b3a2-a7acfbccff77" containerName="registry-server" Nov 25 15:58:15 crc kubenswrapper[4806]: I1125 15:58:15.996906 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd97ae10-7d33-41d0-b3a2-a7acfbccff77" containerName="registry-server" Nov 25 15:58:15 crc kubenswrapper[4806]: E1125 15:58:15.996929 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd97ae10-7d33-41d0-b3a2-a7acfbccff77" containerName="extract-content" Nov 25 15:58:15 crc kubenswrapper[4806]: I1125 15:58:15.996937 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd97ae10-7d33-41d0-b3a2-a7acfbccff77" containerName="extract-content" Nov 25 15:58:15 crc kubenswrapper[4806]: E1125 15:58:15.996961 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd97ae10-7d33-41d0-b3a2-a7acfbccff77" containerName="extract-utilities" Nov 25 15:58:15 crc kubenswrapper[4806]: I1125 15:58:15.996968 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd97ae10-7d33-41d0-b3a2-a7acfbccff77" containerName="extract-utilities" Nov 25 15:58:15 crc kubenswrapper[4806]: E1125 15:58:15.996988 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f39c0ac4-6fb1-4d27-adfc-230efd634178" containerName="gather" Nov 25 15:58:15 crc kubenswrapper[4806]: I1125 15:58:15.996994 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="f39c0ac4-6fb1-4d27-adfc-230efd634178" containerName="gather" Nov 25 15:58:15 crc kubenswrapper[4806]: E1125 15:58:15.997006 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f39c0ac4-6fb1-4d27-adfc-230efd634178" containerName="copy" Nov 25 15:58:15 crc kubenswrapper[4806]: I1125 15:58:15.997013 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="f39c0ac4-6fb1-4d27-adfc-230efd634178" containerName="copy" Nov 25 15:58:15 crc kubenswrapper[4806]: I1125 15:58:15.997250 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd97ae10-7d33-41d0-b3a2-a7acfbccff77" containerName="registry-server" Nov 25 15:58:15 crc kubenswrapper[4806]: I1125 15:58:15.997272 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="f39c0ac4-6fb1-4d27-adfc-230efd634178" containerName="gather" Nov 25 15:58:15 crc kubenswrapper[4806]: I1125 15:58:15.997278 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="f39c0ac4-6fb1-4d27-adfc-230efd634178" containerName="copy" Nov 25 15:58:16 crc kubenswrapper[4806]: I1125 15:58:16.006824 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4j785" Nov 25 15:58:16 crc kubenswrapper[4806]: I1125 15:58:16.010670 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4j785"] Nov 25 15:58:16 crc kubenswrapper[4806]: I1125 15:58:16.104611 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd97ae10-7d33-41d0-b3a2-a7acfbccff77" path="/var/lib/kubelet/pods/bd97ae10-7d33-41d0-b3a2-a7acfbccff77/volumes" Nov 25 15:58:16 crc kubenswrapper[4806]: I1125 15:58:16.106581 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a51b7f7-73f5-464a-8516-2880179cd121-catalog-content\") pod \"community-operators-4j785\" (UID: \"8a51b7f7-73f5-464a-8516-2880179cd121\") " pod="openshift-marketplace/community-operators-4j785" Nov 25 15:58:16 crc kubenswrapper[4806]: I1125 15:58:16.106779 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk7bf\" (UniqueName: \"kubernetes.io/projected/8a51b7f7-73f5-464a-8516-2880179cd121-kube-api-access-dk7bf\") pod \"community-operators-4j785\" (UID: \"8a51b7f7-73f5-464a-8516-2880179cd121\") " pod="openshift-marketplace/community-operators-4j785" Nov 25 15:58:16 crc kubenswrapper[4806]: I1125 15:58:16.106838 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a51b7f7-73f5-464a-8516-2880179cd121-utilities\") pod \"community-operators-4j785\" (UID: \"8a51b7f7-73f5-464a-8516-2880179cd121\") " pod="openshift-marketplace/community-operators-4j785" Nov 25 15:58:16 crc kubenswrapper[4806]: I1125 15:58:16.208754 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk7bf\" (UniqueName: \"kubernetes.io/projected/8a51b7f7-73f5-464a-8516-2880179cd121-kube-api-access-dk7bf\") pod \"community-operators-4j785\" (UID: \"8a51b7f7-73f5-464a-8516-2880179cd121\") " pod="openshift-marketplace/community-operators-4j785" Nov 25 15:58:16 crc kubenswrapper[4806]: I1125 15:58:16.208856 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a51b7f7-73f5-464a-8516-2880179cd121-utilities\") pod \"community-operators-4j785\" (UID: \"8a51b7f7-73f5-464a-8516-2880179cd121\") " pod="openshift-marketplace/community-operators-4j785" Nov 25 15:58:16 crc kubenswrapper[4806]: I1125 15:58:16.209030 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a51b7f7-73f5-464a-8516-2880179cd121-catalog-content\") pod \"community-operators-4j785\" (UID: \"8a51b7f7-73f5-464a-8516-2880179cd121\") " pod="openshift-marketplace/community-operators-4j785" Nov 25 15:58:16 crc kubenswrapper[4806]: I1125 15:58:16.209890 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a51b7f7-73f5-464a-8516-2880179cd121-catalog-content\") pod \"community-operators-4j785\" (UID: \"8a51b7f7-73f5-464a-8516-2880179cd121\") " pod="openshift-marketplace/community-operators-4j785" Nov 25 15:58:16 crc kubenswrapper[4806]: I1125 15:58:16.209907 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/8a51b7f7-73f5-464a-8516-2880179cd121-utilities\") pod \"community-operators-4j785\" (UID: \"8a51b7f7-73f5-464a-8516-2880179cd121\") " pod="openshift-marketplace/community-operators-4j785" Nov 25 15:58:16 crc kubenswrapper[4806]: I1125 15:58:16.233873 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk7bf\" (UniqueName: \"kubernetes.io/projected/8a51b7f7-73f5-464a-8516-2880179cd121-kube-api-access-dk7bf\") pod \"community-operators-4j785\" (UID: \"8a51b7f7-73f5-464a-8516-2880179cd121\") " pod="openshift-marketplace/community-operators-4j785" Nov 25 15:58:16 crc kubenswrapper[4806]: I1125 15:58:16.335932 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4j785" Nov 25 15:58:16 crc kubenswrapper[4806]: I1125 15:58:16.891715 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4j785"] Nov 25 15:58:16 crc kubenswrapper[4806]: W1125 15:58:16.904093 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a51b7f7_73f5_464a_8516_2880179cd121.slice/crio-5d5080c3bd67fc1d11eba489f4da6c3a47213ad19274b4687b71094121a5dc2f WatchSource:0}: Error finding container 5d5080c3bd67fc1d11eba489f4da6c3a47213ad19274b4687b71094121a5dc2f: Status 404 returned error can't find the container with id 5d5080c3bd67fc1d11eba489f4da6c3a47213ad19274b4687b71094121a5dc2f Nov 25 15:58:17 crc kubenswrapper[4806]: I1125 15:58:17.115736 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4j785" event={"ID":"8a51b7f7-73f5-464a-8516-2880179cd121","Type":"ContainerStarted","Data":"5d5080c3bd67fc1d11eba489f4da6c3a47213ad19274b4687b71094121a5dc2f"} Nov 25 15:58:18 crc kubenswrapper[4806]: I1125 15:58:18.129777 4806 generic.go:334] "Generic (PLEG): container finished" podID="8a51b7f7-73f5-464a-8516-2880179cd121" containerID="3888429cfa666038e847db83ab2a99ffe1cc4196dddec478a7e401bbce27b105" exitCode=0 Nov 25 15:58:18 crc kubenswrapper[4806]: I1125 15:58:18.130020 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4j785" event={"ID":"8a51b7f7-73f5-464a-8516-2880179cd121","Type":"ContainerDied","Data":"3888429cfa666038e847db83ab2a99ffe1cc4196dddec478a7e401bbce27b105"} Nov 25 15:58:20 crc kubenswrapper[4806]: I1125 15:58:20.152685 4806 generic.go:334] "Generic (PLEG): container finished" podID="8a51b7f7-73f5-464a-8516-2880179cd121" containerID="6de2d87c71cd20883d42fb7b44ae6da3a0bcf0b1b8f63a0ee66942ea61eac319" exitCode=0 Nov 25 15:58:20 crc kubenswrapper[4806]: I1125 15:58:20.152732 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4j785" event={"ID":"8a51b7f7-73f5-464a-8516-2880179cd121","Type":"ContainerDied","Data":"6de2d87c71cd20883d42fb7b44ae6da3a0bcf0b1b8f63a0ee66942ea61eac319"} Nov 25 15:58:22 crc kubenswrapper[4806]: I1125 15:58:22.190055 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4j785" event={"ID":"8a51b7f7-73f5-464a-8516-2880179cd121","Type":"ContainerStarted","Data":"b1874609c7463b36c81496ab5ad70301d5c070350210e50abec3f55d926337b5"} Nov 25 15:58:22 crc kubenswrapper[4806]: I1125 15:58:22.222914 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4j785" podStartSLOduration=4.206822033 
podStartE2EDuration="7.222894089s" podCreationTimestamp="2025-11-25 15:58:15 +0000 UTC" firstStartedPulling="2025-11-25 15:58:18.131728407 +0000 UTC m=+3930.783870818" lastFinishedPulling="2025-11-25 15:58:21.147800463 +0000 UTC m=+3933.799942874" observedRunningTime="2025-11-25 15:58:22.215986332 +0000 UTC m=+3934.868128753" watchObservedRunningTime="2025-11-25 15:58:22.222894089 +0000 UTC m=+3934.875036500" Nov 25 15:58:22 crc kubenswrapper[4806]: I1125 15:58:22.341043 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-769f4c6fc-r7k57" Nov 25 15:58:23 crc kubenswrapper[4806]: I1125 15:58:23.089844 4806 scope.go:117] "RemoveContainer" containerID="05b6ee2a51d7372338008820486d422e9a505c74a3f4cee7ce748e653b9075de" Nov 25 15:58:23 crc kubenswrapper[4806]: E1125 15:58:23.090445 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:58:26 crc kubenswrapper[4806]: I1125 15:58:26.337105 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4j785" Nov 25 15:58:26 crc kubenswrapper[4806]: I1125 15:58:26.337881 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4j785" Nov 25 15:58:26 crc kubenswrapper[4806]: I1125 15:58:26.391638 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4j785" Nov 25 15:58:27 crc kubenswrapper[4806]: I1125 15:58:27.303771 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4j785" Nov 25 15:58:27 crc kubenswrapper[4806]: I1125 15:58:27.365086 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4j785"] Nov 25 15:58:29 crc kubenswrapper[4806]: I1125 15:58:29.268825 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4j785" podUID="8a51b7f7-73f5-464a-8516-2880179cd121" containerName="registry-server" containerID="cri-o://b1874609c7463b36c81496ab5ad70301d5c070350210e50abec3f55d926337b5" gracePeriod=2 Nov 25 15:58:30 crc kubenswrapper[4806]: I1125 15:58:30.295539 4806 generic.go:334] "Generic (PLEG): container finished" podID="8a51b7f7-73f5-464a-8516-2880179cd121" containerID="b1874609c7463b36c81496ab5ad70301d5c070350210e50abec3f55d926337b5" exitCode=0 Nov 25 15:58:30 crc kubenswrapper[4806]: I1125 15:58:30.295768 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4j785" event={"ID":"8a51b7f7-73f5-464a-8516-2880179cd121","Type":"ContainerDied","Data":"b1874609c7463b36c81496ab5ad70301d5c070350210e50abec3f55d926337b5"} Nov 25 15:58:31 crc kubenswrapper[4806]: I1125 15:58:31.163301 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4j785" Nov 25 15:58:31 crc kubenswrapper[4806]: I1125 15:58:31.265969 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a51b7f7-73f5-464a-8516-2880179cd121-utilities\") pod \"8a51b7f7-73f5-464a-8516-2880179cd121\" (UID: \"8a51b7f7-73f5-464a-8516-2880179cd121\") " Nov 25 15:58:31 crc kubenswrapper[4806]: I1125 15:58:31.266234 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a51b7f7-73f5-464a-8516-2880179cd121-catalog-content\") pod \"8a51b7f7-73f5-464a-8516-2880179cd121\" (UID: \"8a51b7f7-73f5-464a-8516-2880179cd121\") " Nov 25 15:58:31 crc kubenswrapper[4806]: I1125 15:58:31.266476 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dk7bf\" (UniqueName: \"kubernetes.io/projected/8a51b7f7-73f5-464a-8516-2880179cd121-kube-api-access-dk7bf\") pod \"8a51b7f7-73f5-464a-8516-2880179cd121\" (UID: \"8a51b7f7-73f5-464a-8516-2880179cd121\") " Nov 25 15:58:31 crc kubenswrapper[4806]: I1125 15:58:31.266841 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a51b7f7-73f5-464a-8516-2880179cd121-utilities" (OuterVolumeSpecName: "utilities") pod "8a51b7f7-73f5-464a-8516-2880179cd121" (UID: "8a51b7f7-73f5-464a-8516-2880179cd121"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:58:31 crc kubenswrapper[4806]: I1125 15:58:31.286860 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a51b7f7-73f5-464a-8516-2880179cd121-kube-api-access-dk7bf" (OuterVolumeSpecName: "kube-api-access-dk7bf") pod "8a51b7f7-73f5-464a-8516-2880179cd121" (UID: "8a51b7f7-73f5-464a-8516-2880179cd121"). InnerVolumeSpecName "kube-api-access-dk7bf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:58:31 crc kubenswrapper[4806]: I1125 15:58:31.315832 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4j785" event={"ID":"8a51b7f7-73f5-464a-8516-2880179cd121","Type":"ContainerDied","Data":"5d5080c3bd67fc1d11eba489f4da6c3a47213ad19274b4687b71094121a5dc2f"} Nov 25 15:58:31 crc kubenswrapper[4806]: I1125 15:58:31.315887 4806 scope.go:117] "RemoveContainer" containerID="b1874609c7463b36c81496ab5ad70301d5c070350210e50abec3f55d926337b5" Nov 25 15:58:31 crc kubenswrapper[4806]: I1125 15:58:31.316041 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4j785" Nov 25 15:58:31 crc kubenswrapper[4806]: I1125 15:58:31.336376 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a51b7f7-73f5-464a-8516-2880179cd121-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8a51b7f7-73f5-464a-8516-2880179cd121" (UID: "8a51b7f7-73f5-464a-8516-2880179cd121"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:58:31 crc kubenswrapper[4806]: I1125 15:58:31.341452 4806 scope.go:117] "RemoveContainer" containerID="6de2d87c71cd20883d42fb7b44ae6da3a0bcf0b1b8f63a0ee66942ea61eac319" Nov 25 15:58:31 crc kubenswrapper[4806]: I1125 15:58:31.368547 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dk7bf\" (UniqueName: \"kubernetes.io/projected/8a51b7f7-73f5-464a-8516-2880179cd121-kube-api-access-dk7bf\") on node \"crc\" DevicePath \"\"" Nov 25 15:58:31 crc kubenswrapper[4806]: I1125 15:58:31.368573 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a51b7f7-73f5-464a-8516-2880179cd121-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 15:58:31 crc kubenswrapper[4806]: I1125 15:58:31.368583 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a51b7f7-73f5-464a-8516-2880179cd121-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 15:58:31 crc kubenswrapper[4806]: I1125 15:58:31.378522 4806 scope.go:117] "RemoveContainer" containerID="3888429cfa666038e847db83ab2a99ffe1cc4196dddec478a7e401bbce27b105" Nov 25 15:58:31 crc kubenswrapper[4806]: I1125 15:58:31.653665 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4j785"] Nov 25 15:58:31 crc kubenswrapper[4806]: I1125 15:58:31.675767 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4j785"] Nov 25 15:58:31 crc kubenswrapper[4806]: I1125 15:58:31.690253 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hg5mk"] Nov 25 15:58:31 crc kubenswrapper[4806]: E1125 15:58:31.690727 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a51b7f7-73f5-464a-8516-2880179cd121" containerName="registry-server" Nov 25 15:58:31 crc kubenswrapper[4806]: I1125 15:58:31.690745 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a51b7f7-73f5-464a-8516-2880179cd121" containerName="registry-server" Nov 25 15:58:31 crc kubenswrapper[4806]: E1125 15:58:31.690759 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a51b7f7-73f5-464a-8516-2880179cd121" containerName="extract-content" Nov 25 15:58:31 crc kubenswrapper[4806]: I1125 15:58:31.690766 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a51b7f7-73f5-464a-8516-2880179cd121" containerName="extract-content" Nov 25 15:58:31 crc kubenswrapper[4806]: E1125 15:58:31.690784 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a51b7f7-73f5-464a-8516-2880179cd121" containerName="extract-utilities" Nov 25 15:58:31 crc kubenswrapper[4806]: I1125 15:58:31.690791 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a51b7f7-73f5-464a-8516-2880179cd121" containerName="extract-utilities" Nov 25 15:58:31 crc kubenswrapper[4806]: I1125 15:58:31.690998 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a51b7f7-73f5-464a-8516-2880179cd121" containerName="registry-server" Nov 25 15:58:31 crc kubenswrapper[4806]: I1125 15:58:31.692826 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hg5mk" Nov 25 15:58:31 crc kubenswrapper[4806]: I1125 15:58:31.707642 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hg5mk"] Nov 25 15:58:31 crc kubenswrapper[4806]: I1125 15:58:31.777687 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20de0297-9732-4a9a-a523-33c47d423398-utilities\") pod \"community-operators-hg5mk\" (UID: \"20de0297-9732-4a9a-a523-33c47d423398\") " pod="openshift-marketplace/community-operators-hg5mk" Nov 25 15:58:31 crc kubenswrapper[4806]: I1125 15:58:31.777779 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20de0297-9732-4a9a-a523-33c47d423398-catalog-content\") pod \"community-operators-hg5mk\" (UID: \"20de0297-9732-4a9a-a523-33c47d423398\") " pod="openshift-marketplace/community-operators-hg5mk" Nov 25 15:58:31 crc kubenswrapper[4806]: I1125 15:58:31.777816 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx8d9\" (UniqueName: \"kubernetes.io/projected/20de0297-9732-4a9a-a523-33c47d423398-kube-api-access-qx8d9\") pod \"community-operators-hg5mk\" (UID: \"20de0297-9732-4a9a-a523-33c47d423398\") " pod="openshift-marketplace/community-operators-hg5mk" Nov 25 15:58:31 crc kubenswrapper[4806]: I1125 15:58:31.880225 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20de0297-9732-4a9a-a523-33c47d423398-utilities\") pod \"community-operators-hg5mk\" (UID: \"20de0297-9732-4a9a-a523-33c47d423398\") " pod="openshift-marketplace/community-operators-hg5mk" Nov 25 15:58:31 crc kubenswrapper[4806]: I1125 15:58:31.880287 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20de0297-9732-4a9a-a523-33c47d423398-catalog-content\") pod \"community-operators-hg5mk\" (UID: \"20de0297-9732-4a9a-a523-33c47d423398\") " pod="openshift-marketplace/community-operators-hg5mk" Nov 25 15:58:31 crc kubenswrapper[4806]: I1125 15:58:31.880307 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx8d9\" (UniqueName: \"kubernetes.io/projected/20de0297-9732-4a9a-a523-33c47d423398-kube-api-access-qx8d9\") pod \"community-operators-hg5mk\" (UID: \"20de0297-9732-4a9a-a523-33c47d423398\") " pod="openshift-marketplace/community-operators-hg5mk" Nov 25 15:58:31 crc kubenswrapper[4806]: I1125 15:58:31.881151 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20de0297-9732-4a9a-a523-33c47d423398-utilities\") pod \"community-operators-hg5mk\" (UID: \"20de0297-9732-4a9a-a523-33c47d423398\") " pod="openshift-marketplace/community-operators-hg5mk" Nov 25 15:58:31 crc kubenswrapper[4806]: I1125 15:58:31.881379 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20de0297-9732-4a9a-a523-33c47d423398-catalog-content\") pod \"community-operators-hg5mk\" (UID: \"20de0297-9732-4a9a-a523-33c47d423398\") " pod="openshift-marketplace/community-operators-hg5mk" Nov 25 15:58:31 crc kubenswrapper[4806]: I1125 15:58:31.897278 4806 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qx8d9\" (UniqueName: \"kubernetes.io/projected/20de0297-9732-4a9a-a523-33c47d423398-kube-api-access-qx8d9\") pod \"community-operators-hg5mk\" (UID: \"20de0297-9732-4a9a-a523-33c47d423398\") " pod="openshift-marketplace/community-operators-hg5mk" Nov 25 15:58:32 crc kubenswrapper[4806]: I1125 15:58:32.028128 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hg5mk" Nov 25 15:58:32 crc kubenswrapper[4806]: I1125 15:58:32.102788 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a51b7f7-73f5-464a-8516-2880179cd121" path="/var/lib/kubelet/pods/8a51b7f7-73f5-464a-8516-2880179cd121/volumes" Nov 25 15:58:32 crc kubenswrapper[4806]: I1125 15:58:32.518823 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hg5mk"] Nov 25 15:58:33 crc kubenswrapper[4806]: I1125 15:58:33.338302 4806 generic.go:334] "Generic (PLEG): container finished" podID="20de0297-9732-4a9a-a523-33c47d423398" containerID="107377d16a326914ee075471c0551340c8154cbff76279a0c96f1b2af3fd7b51" exitCode=0 Nov 25 15:58:33 crc kubenswrapper[4806]: I1125 15:58:33.338418 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hg5mk" event={"ID":"20de0297-9732-4a9a-a523-33c47d423398","Type":"ContainerDied","Data":"107377d16a326914ee075471c0551340c8154cbff76279a0c96f1b2af3fd7b51"} Nov 25 15:58:33 crc kubenswrapper[4806]: I1125 15:58:33.339100 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hg5mk" event={"ID":"20de0297-9732-4a9a-a523-33c47d423398","Type":"ContainerStarted","Data":"4dabbd15a0df2d6ee8f94e5682f74c4c27273da7c7a5941671da1b8f75928eef"} Nov 25 15:58:36 crc kubenswrapper[4806]: I1125 15:58:36.089585 4806 scope.go:117] "RemoveContainer" containerID="05b6ee2a51d7372338008820486d422e9a505c74a3f4cee7ce748e653b9075de" Nov 25 15:58:36 crc kubenswrapper[4806]: E1125 15:58:36.090467 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:58:36 crc kubenswrapper[4806]: I1125 15:58:36.371605 4806 generic.go:334] "Generic (PLEG): container finished" podID="20de0297-9732-4a9a-a523-33c47d423398" containerID="157c4c11278dad4542dc3ec8a059d41187619bc08e1717dde7eb77bb8194569c" exitCode=0 Nov 25 15:58:36 crc kubenswrapper[4806]: I1125 15:58:36.371648 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hg5mk" event={"ID":"20de0297-9732-4a9a-a523-33c47d423398","Type":"ContainerDied","Data":"157c4c11278dad4542dc3ec8a059d41187619bc08e1717dde7eb77bb8194569c"} Nov 25 15:58:38 crc kubenswrapper[4806]: I1125 15:58:38.400858 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hg5mk" event={"ID":"20de0297-9732-4a9a-a523-33c47d423398","Type":"ContainerStarted","Data":"d2ede6d1c80528333a2feabfa27217a8f1bcbf998a80a348810100d5a54acf08"} Nov 25 15:58:38 crc kubenswrapper[4806]: I1125 15:58:38.423973 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-hg5mk" podStartSLOduration=2.934504568 podStartE2EDuration="7.423956635s" podCreationTimestamp="2025-11-25 15:58:31 +0000 UTC" firstStartedPulling="2025-11-25 15:58:33.340422378 +0000 UTC m=+3945.992564789" lastFinishedPulling="2025-11-25 15:58:37.829874435 +0000 UTC m=+3950.482016856" observedRunningTime="2025-11-25 15:58:38.421528606 +0000 UTC m=+3951.073671017" watchObservedRunningTime="2025-11-25 15:58:38.423956635 +0000 UTC m=+3951.076099036" Nov 25 15:58:42 crc kubenswrapper[4806]: I1125 15:58:42.028262 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hg5mk" Nov 25 15:58:42 crc kubenswrapper[4806]: I1125 15:58:42.028915 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hg5mk" Nov 25 15:58:42 crc kubenswrapper[4806]: I1125 15:58:42.085495 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hg5mk" Nov 25 15:58:47 crc kubenswrapper[4806]: I1125 15:58:47.089328 4806 scope.go:117] "RemoveContainer" containerID="05b6ee2a51d7372338008820486d422e9a505c74a3f4cee7ce748e653b9075de" Nov 25 15:58:47 crc kubenswrapper[4806]: E1125 15:58:47.089943 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:58:52 crc kubenswrapper[4806]: I1125 15:58:52.118979 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hg5mk" Nov 25 15:59:01 crc kubenswrapper[4806]: I1125 15:59:01.089378 4806 scope.go:117] "RemoveContainer" containerID="05b6ee2a51d7372338008820486d422e9a505c74a3f4cee7ce748e653b9075de" Nov 25 15:59:01 crc kubenswrapper[4806]: E1125 15:59:01.090157 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:59:01 crc kubenswrapper[4806]: I1125 15:59:01.615179 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hg5mk"] Nov 25 15:59:01 crc kubenswrapper[4806]: I1125 15:59:01.615441 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hg5mk" podUID="20de0297-9732-4a9a-a523-33c47d423398" containerName="registry-server" containerID="cri-o://d2ede6d1c80528333a2feabfa27217a8f1bcbf998a80a348810100d5a54acf08" gracePeriod=2 Nov 25 15:59:02 crc kubenswrapper[4806]: E1125 15:59:02.029817 4806 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d2ede6d1c80528333a2feabfa27217a8f1bcbf998a80a348810100d5a54acf08 is running failed: container process not found" 
containerID="d2ede6d1c80528333a2feabfa27217a8f1bcbf998a80a348810100d5a54acf08" cmd=["grpc_health_probe","-addr=:50051"] Nov 25 15:59:02 crc kubenswrapper[4806]: E1125 15:59:02.030843 4806 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d2ede6d1c80528333a2feabfa27217a8f1bcbf998a80a348810100d5a54acf08 is running failed: container process not found" containerID="d2ede6d1c80528333a2feabfa27217a8f1bcbf998a80a348810100d5a54acf08" cmd=["grpc_health_probe","-addr=:50051"] Nov 25 15:59:02 crc kubenswrapper[4806]: E1125 15:59:02.031551 4806 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d2ede6d1c80528333a2feabfa27217a8f1bcbf998a80a348810100d5a54acf08 is running failed: container process not found" containerID="d2ede6d1c80528333a2feabfa27217a8f1bcbf998a80a348810100d5a54acf08" cmd=["grpc_health_probe","-addr=:50051"] Nov 25 15:59:02 crc kubenswrapper[4806]: E1125 15:59:02.031594 4806 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d2ede6d1c80528333a2feabfa27217a8f1bcbf998a80a348810100d5a54acf08 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-hg5mk" podUID="20de0297-9732-4a9a-a523-33c47d423398" containerName="registry-server" Nov 25 15:59:02 crc kubenswrapper[4806]: I1125 15:59:02.374847 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hg5mk" Nov 25 15:59:02 crc kubenswrapper[4806]: I1125 15:59:02.392395 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qx8d9\" (UniqueName: \"kubernetes.io/projected/20de0297-9732-4a9a-a523-33c47d423398-kube-api-access-qx8d9\") pod \"20de0297-9732-4a9a-a523-33c47d423398\" (UID: \"20de0297-9732-4a9a-a523-33c47d423398\") " Nov 25 15:59:02 crc kubenswrapper[4806]: I1125 15:59:02.392609 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20de0297-9732-4a9a-a523-33c47d423398-catalog-content\") pod \"20de0297-9732-4a9a-a523-33c47d423398\" (UID: \"20de0297-9732-4a9a-a523-33c47d423398\") " Nov 25 15:59:02 crc kubenswrapper[4806]: I1125 15:59:02.392870 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20de0297-9732-4a9a-a523-33c47d423398-utilities\") pod \"20de0297-9732-4a9a-a523-33c47d423398\" (UID: \"20de0297-9732-4a9a-a523-33c47d423398\") " Nov 25 15:59:02 crc kubenswrapper[4806]: I1125 15:59:02.393433 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20de0297-9732-4a9a-a523-33c47d423398-utilities" (OuterVolumeSpecName: "utilities") pod "20de0297-9732-4a9a-a523-33c47d423398" (UID: "20de0297-9732-4a9a-a523-33c47d423398"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:59:02 crc kubenswrapper[4806]: I1125 15:59:02.398789 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20de0297-9732-4a9a-a523-33c47d423398-kube-api-access-qx8d9" (OuterVolumeSpecName: "kube-api-access-qx8d9") pod "20de0297-9732-4a9a-a523-33c47d423398" (UID: "20de0297-9732-4a9a-a523-33c47d423398"). 
InnerVolumeSpecName "kube-api-access-qx8d9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:59:02 crc kubenswrapper[4806]: I1125 15:59:02.461786 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20de0297-9732-4a9a-a523-33c47d423398-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "20de0297-9732-4a9a-a523-33c47d423398" (UID: "20de0297-9732-4a9a-a523-33c47d423398"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:59:02 crc kubenswrapper[4806]: I1125 15:59:02.495544 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20de0297-9732-4a9a-a523-33c47d423398-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 15:59:02 crc kubenswrapper[4806]: I1125 15:59:02.495582 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qx8d9\" (UniqueName: \"kubernetes.io/projected/20de0297-9732-4a9a-a523-33c47d423398-kube-api-access-qx8d9\") on node \"crc\" DevicePath \"\"" Nov 25 15:59:02 crc kubenswrapper[4806]: I1125 15:59:02.495591 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20de0297-9732-4a9a-a523-33c47d423398-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 15:59:02 crc kubenswrapper[4806]: I1125 15:59:02.640560 4806 generic.go:334] "Generic (PLEG): container finished" podID="20de0297-9732-4a9a-a523-33c47d423398" containerID="d2ede6d1c80528333a2feabfa27217a8f1bcbf998a80a348810100d5a54acf08" exitCode=0 Nov 25 15:59:02 crc kubenswrapper[4806]: I1125 15:59:02.640610 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hg5mk" event={"ID":"20de0297-9732-4a9a-a523-33c47d423398","Type":"ContainerDied","Data":"d2ede6d1c80528333a2feabfa27217a8f1bcbf998a80a348810100d5a54acf08"} Nov 25 15:59:02 crc kubenswrapper[4806]: I1125 15:59:02.640649 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hg5mk" event={"ID":"20de0297-9732-4a9a-a523-33c47d423398","Type":"ContainerDied","Data":"4dabbd15a0df2d6ee8f94e5682f74c4c27273da7c7a5941671da1b8f75928eef"} Nov 25 15:59:02 crc kubenswrapper[4806]: I1125 15:59:02.640670 4806 scope.go:117] "RemoveContainer" containerID="d2ede6d1c80528333a2feabfa27217a8f1bcbf998a80a348810100d5a54acf08" Nov 25 15:59:02 crc kubenswrapper[4806]: I1125 15:59:02.640993 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hg5mk" Nov 25 15:59:02 crc kubenswrapper[4806]: I1125 15:59:02.692074 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hg5mk"] Nov 25 15:59:02 crc kubenswrapper[4806]: I1125 15:59:02.710871 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hg5mk"] Nov 25 15:59:02 crc kubenswrapper[4806]: I1125 15:59:02.717171 4806 scope.go:117] "RemoveContainer" containerID="157c4c11278dad4542dc3ec8a059d41187619bc08e1717dde7eb77bb8194569c" Nov 25 15:59:02 crc kubenswrapper[4806]: I1125 15:59:02.783047 4806 scope.go:117] "RemoveContainer" containerID="107377d16a326914ee075471c0551340c8154cbff76279a0c96f1b2af3fd7b51" Nov 25 15:59:02 crc kubenswrapper[4806]: I1125 15:59:02.827377 4806 scope.go:117] "RemoveContainer" containerID="d2ede6d1c80528333a2feabfa27217a8f1bcbf998a80a348810100d5a54acf08" Nov 25 15:59:02 crc kubenswrapper[4806]: E1125 15:59:02.827891 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2ede6d1c80528333a2feabfa27217a8f1bcbf998a80a348810100d5a54acf08\": container with ID starting with d2ede6d1c80528333a2feabfa27217a8f1bcbf998a80a348810100d5a54acf08 not found: ID does not exist" containerID="d2ede6d1c80528333a2feabfa27217a8f1bcbf998a80a348810100d5a54acf08" Nov 25 15:59:02 crc kubenswrapper[4806]: I1125 15:59:02.827991 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2ede6d1c80528333a2feabfa27217a8f1bcbf998a80a348810100d5a54acf08"} err="failed to get container status \"d2ede6d1c80528333a2feabfa27217a8f1bcbf998a80a348810100d5a54acf08\": rpc error: code = NotFound desc = could not find container \"d2ede6d1c80528333a2feabfa27217a8f1bcbf998a80a348810100d5a54acf08\": container with ID starting with d2ede6d1c80528333a2feabfa27217a8f1bcbf998a80a348810100d5a54acf08 not found: ID does not exist" Nov 25 15:59:02 crc kubenswrapper[4806]: I1125 15:59:02.828020 4806 scope.go:117] "RemoveContainer" containerID="157c4c11278dad4542dc3ec8a059d41187619bc08e1717dde7eb77bb8194569c" Nov 25 15:59:02 crc kubenswrapper[4806]: E1125 15:59:02.828544 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"157c4c11278dad4542dc3ec8a059d41187619bc08e1717dde7eb77bb8194569c\": container with ID starting with 157c4c11278dad4542dc3ec8a059d41187619bc08e1717dde7eb77bb8194569c not found: ID does not exist" containerID="157c4c11278dad4542dc3ec8a059d41187619bc08e1717dde7eb77bb8194569c" Nov 25 15:59:02 crc kubenswrapper[4806]: I1125 15:59:02.828603 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"157c4c11278dad4542dc3ec8a059d41187619bc08e1717dde7eb77bb8194569c"} err="failed to get container status \"157c4c11278dad4542dc3ec8a059d41187619bc08e1717dde7eb77bb8194569c\": rpc error: code = NotFound desc = could not find container \"157c4c11278dad4542dc3ec8a059d41187619bc08e1717dde7eb77bb8194569c\": container with ID starting with 157c4c11278dad4542dc3ec8a059d41187619bc08e1717dde7eb77bb8194569c not found: ID does not exist" Nov 25 15:59:02 crc kubenswrapper[4806]: I1125 15:59:02.828639 4806 scope.go:117] "RemoveContainer" containerID="107377d16a326914ee075471c0551340c8154cbff76279a0c96f1b2af3fd7b51" Nov 25 15:59:02 crc kubenswrapper[4806]: E1125 15:59:02.829074 4806 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"107377d16a326914ee075471c0551340c8154cbff76279a0c96f1b2af3fd7b51\": container with ID starting with 107377d16a326914ee075471c0551340c8154cbff76279a0c96f1b2af3fd7b51 not found: ID does not exist" containerID="107377d16a326914ee075471c0551340c8154cbff76279a0c96f1b2af3fd7b51" Nov 25 15:59:02 crc kubenswrapper[4806]: I1125 15:59:02.829134 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"107377d16a326914ee075471c0551340c8154cbff76279a0c96f1b2af3fd7b51"} err="failed to get container status \"107377d16a326914ee075471c0551340c8154cbff76279a0c96f1b2af3fd7b51\": rpc error: code = NotFound desc = could not find container \"107377d16a326914ee075471c0551340c8154cbff76279a0c96f1b2af3fd7b51\": container with ID starting with 107377d16a326914ee075471c0551340c8154cbff76279a0c96f1b2af3fd7b51 not found: ID does not exist" Nov 25 15:59:04 crc kubenswrapper[4806]: I1125 15:59:04.102077 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20de0297-9732-4a9a-a523-33c47d423398" path="/var/lib/kubelet/pods/20de0297-9732-4a9a-a523-33c47d423398/volumes" Nov 25 15:59:12 crc kubenswrapper[4806]: I1125 15:59:12.090008 4806 scope.go:117] "RemoveContainer" containerID="05b6ee2a51d7372338008820486d422e9a505c74a3f4cee7ce748e653b9075de" Nov 25 15:59:12 crc kubenswrapper[4806]: E1125 15:59:12.090837 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:59:21 crc kubenswrapper[4806]: I1125 15:59:21.238897 4806 scope.go:117] "RemoveContainer" containerID="3a4a0ad35eb618fd1588fb328ed113501aa7d824216014f6f3bf930331b2ce5b" Nov 25 15:59:24 crc kubenswrapper[4806]: I1125 15:59:24.090935 4806 scope.go:117] "RemoveContainer" containerID="05b6ee2a51d7372338008820486d422e9a505c74a3f4cee7ce748e653b9075de" Nov 25 15:59:24 crc kubenswrapper[4806]: E1125 15:59:24.091790 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:59:37 crc kubenswrapper[4806]: I1125 15:59:37.089873 4806 scope.go:117] "RemoveContainer" containerID="05b6ee2a51d7372338008820486d422e9a505c74a3f4cee7ce748e653b9075de" Nov 25 15:59:37 crc kubenswrapper[4806]: E1125 15:59:37.090599 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:59:48 crc kubenswrapper[4806]: I1125 15:59:48.099207 4806 scope.go:117] "RemoveContainer" 
containerID="05b6ee2a51d7372338008820486d422e9a505c74a3f4cee7ce748e653b9075de" Nov 25 15:59:48 crc kubenswrapper[4806]: E1125 15:59:48.100134 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 15:59:59 crc kubenswrapper[4806]: I1125 15:59:59.089394 4806 scope.go:117] "RemoveContainer" containerID="05b6ee2a51d7372338008820486d422e9a505c74a3f4cee7ce748e653b9075de" Nov 25 15:59:59 crc kubenswrapper[4806]: E1125 15:59:59.091246 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 16:00:00 crc kubenswrapper[4806]: I1125 16:00:00.192860 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401440-ngrhw"] Nov 25 16:00:00 crc kubenswrapper[4806]: E1125 16:00:00.193810 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20de0297-9732-4a9a-a523-33c47d423398" containerName="registry-server" Nov 25 16:00:00 crc kubenswrapper[4806]: I1125 16:00:00.193831 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="20de0297-9732-4a9a-a523-33c47d423398" containerName="registry-server" Nov 25 16:00:00 crc kubenswrapper[4806]: E1125 16:00:00.193857 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20de0297-9732-4a9a-a523-33c47d423398" containerName="extract-content" Nov 25 16:00:00 crc kubenswrapper[4806]: I1125 16:00:00.193865 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="20de0297-9732-4a9a-a523-33c47d423398" containerName="extract-content" Nov 25 16:00:00 crc kubenswrapper[4806]: E1125 16:00:00.193897 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20de0297-9732-4a9a-a523-33c47d423398" containerName="extract-utilities" Nov 25 16:00:00 crc kubenswrapper[4806]: I1125 16:00:00.193907 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="20de0297-9732-4a9a-a523-33c47d423398" containerName="extract-utilities" Nov 25 16:00:00 crc kubenswrapper[4806]: I1125 16:00:00.194252 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="20de0297-9732-4a9a-a523-33c47d423398" containerName="registry-server" Nov 25 16:00:00 crc kubenswrapper[4806]: I1125 16:00:00.195228 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401440-ngrhw" Nov 25 16:00:00 crc kubenswrapper[4806]: I1125 16:00:00.198195 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 16:00:00 crc kubenswrapper[4806]: I1125 16:00:00.198622 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 16:00:00 crc kubenswrapper[4806]: I1125 16:00:00.211673 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401440-ngrhw"] Nov 25 16:00:00 crc kubenswrapper[4806]: I1125 16:00:00.324738 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1f1d2d73-1725-4eda-95ca-5ca3c0434eb1-config-volume\") pod \"collect-profiles-29401440-ngrhw\" (UID: \"1f1d2d73-1725-4eda-95ca-5ca3c0434eb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401440-ngrhw" Nov 25 16:00:00 crc kubenswrapper[4806]: I1125 16:00:00.324868 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b58bf\" (UniqueName: \"kubernetes.io/projected/1f1d2d73-1725-4eda-95ca-5ca3c0434eb1-kube-api-access-b58bf\") pod \"collect-profiles-29401440-ngrhw\" (UID: \"1f1d2d73-1725-4eda-95ca-5ca3c0434eb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401440-ngrhw" Nov 25 16:00:00 crc kubenswrapper[4806]: I1125 16:00:00.324926 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1f1d2d73-1725-4eda-95ca-5ca3c0434eb1-secret-volume\") pod \"collect-profiles-29401440-ngrhw\" (UID: \"1f1d2d73-1725-4eda-95ca-5ca3c0434eb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401440-ngrhw" Nov 25 16:00:00 crc kubenswrapper[4806]: I1125 16:00:00.427234 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1f1d2d73-1725-4eda-95ca-5ca3c0434eb1-config-volume\") pod \"collect-profiles-29401440-ngrhw\" (UID: \"1f1d2d73-1725-4eda-95ca-5ca3c0434eb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401440-ngrhw" Nov 25 16:00:00 crc kubenswrapper[4806]: I1125 16:00:00.427489 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b58bf\" (UniqueName: \"kubernetes.io/projected/1f1d2d73-1725-4eda-95ca-5ca3c0434eb1-kube-api-access-b58bf\") pod \"collect-profiles-29401440-ngrhw\" (UID: \"1f1d2d73-1725-4eda-95ca-5ca3c0434eb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401440-ngrhw" Nov 25 16:00:00 crc kubenswrapper[4806]: I1125 16:00:00.427592 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1f1d2d73-1725-4eda-95ca-5ca3c0434eb1-secret-volume\") pod \"collect-profiles-29401440-ngrhw\" (UID: \"1f1d2d73-1725-4eda-95ca-5ca3c0434eb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401440-ngrhw" Nov 25 16:00:00 crc kubenswrapper[4806]: I1125 16:00:00.428155 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1f1d2d73-1725-4eda-95ca-5ca3c0434eb1-config-volume\") pod 
\"collect-profiles-29401440-ngrhw\" (UID: \"1f1d2d73-1725-4eda-95ca-5ca3c0434eb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401440-ngrhw" Nov 25 16:00:00 crc kubenswrapper[4806]: I1125 16:00:00.450208 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b58bf\" (UniqueName: \"kubernetes.io/projected/1f1d2d73-1725-4eda-95ca-5ca3c0434eb1-kube-api-access-b58bf\") pod \"collect-profiles-29401440-ngrhw\" (UID: \"1f1d2d73-1725-4eda-95ca-5ca3c0434eb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401440-ngrhw" Nov 25 16:00:00 crc kubenswrapper[4806]: I1125 16:00:00.452016 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1f1d2d73-1725-4eda-95ca-5ca3c0434eb1-secret-volume\") pod \"collect-profiles-29401440-ngrhw\" (UID: \"1f1d2d73-1725-4eda-95ca-5ca3c0434eb1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401440-ngrhw" Nov 25 16:00:00 crc kubenswrapper[4806]: I1125 16:00:00.523016 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401440-ngrhw" Nov 25 16:00:01 crc kubenswrapper[4806]: I1125 16:00:01.043018 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401440-ngrhw"] Nov 25 16:00:01 crc kubenswrapper[4806]: I1125 16:00:01.277783 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401440-ngrhw" event={"ID":"1f1d2d73-1725-4eda-95ca-5ca3c0434eb1","Type":"ContainerStarted","Data":"159e8641fe2d63643e0a064793689ca99cc25dbbcb22cbdb366281cbf40ebf89"} Nov 25 16:00:02 crc kubenswrapper[4806]: I1125 16:00:02.290484 4806 generic.go:334] "Generic (PLEG): container finished" podID="1f1d2d73-1725-4eda-95ca-5ca3c0434eb1" containerID="88048d09771229517b9ccfc2f6af56bb56dfb5af557f78ddec8ac0c8483b49b1" exitCode=0 Nov 25 16:00:02 crc kubenswrapper[4806]: I1125 16:00:02.290783 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401440-ngrhw" event={"ID":"1f1d2d73-1725-4eda-95ca-5ca3c0434eb1","Type":"ContainerDied","Data":"88048d09771229517b9ccfc2f6af56bb56dfb5af557f78ddec8ac0c8483b49b1"} Nov 25 16:00:03 crc kubenswrapper[4806]: I1125 16:00:03.841142 4806 util.go:48] "No ready sandbox for pod can be found. 
Nov 25 16:00:04 crc kubenswrapper[4806]: I1125 16:00:04.002151 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b58bf\" (UniqueName: \"kubernetes.io/projected/1f1d2d73-1725-4eda-95ca-5ca3c0434eb1-kube-api-access-b58bf\") pod \"1f1d2d73-1725-4eda-95ca-5ca3c0434eb1\" (UID: \"1f1d2d73-1725-4eda-95ca-5ca3c0434eb1\") "
Nov 25 16:00:04 crc kubenswrapper[4806]: I1125 16:00:04.002621 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1f1d2d73-1725-4eda-95ca-5ca3c0434eb1-config-volume\") pod \"1f1d2d73-1725-4eda-95ca-5ca3c0434eb1\" (UID: \"1f1d2d73-1725-4eda-95ca-5ca3c0434eb1\") "
Nov 25 16:00:04 crc kubenswrapper[4806]: I1125 16:00:04.002790 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1f1d2d73-1725-4eda-95ca-5ca3c0434eb1-secret-volume\") pod \"1f1d2d73-1725-4eda-95ca-5ca3c0434eb1\" (UID: \"1f1d2d73-1725-4eda-95ca-5ca3c0434eb1\") "
Nov 25 16:00:04 crc kubenswrapper[4806]: I1125 16:00:04.003428 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f1d2d73-1725-4eda-95ca-5ca3c0434eb1-config-volume" (OuterVolumeSpecName: "config-volume") pod "1f1d2d73-1725-4eda-95ca-5ca3c0434eb1" (UID: "1f1d2d73-1725-4eda-95ca-5ca3c0434eb1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 16:00:04 crc kubenswrapper[4806]: I1125 16:00:04.009159 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f1d2d73-1725-4eda-95ca-5ca3c0434eb1-kube-api-access-b58bf" (OuterVolumeSpecName: "kube-api-access-b58bf") pod "1f1d2d73-1725-4eda-95ca-5ca3c0434eb1" (UID: "1f1d2d73-1725-4eda-95ca-5ca3c0434eb1"). InnerVolumeSpecName "kube-api-access-b58bf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 16:00:04 crc kubenswrapper[4806]: I1125 16:00:04.009491 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f1d2d73-1725-4eda-95ca-5ca3c0434eb1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1f1d2d73-1725-4eda-95ca-5ca3c0434eb1" (UID: "1f1d2d73-1725-4eda-95ca-5ca3c0434eb1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 16:00:04 crc kubenswrapper[4806]: I1125 16:00:04.105964 4806 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1f1d2d73-1725-4eda-95ca-5ca3c0434eb1-config-volume\") on node \"crc\" DevicePath \"\""
Nov 25 16:00:04 crc kubenswrapper[4806]: I1125 16:00:04.106013 4806 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1f1d2d73-1725-4eda-95ca-5ca3c0434eb1-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 25 16:00:04 crc kubenswrapper[4806]: I1125 16:00:04.106026 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b58bf\" (UniqueName: \"kubernetes.io/projected/1f1d2d73-1725-4eda-95ca-5ca3c0434eb1-kube-api-access-b58bf\") on node \"crc\" DevicePath \"\""
Nov 25 16:00:04 crc kubenswrapper[4806]: I1125 16:00:04.315312 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401440-ngrhw" event={"ID":"1f1d2d73-1725-4eda-95ca-5ca3c0434eb1","Type":"ContainerDied","Data":"159e8641fe2d63643e0a064793689ca99cc25dbbcb22cbdb366281cbf40ebf89"}
Nov 25 16:00:04 crc kubenswrapper[4806]: I1125 16:00:04.315445 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="159e8641fe2d63643e0a064793689ca99cc25dbbcb22cbdb366281cbf40ebf89"
Nov 25 16:00:04 crc kubenswrapper[4806]: I1125 16:00:04.315450 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401440-ngrhw"
Nov 25 16:00:04 crc kubenswrapper[4806]: I1125 16:00:04.960789 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401395-8j2s9"]
Nov 25 16:00:04 crc kubenswrapper[4806]: I1125 16:00:04.971742 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401395-8j2s9"]
Nov 25 16:00:06 crc kubenswrapper[4806]: I1125 16:00:06.108080 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98013fa5-ca9f-4800-a63d-be400f825cfa" path="/var/lib/kubelet/pods/98013fa5-ca9f-4800-a63d-be400f825cfa/volumes"
Nov 25 16:00:12 crc kubenswrapper[4806]: I1125 16:00:12.754656 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dht67/must-gather-lkxmb"]
Nov 25 16:00:12 crc kubenswrapper[4806]: E1125 16:00:12.757376 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f1d2d73-1725-4eda-95ca-5ca3c0434eb1" containerName="collect-profiles"
Nov 25 16:00:12 crc kubenswrapper[4806]: I1125 16:00:12.757459 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f1d2d73-1725-4eda-95ca-5ca3c0434eb1" containerName="collect-profiles"
Nov 25 16:00:12 crc kubenswrapper[4806]: I1125 16:00:12.757749 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f1d2d73-1725-4eda-95ca-5ca3c0434eb1" containerName="collect-profiles"
Nov 25 16:00:12 crc kubenswrapper[4806]: I1125 16:00:12.759031 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dht67/must-gather-lkxmb"
Nov 25 16:00:12 crc kubenswrapper[4806]: I1125 16:00:12.770594 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-dht67"/"openshift-service-ca.crt"
Nov 25 16:00:12 crc kubenswrapper[4806]: I1125 16:00:12.771312 4806 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-dht67"/"default-dockercfg-jv9n4"
Nov 25 16:00:12 crc kubenswrapper[4806]: I1125 16:00:12.771679 4806 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-dht67"/"kube-root-ca.crt"
Nov 25 16:00:12 crc kubenswrapper[4806]: I1125 16:00:12.779843 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-dht67/must-gather-lkxmb"]
Nov 25 16:00:12 crc kubenswrapper[4806]: I1125 16:00:12.812883 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc52b\" (UniqueName: \"kubernetes.io/projected/ad0d414a-29d2-46a5-9aa0-ff43c48fc8f9-kube-api-access-bc52b\") pod \"must-gather-lkxmb\" (UID: \"ad0d414a-29d2-46a5-9aa0-ff43c48fc8f9\") " pod="openshift-must-gather-dht67/must-gather-lkxmb"
Nov 25 16:00:12 crc kubenswrapper[4806]: I1125 16:00:12.812964 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ad0d414a-29d2-46a5-9aa0-ff43c48fc8f9-must-gather-output\") pod \"must-gather-lkxmb\" (UID: \"ad0d414a-29d2-46a5-9aa0-ff43c48fc8f9\") " pod="openshift-must-gather-dht67/must-gather-lkxmb"
Nov 25 16:00:12 crc kubenswrapper[4806]: I1125 16:00:12.914855 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc52b\" (UniqueName: \"kubernetes.io/projected/ad0d414a-29d2-46a5-9aa0-ff43c48fc8f9-kube-api-access-bc52b\") pod \"must-gather-lkxmb\" (UID: \"ad0d414a-29d2-46a5-9aa0-ff43c48fc8f9\") " pod="openshift-must-gather-dht67/must-gather-lkxmb"
Nov 25 16:00:12 crc kubenswrapper[4806]: I1125 16:00:12.914974 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ad0d414a-29d2-46a5-9aa0-ff43c48fc8f9-must-gather-output\") pod \"must-gather-lkxmb\" (UID: \"ad0d414a-29d2-46a5-9aa0-ff43c48fc8f9\") " pod="openshift-must-gather-dht67/must-gather-lkxmb"
Nov 25 16:00:12 crc kubenswrapper[4806]: I1125 16:00:12.915683 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ad0d414a-29d2-46a5-9aa0-ff43c48fc8f9-must-gather-output\") pod \"must-gather-lkxmb\" (UID: \"ad0d414a-29d2-46a5-9aa0-ff43c48fc8f9\") " pod="openshift-must-gather-dht67/must-gather-lkxmb"
Nov 25 16:00:12 crc kubenswrapper[4806]: I1125 16:00:12.943235 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc52b\" (UniqueName: \"kubernetes.io/projected/ad0d414a-29d2-46a5-9aa0-ff43c48fc8f9-kube-api-access-bc52b\") pod \"must-gather-lkxmb\" (UID: \"ad0d414a-29d2-46a5-9aa0-ff43c48fc8f9\") " pod="openshift-must-gather-dht67/must-gather-lkxmb"
Nov 25 16:00:13 crc kubenswrapper[4806]: I1125 16:00:13.104840 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dht67/must-gather-lkxmb"
Nov 25 16:00:13 crc kubenswrapper[4806]: I1125 16:00:13.741348 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-dht67/must-gather-lkxmb"]
Nov 25 16:00:14 crc kubenswrapper[4806]: I1125 16:00:14.090885 4806 scope.go:117] "RemoveContainer" containerID="05b6ee2a51d7372338008820486d422e9a505c74a3f4cee7ce748e653b9075de"
Nov 25 16:00:14 crc kubenswrapper[4806]: E1125 16:00:14.091463 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d"
Nov 25 16:00:14 crc kubenswrapper[4806]: I1125 16:00:14.431861 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dht67/must-gather-lkxmb" event={"ID":"ad0d414a-29d2-46a5-9aa0-ff43c48fc8f9","Type":"ContainerStarted","Data":"f0689a7bae9fd251661ba9826105e425724e7456dcd6a6fbfb324a69d3505cea"}
Nov 25 16:00:15 crc kubenswrapper[4806]: I1125 16:00:15.445497 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dht67/must-gather-lkxmb" event={"ID":"ad0d414a-29d2-46a5-9aa0-ff43c48fc8f9","Type":"ContainerStarted","Data":"4b5609394cb2e0f2a26202a182e69a8ae0e723c92dc40a087920e2c6fbff27a9"}
Nov 25 16:00:15 crc kubenswrapper[4806]: I1125 16:00:15.445804 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dht67/must-gather-lkxmb" event={"ID":"ad0d414a-29d2-46a5-9aa0-ff43c48fc8f9","Type":"ContainerStarted","Data":"907a776d8feaf2c2eed2794924a7902c3020b9168988909e8010cb3b75d3d60b"}
Nov 25 16:00:15 crc kubenswrapper[4806]: I1125 16:00:15.465129 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-dht67/must-gather-lkxmb" podStartSLOduration=3.465101793 podStartE2EDuration="3.465101793s" podCreationTimestamp="2025-11-25 16:00:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 16:00:15.458522976 +0000 UTC m=+4048.110665407" watchObservedRunningTime="2025-11-25 16:00:15.465101793 +0000 UTC m=+4048.117244214"
Nov 25 16:00:21 crc kubenswrapper[4806]: I1125 16:00:21.569072 4806 scope.go:117] "RemoveContainer" containerID="755da102609b5c0aee43723c833f6faab5d59a54dcb3ddd9c27202469632803d"
Nov 25 16:00:21 crc kubenswrapper[4806]: I1125 16:00:21.617856 4806 scope.go:117] "RemoveContainer" containerID="92b1560a0160a0d3cf2c66a71c51cada54dd161ae8d2df4d754c10b24706499f"
Nov 25 16:00:21 crc kubenswrapper[4806]: I1125 16:00:21.971285 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dht67/crc-debug-5vx8f"]
Nov 25 16:00:21 crc kubenswrapper[4806]: I1125 16:00:21.972947 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dht67/crc-debug-5vx8f"
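The pod_startup_latency_tracker.go:104 line above is the kubelet's pod-startup SLO measurement: podStartE2EDuration is creation-to-observed-running (3.465s for must-gather-lkxmb, with the zero-value pull timestamps meaning the images were already present). A small sketch, keyed to the exact fields seen in these lines, for pulling the durations out of a capture:

import re, sys

# "Observed pod startup duration" lines carry pod="<ns>/<name>" and
# podStartE2EDuration="<seconds>s".
PAT = re.compile(r'pod="([^"]+)".*?podStartE2EDuration="([0-9.]+)s"')

for line in sys.stdin:
    if "Observed pod startup duration" in line:
        m = PAT.search(line)
        if m:
            print(f"{m.group(1)}: {float(m.group(2)):.2f}s")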
Nov 25 16:00:22 crc kubenswrapper[4806]: I1125 16:00:22.063102 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/59f79a27-531f-443a-9b43-30b0533ea445-host\") pod \"crc-debug-5vx8f\" (UID: \"59f79a27-531f-443a-9b43-30b0533ea445\") " pod="openshift-must-gather-dht67/crc-debug-5vx8f"
Nov 25 16:00:22 crc kubenswrapper[4806]: I1125 16:00:22.063270 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvzlj\" (UniqueName: \"kubernetes.io/projected/59f79a27-531f-443a-9b43-30b0533ea445-kube-api-access-wvzlj\") pod \"crc-debug-5vx8f\" (UID: \"59f79a27-531f-443a-9b43-30b0533ea445\") " pod="openshift-must-gather-dht67/crc-debug-5vx8f"
Nov 25 16:00:22 crc kubenswrapper[4806]: I1125 16:00:22.165083 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvzlj\" (UniqueName: \"kubernetes.io/projected/59f79a27-531f-443a-9b43-30b0533ea445-kube-api-access-wvzlj\") pod \"crc-debug-5vx8f\" (UID: \"59f79a27-531f-443a-9b43-30b0533ea445\") " pod="openshift-must-gather-dht67/crc-debug-5vx8f"
Nov 25 16:00:22 crc kubenswrapper[4806]: I1125 16:00:22.165366 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/59f79a27-531f-443a-9b43-30b0533ea445-host\") pod \"crc-debug-5vx8f\" (UID: \"59f79a27-531f-443a-9b43-30b0533ea445\") " pod="openshift-must-gather-dht67/crc-debug-5vx8f"
Nov 25 16:00:22 crc kubenswrapper[4806]: I1125 16:00:22.165489 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/59f79a27-531f-443a-9b43-30b0533ea445-host\") pod \"crc-debug-5vx8f\" (UID: \"59f79a27-531f-443a-9b43-30b0533ea445\") " pod="openshift-must-gather-dht67/crc-debug-5vx8f"
Nov 25 16:00:22 crc kubenswrapper[4806]: I1125 16:00:22.190205 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvzlj\" (UniqueName: \"kubernetes.io/projected/59f79a27-531f-443a-9b43-30b0533ea445-kube-api-access-wvzlj\") pod \"crc-debug-5vx8f\" (UID: \"59f79a27-531f-443a-9b43-30b0533ea445\") " pod="openshift-must-gather-dht67/crc-debug-5vx8f"
Nov 25 16:00:22 crc kubenswrapper[4806]: I1125 16:00:22.291564 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dht67/crc-debug-5vx8f"
Nov 25 16:00:22 crc kubenswrapper[4806]: I1125 16:00:22.995989 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dht67/crc-debug-5vx8f" event={"ID":"59f79a27-531f-443a-9b43-30b0533ea445","Type":"ContainerStarted","Data":"49b36613bff23592ffcdb83a96fb6a6593c1e2f10158691b481e94eabc32d927"}
Nov 25 16:00:22 crc kubenswrapper[4806]: I1125 16:00:22.996572 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dht67/crc-debug-5vx8f" event={"ID":"59f79a27-531f-443a-9b43-30b0533ea445","Type":"ContainerStarted","Data":"a43e9fb04b2b77d034d83cf5f5c333d2e1c1c6b7d4c36e773844a7e8e8806414"}
Nov 25 16:00:23 crc kubenswrapper[4806]: I1125 16:00:23.028793 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-dht67/crc-debug-5vx8f" podStartSLOduration=2.028763998 podStartE2EDuration="2.028763998s" podCreationTimestamp="2025-11-25 16:00:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 16:00:23.011015374 +0000 UTC m=+4055.663157785" watchObservedRunningTime="2025-11-25 16:00:23.028763998 +0000 UTC m=+4055.680906409"
Nov 25 16:00:25 crc kubenswrapper[4806]: I1125 16:00:25.090402 4806 scope.go:117] "RemoveContainer" containerID="05b6ee2a51d7372338008820486d422e9a505c74a3f4cee7ce748e653b9075de"
Nov 25 16:00:25 crc kubenswrapper[4806]: E1125 16:00:25.092443 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d"
Nov 25 16:00:37 crc kubenswrapper[4806]: I1125 16:00:37.089144 4806 scope.go:117] "RemoveContainer" containerID="05b6ee2a51d7372338008820486d422e9a505c74a3f4cee7ce748e653b9075de"
Nov 25 16:00:37 crc kubenswrapper[4806]: E1125 16:00:37.089883 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d"
Nov 25 16:00:52 crc kubenswrapper[4806]: I1125 16:00:52.089056 4806 scope.go:117] "RemoveContainer" containerID="05b6ee2a51d7372338008820486d422e9a505c74a3f4cee7ce748e653b9075de"
Nov 25 16:00:52 crc kubenswrapper[4806]: E1125 16:00:52.089769 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d"
Nov 25 16:00:58 crc kubenswrapper[4806]: I1125 16:00:58.603792 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mhw8t"]
Nov 25 16:00:58 crc kubenswrapper[4806]: I1125 16:00:58.618497 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mhw8t"
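machine-config-daemon-kclf8 is in CrashLoopBackOff throughout this window: the kubelet's restart back-off has reached its 5m0s cap, so each sync attempt (16:00:14, 16:00:25, 16:00:37, 16:00:52, ...) re-logs the back-off error instead of starting the container. A sketch that tallies those errors per pod and container from a capture like this one:

import re, sys
from collections import Counter

# CrashLoopBackOff errors name the failing container and pod inside the
# quoted err= string: back-off <duration> restarting failed
# container=<name> pod=<pod>_<namespace>(<uid>).
PAT = re.compile(r'back-off (\S+) restarting failed container=(\S+) pod=(\S+?)_')

hits = Counter()
for line in sys.stdin:
    m = PAT.search(line)
    if m:
        backoff, container, pod = m.groups()
        hits[(pod, container, backoff)] += 1

for (pod, container, backoff), n in hits.items():
    print(f"{pod}/{container}: {n} sync attempts while in {backoff} back-off")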
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mhw8t" Nov 25 16:00:58 crc kubenswrapper[4806]: I1125 16:00:58.620975 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mhw8t"] Nov 25 16:00:58 crc kubenswrapper[4806]: I1125 16:00:58.741422 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9475j\" (UniqueName: \"kubernetes.io/projected/101c3770-ec02-44a6-8020-c77559ce5959-kube-api-access-9475j\") pod \"certified-operators-mhw8t\" (UID: \"101c3770-ec02-44a6-8020-c77559ce5959\") " pod="openshift-marketplace/certified-operators-mhw8t" Nov 25 16:00:58 crc kubenswrapper[4806]: I1125 16:00:58.741540 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/101c3770-ec02-44a6-8020-c77559ce5959-utilities\") pod \"certified-operators-mhw8t\" (UID: \"101c3770-ec02-44a6-8020-c77559ce5959\") " pod="openshift-marketplace/certified-operators-mhw8t" Nov 25 16:00:58 crc kubenswrapper[4806]: I1125 16:00:58.741573 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/101c3770-ec02-44a6-8020-c77559ce5959-catalog-content\") pod \"certified-operators-mhw8t\" (UID: \"101c3770-ec02-44a6-8020-c77559ce5959\") " pod="openshift-marketplace/certified-operators-mhw8t" Nov 25 16:00:58 crc kubenswrapper[4806]: I1125 16:00:58.843252 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/101c3770-ec02-44a6-8020-c77559ce5959-catalog-content\") pod \"certified-operators-mhw8t\" (UID: \"101c3770-ec02-44a6-8020-c77559ce5959\") " pod="openshift-marketplace/certified-operators-mhw8t" Nov 25 16:00:58 crc kubenswrapper[4806]: I1125 16:00:58.843516 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9475j\" (UniqueName: \"kubernetes.io/projected/101c3770-ec02-44a6-8020-c77559ce5959-kube-api-access-9475j\") pod \"certified-operators-mhw8t\" (UID: \"101c3770-ec02-44a6-8020-c77559ce5959\") " pod="openshift-marketplace/certified-operators-mhw8t" Nov 25 16:00:58 crc kubenswrapper[4806]: I1125 16:00:58.843611 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/101c3770-ec02-44a6-8020-c77559ce5959-utilities\") pod \"certified-operators-mhw8t\" (UID: \"101c3770-ec02-44a6-8020-c77559ce5959\") " pod="openshift-marketplace/certified-operators-mhw8t" Nov 25 16:00:58 crc kubenswrapper[4806]: I1125 16:00:58.844124 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/101c3770-ec02-44a6-8020-c77559ce5959-utilities\") pod \"certified-operators-mhw8t\" (UID: \"101c3770-ec02-44a6-8020-c77559ce5959\") " pod="openshift-marketplace/certified-operators-mhw8t" Nov 25 16:00:58 crc kubenswrapper[4806]: I1125 16:00:58.844403 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/101c3770-ec02-44a6-8020-c77559ce5959-catalog-content\") pod \"certified-operators-mhw8t\" (UID: \"101c3770-ec02-44a6-8020-c77559ce5959\") " pod="openshift-marketplace/certified-operators-mhw8t" Nov 25 16:00:58 crc kubenswrapper[4806]: I1125 
16:00:58.868214 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9475j\" (UniqueName: \"kubernetes.io/projected/101c3770-ec02-44a6-8020-c77559ce5959-kube-api-access-9475j\") pod \"certified-operators-mhw8t\" (UID: \"101c3770-ec02-44a6-8020-c77559ce5959\") " pod="openshift-marketplace/certified-operators-mhw8t" Nov 25 16:00:58 crc kubenswrapper[4806]: I1125 16:00:58.952644 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mhw8t" Nov 25 16:00:59 crc kubenswrapper[4806]: I1125 16:00:59.541586 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mhw8t"] Nov 25 16:01:00 crc kubenswrapper[4806]: I1125 16:01:00.164571 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29401441-wg2wq"] Nov 25 16:01:00 crc kubenswrapper[4806]: I1125 16:01:00.166784 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29401441-wg2wq" Nov 25 16:01:00 crc kubenswrapper[4806]: I1125 16:01:00.177024 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29401441-wg2wq"] Nov 25 16:01:00 crc kubenswrapper[4806]: I1125 16:01:00.285396 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e7a91b74-ad99-4159-9bea-374d0734af57-fernet-keys\") pod \"keystone-cron-29401441-wg2wq\" (UID: \"e7a91b74-ad99-4159-9bea-374d0734af57\") " pod="openstack/keystone-cron-29401441-wg2wq" Nov 25 16:01:00 crc kubenswrapper[4806]: I1125 16:01:00.285446 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7a91b74-ad99-4159-9bea-374d0734af57-config-data\") pod \"keystone-cron-29401441-wg2wq\" (UID: \"e7a91b74-ad99-4159-9bea-374d0734af57\") " pod="openstack/keystone-cron-29401441-wg2wq" Nov 25 16:01:00 crc kubenswrapper[4806]: I1125 16:01:00.285693 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7a91b74-ad99-4159-9bea-374d0734af57-combined-ca-bundle\") pod \"keystone-cron-29401441-wg2wq\" (UID: \"e7a91b74-ad99-4159-9bea-374d0734af57\") " pod="openstack/keystone-cron-29401441-wg2wq" Nov 25 16:01:00 crc kubenswrapper[4806]: I1125 16:01:00.286007 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtsm6\" (UniqueName: \"kubernetes.io/projected/e7a91b74-ad99-4159-9bea-374d0734af57-kube-api-access-gtsm6\") pod \"keystone-cron-29401441-wg2wq\" (UID: \"e7a91b74-ad99-4159-9bea-374d0734af57\") " pod="openstack/keystone-cron-29401441-wg2wq" Nov 25 16:01:00 crc kubenswrapper[4806]: I1125 16:01:00.387722 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7a91b74-ad99-4159-9bea-374d0734af57-combined-ca-bundle\") pod \"keystone-cron-29401441-wg2wq\" (UID: \"e7a91b74-ad99-4159-9bea-374d0734af57\") " pod="openstack/keystone-cron-29401441-wg2wq" Nov 25 16:01:00 crc kubenswrapper[4806]: I1125 16:01:00.387911 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtsm6\" (UniqueName: \"kubernetes.io/projected/e7a91b74-ad99-4159-9bea-374d0734af57-kube-api-access-gtsm6\") pod 
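The Job suffixes in this log are decodable: the CronJob controller names each Job <cronjob-name>-<scheduled time in minutes since the Unix epoch>, so 29401440 minutes x 60 = 1764086400 s = 2025-11-25 16:00:00 UTC, matching the moment collect-profiles-29401440-ngrhw was synced above, and 29401441 is the 16:01:00 tick that produced keystone-cron-29401441-wg2wq. A quick check:

from datetime import datetime, timezone

# Decode the <minutes-since-epoch> suffix of the Job names seen in this log.
for name in ("collect-profiles-29401440", "keystone-cron-29401441"):
    minutes = int(name.rsplit("-", 1)[1])
    when = datetime.fromtimestamp(minutes * 60, tz=timezone.utc)
    print(f"{name}: scheduled for {when:%Y-%m-%d %H:%M} UTC")

This prints 2025-11-25 16:00 UTC and 2025-11-25 16:01 UTC respectively.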
\"keystone-cron-29401441-wg2wq\" (UID: \"e7a91b74-ad99-4159-9bea-374d0734af57\") " pod="openstack/keystone-cron-29401441-wg2wq" Nov 25 16:01:00 crc kubenswrapper[4806]: I1125 16:01:00.388080 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e7a91b74-ad99-4159-9bea-374d0734af57-fernet-keys\") pod \"keystone-cron-29401441-wg2wq\" (UID: \"e7a91b74-ad99-4159-9bea-374d0734af57\") " pod="openstack/keystone-cron-29401441-wg2wq" Nov 25 16:01:00 crc kubenswrapper[4806]: I1125 16:01:00.388114 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7a91b74-ad99-4159-9bea-374d0734af57-config-data\") pod \"keystone-cron-29401441-wg2wq\" (UID: \"e7a91b74-ad99-4159-9bea-374d0734af57\") " pod="openstack/keystone-cron-29401441-wg2wq" Nov 25 16:01:00 crc kubenswrapper[4806]: I1125 16:01:00.395980 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7a91b74-ad99-4159-9bea-374d0734af57-config-data\") pod \"keystone-cron-29401441-wg2wq\" (UID: \"e7a91b74-ad99-4159-9bea-374d0734af57\") " pod="openstack/keystone-cron-29401441-wg2wq" Nov 25 16:01:00 crc kubenswrapper[4806]: I1125 16:01:00.403139 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e7a91b74-ad99-4159-9bea-374d0734af57-fernet-keys\") pod \"keystone-cron-29401441-wg2wq\" (UID: \"e7a91b74-ad99-4159-9bea-374d0734af57\") " pod="openstack/keystone-cron-29401441-wg2wq" Nov 25 16:01:00 crc kubenswrapper[4806]: I1125 16:01:00.408431 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7a91b74-ad99-4159-9bea-374d0734af57-combined-ca-bundle\") pod \"keystone-cron-29401441-wg2wq\" (UID: \"e7a91b74-ad99-4159-9bea-374d0734af57\") " pod="openstack/keystone-cron-29401441-wg2wq" Nov 25 16:01:00 crc kubenswrapper[4806]: I1125 16:01:00.410130 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtsm6\" (UniqueName: \"kubernetes.io/projected/e7a91b74-ad99-4159-9bea-374d0734af57-kube-api-access-gtsm6\") pod \"keystone-cron-29401441-wg2wq\" (UID: \"e7a91b74-ad99-4159-9bea-374d0734af57\") " pod="openstack/keystone-cron-29401441-wg2wq" Nov 25 16:01:00 crc kubenswrapper[4806]: I1125 16:01:00.436263 4806 generic.go:334] "Generic (PLEG): container finished" podID="101c3770-ec02-44a6-8020-c77559ce5959" containerID="a3a06bb488896c36762f2474906a381e0205f481ace83ef10df878c05c6c15b7" exitCode=0 Nov 25 16:01:00 crc kubenswrapper[4806]: I1125 16:01:00.436339 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhw8t" event={"ID":"101c3770-ec02-44a6-8020-c77559ce5959","Type":"ContainerDied","Data":"a3a06bb488896c36762f2474906a381e0205f481ace83ef10df878c05c6c15b7"} Nov 25 16:01:00 crc kubenswrapper[4806]: I1125 16:01:00.436379 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhw8t" event={"ID":"101c3770-ec02-44a6-8020-c77559ce5959","Type":"ContainerStarted","Data":"86228036ee8b0a3f3b01b11b4ad63cb6094b030c38e361ebcc85828602aff7ff"} Nov 25 16:01:00 crc kubenswrapper[4806]: I1125 16:01:00.487632 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29401441-wg2wq" Nov 25 16:01:01 crc kubenswrapper[4806]: I1125 16:01:01.086307 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29401441-wg2wq"] Nov 25 16:01:01 crc kubenswrapper[4806]: I1125 16:01:01.453148 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29401441-wg2wq" event={"ID":"e7a91b74-ad99-4159-9bea-374d0734af57","Type":"ContainerStarted","Data":"a4ec5036a73f052a6aa1dfaaa64e96b46572dd61228047620b16e67e03f9f252"} Nov 25 16:01:01 crc kubenswrapper[4806]: I1125 16:01:01.453400 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29401441-wg2wq" event={"ID":"e7a91b74-ad99-4159-9bea-374d0734af57","Type":"ContainerStarted","Data":"397a7c6803ec89bb504ac212699880662116e6f68a1490f607acc208df53c6dc"} Nov 25 16:01:02 crc kubenswrapper[4806]: I1125 16:01:02.463544 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhw8t" event={"ID":"101c3770-ec02-44a6-8020-c77559ce5959","Type":"ContainerStarted","Data":"83919657ad96dca2cadc38bc2d1d16e584240c49c9a1ee86eb7413aaada0c53c"} Nov 25 16:01:02 crc kubenswrapper[4806]: I1125 16:01:02.521616 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29401441-wg2wq" podStartSLOduration=2.521598706 podStartE2EDuration="2.521598706s" podCreationTimestamp="2025-11-25 16:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 16:01:02.515071271 +0000 UTC m=+4095.167213682" watchObservedRunningTime="2025-11-25 16:01:02.521598706 +0000 UTC m=+4095.173741107" Nov 25 16:01:04 crc kubenswrapper[4806]: I1125 16:01:04.483915 4806 generic.go:334] "Generic (PLEG): container finished" podID="59f79a27-531f-443a-9b43-30b0533ea445" containerID="49b36613bff23592ffcdb83a96fb6a6593c1e2f10158691b481e94eabc32d927" exitCode=0 Nov 25 16:01:04 crc kubenswrapper[4806]: I1125 16:01:04.484022 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dht67/crc-debug-5vx8f" event={"ID":"59f79a27-531f-443a-9b43-30b0533ea445","Type":"ContainerDied","Data":"49b36613bff23592ffcdb83a96fb6a6593c1e2f10158691b481e94eabc32d927"} Nov 25 16:01:05 crc kubenswrapper[4806]: I1125 16:01:05.497716 4806 generic.go:334] "Generic (PLEG): container finished" podID="101c3770-ec02-44a6-8020-c77559ce5959" containerID="83919657ad96dca2cadc38bc2d1d16e584240c49c9a1ee86eb7413aaada0c53c" exitCode=0 Nov 25 16:01:05 crc kubenswrapper[4806]: I1125 16:01:05.498476 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhw8t" event={"ID":"101c3770-ec02-44a6-8020-c77559ce5959","Type":"ContainerDied","Data":"83919657ad96dca2cadc38bc2d1d16e584240c49c9a1ee86eb7413aaada0c53c"} Nov 25 16:01:05 crc kubenswrapper[4806]: I1125 16:01:05.656291 4806 util.go:48] "No ready sandbox for pod can be found. 
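The generic.go:334 "container finished" lines above carry each container's exit code; every short-lived container in this window (the OLM job, the catalog extract steps, the crc-debug shells) exits 0, so the ContainerDied events are normal completions rather than failures. A sketch that would surface any non-zero exits in a capture like this:

import re, sys

# "Generic (PLEG): container finished" lines include containerID and exitCode.
PAT = re.compile(r'containerID="([0-9a-f]{64})" exitCode=(\d+)')

for line in sys.stdin:
    if "container finished" in line:
        m = PAT.search(line)
        if m and m.group(2) != "0":
            print(f"non-zero exit {m.group(2)}: container {m.group(1)[:12]}")

On this excerpt it prints nothing, which is the expected healthy result.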
Nov 25 16:01:05 crc kubenswrapper[4806]: I1125 16:01:05.700929 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvzlj\" (UniqueName: \"kubernetes.io/projected/59f79a27-531f-443a-9b43-30b0533ea445-kube-api-access-wvzlj\") pod \"59f79a27-531f-443a-9b43-30b0533ea445\" (UID: \"59f79a27-531f-443a-9b43-30b0533ea445\") "
Nov 25 16:01:05 crc kubenswrapper[4806]: I1125 16:01:05.701618 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/59f79a27-531f-443a-9b43-30b0533ea445-host\") pod \"59f79a27-531f-443a-9b43-30b0533ea445\" (UID: \"59f79a27-531f-443a-9b43-30b0533ea445\") "
Nov 25 16:01:05 crc kubenswrapper[4806]: I1125 16:01:05.701682 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59f79a27-531f-443a-9b43-30b0533ea445-host" (OuterVolumeSpecName: "host") pod "59f79a27-531f-443a-9b43-30b0533ea445" (UID: "59f79a27-531f-443a-9b43-30b0533ea445"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 25 16:01:05 crc kubenswrapper[4806]: I1125 16:01:05.702276 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dht67/crc-debug-5vx8f"]
Nov 25 16:01:05 crc kubenswrapper[4806]: I1125 16:01:05.702765 4806 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/59f79a27-531f-443a-9b43-30b0533ea445-host\") on node \"crc\" DevicePath \"\""
Nov 25 16:01:05 crc kubenswrapper[4806]: I1125 16:01:05.706342 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59f79a27-531f-443a-9b43-30b0533ea445-kube-api-access-wvzlj" (OuterVolumeSpecName: "kube-api-access-wvzlj") pod "59f79a27-531f-443a-9b43-30b0533ea445" (UID: "59f79a27-531f-443a-9b43-30b0533ea445"). InnerVolumeSpecName "kube-api-access-wvzlj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 16:01:05 crc kubenswrapper[4806]: I1125 16:01:05.725769 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dht67/crc-debug-5vx8f"]
Nov 25 16:01:05 crc kubenswrapper[4806]: I1125 16:01:05.804742 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvzlj\" (UniqueName: \"kubernetes.io/projected/59f79a27-531f-443a-9b43-30b0533ea445-kube-api-access-wvzlj\") on node \"crc\" DevicePath \"\""
Nov 25 16:01:06 crc kubenswrapper[4806]: I1125 16:01:06.101362 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59f79a27-531f-443a-9b43-30b0533ea445" path="/var/lib/kubelet/pods/59f79a27-531f-443a-9b43-30b0533ea445/volumes"
Nov 25 16:01:06 crc kubenswrapper[4806]: I1125 16:01:06.508229 4806 scope.go:117] "RemoveContainer" containerID="49b36613bff23592ffcdb83a96fb6a6593c1e2f10158691b481e94eabc32d927"
Nov 25 16:01:06 crc kubenswrapper[4806]: I1125 16:01:06.508257 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dht67/crc-debug-5vx8f"
Nov 25 16:01:06 crc kubenswrapper[4806]: I1125 16:01:06.981506 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dht67/crc-debug-gn5sd"]
Nov 25 16:01:06 crc kubenswrapper[4806]: E1125 16:01:06.982169 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59f79a27-531f-443a-9b43-30b0533ea445" containerName="container-00"
Nov 25 16:01:06 crc kubenswrapper[4806]: I1125 16:01:06.982273 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="59f79a27-531f-443a-9b43-30b0533ea445" containerName="container-00"
Nov 25 16:01:06 crc kubenswrapper[4806]: I1125 16:01:06.982655 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="59f79a27-531f-443a-9b43-30b0533ea445" containerName="container-00"
Nov 25 16:01:06 crc kubenswrapper[4806]: I1125 16:01:06.983640 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dht67/crc-debug-gn5sd"
Nov 25 16:01:07 crc kubenswrapper[4806]: I1125 16:01:07.014366 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rtdkm"]
Nov 25 16:01:07 crc kubenswrapper[4806]: I1125 16:01:07.016766 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rtdkm"
Nov 25 16:01:07 crc kubenswrapper[4806]: I1125 16:01:07.024950 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rtdkm"]
Nov 25 16:01:07 crc kubenswrapper[4806]: I1125 16:01:07.030671 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68kdr\" (UniqueName: \"kubernetes.io/projected/5a501941-f113-4c22-9269-3360f663aae7-kube-api-access-68kdr\") pod \"crc-debug-gn5sd\" (UID: \"5a501941-f113-4c22-9269-3360f663aae7\") " pod="openshift-must-gather-dht67/crc-debug-gn5sd"
Nov 25 16:01:07 crc kubenswrapper[4806]: I1125 16:01:07.030983 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5a501941-f113-4c22-9269-3360f663aae7-host\") pod \"crc-debug-gn5sd\" (UID: \"5a501941-f113-4c22-9269-3360f663aae7\") " pod="openshift-must-gather-dht67/crc-debug-gn5sd"
Nov 25 16:01:07 crc kubenswrapper[4806]: I1125 16:01:07.092590 4806 scope.go:117] "RemoveContainer" containerID="05b6ee2a51d7372338008820486d422e9a505c74a3f4cee7ce748e653b9075de"
Nov 25 16:01:07 crc kubenswrapper[4806]: E1125 16:01:07.092805 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d"
Nov 25 16:01:07 crc kubenswrapper[4806]: I1125 16:01:07.136540 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f77fb5e7-b393-4553-9464-219ea8261944-utilities\") pod \"redhat-operators-rtdkm\" (UID: \"f77fb5e7-b393-4553-9464-219ea8261944\") " pod="openshift-marketplace/redhat-operators-rtdkm"
Nov 25 16:01:07 crc kubenswrapper[4806]: I1125 16:01:07.136661 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5a501941-f113-4c22-9269-3360f663aae7-host\") pod \"crc-debug-gn5sd\" (UID: \"5a501941-f113-4c22-9269-3360f663aae7\") " pod="openshift-must-gather-dht67/crc-debug-gn5sd"
volume \"host\" (UniqueName: \"kubernetes.io/host-path/5a501941-f113-4c22-9269-3360f663aae7-host\") pod \"crc-debug-gn5sd\" (UID: \"5a501941-f113-4c22-9269-3360f663aae7\") " pod="openshift-must-gather-dht67/crc-debug-gn5sd" Nov 25 16:01:07 crc kubenswrapper[4806]: I1125 16:01:07.136712 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl2bl\" (UniqueName: \"kubernetes.io/projected/f77fb5e7-b393-4553-9464-219ea8261944-kube-api-access-gl2bl\") pod \"redhat-operators-rtdkm\" (UID: \"f77fb5e7-b393-4553-9464-219ea8261944\") " pod="openshift-marketplace/redhat-operators-rtdkm" Nov 25 16:01:07 crc kubenswrapper[4806]: I1125 16:01:07.136828 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f77fb5e7-b393-4553-9464-219ea8261944-catalog-content\") pod \"redhat-operators-rtdkm\" (UID: \"f77fb5e7-b393-4553-9464-219ea8261944\") " pod="openshift-marketplace/redhat-operators-rtdkm" Nov 25 16:01:07 crc kubenswrapper[4806]: I1125 16:01:07.136879 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68kdr\" (UniqueName: \"kubernetes.io/projected/5a501941-f113-4c22-9269-3360f663aae7-kube-api-access-68kdr\") pod \"crc-debug-gn5sd\" (UID: \"5a501941-f113-4c22-9269-3360f663aae7\") " pod="openshift-must-gather-dht67/crc-debug-gn5sd" Nov 25 16:01:07 crc kubenswrapper[4806]: I1125 16:01:07.138081 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5a501941-f113-4c22-9269-3360f663aae7-host\") pod \"crc-debug-gn5sd\" (UID: \"5a501941-f113-4c22-9269-3360f663aae7\") " pod="openshift-must-gather-dht67/crc-debug-gn5sd" Nov 25 16:01:07 crc kubenswrapper[4806]: I1125 16:01:07.179752 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68kdr\" (UniqueName: \"kubernetes.io/projected/5a501941-f113-4c22-9269-3360f663aae7-kube-api-access-68kdr\") pod \"crc-debug-gn5sd\" (UID: \"5a501941-f113-4c22-9269-3360f663aae7\") " pod="openshift-must-gather-dht67/crc-debug-gn5sd" Nov 25 16:01:07 crc kubenswrapper[4806]: I1125 16:01:07.238345 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f77fb5e7-b393-4553-9464-219ea8261944-catalog-content\") pod \"redhat-operators-rtdkm\" (UID: \"f77fb5e7-b393-4553-9464-219ea8261944\") " pod="openshift-marketplace/redhat-operators-rtdkm" Nov 25 16:01:07 crc kubenswrapper[4806]: I1125 16:01:07.238424 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f77fb5e7-b393-4553-9464-219ea8261944-utilities\") pod \"redhat-operators-rtdkm\" (UID: \"f77fb5e7-b393-4553-9464-219ea8261944\") " pod="openshift-marketplace/redhat-operators-rtdkm" Nov 25 16:01:07 crc kubenswrapper[4806]: I1125 16:01:07.238507 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl2bl\" (UniqueName: \"kubernetes.io/projected/f77fb5e7-b393-4553-9464-219ea8261944-kube-api-access-gl2bl\") pod \"redhat-operators-rtdkm\" (UID: \"f77fb5e7-b393-4553-9464-219ea8261944\") " pod="openshift-marketplace/redhat-operators-rtdkm" Nov 25 16:01:07 crc kubenswrapper[4806]: I1125 16:01:07.239128 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f77fb5e7-b393-4553-9464-219ea8261944-catalog-content\") pod \"redhat-operators-rtdkm\" (UID: \"f77fb5e7-b393-4553-9464-219ea8261944\") " pod="openshift-marketplace/redhat-operators-rtdkm" Nov 25 16:01:07 crc kubenswrapper[4806]: I1125 16:01:07.239341 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f77fb5e7-b393-4553-9464-219ea8261944-utilities\") pod \"redhat-operators-rtdkm\" (UID: \"f77fb5e7-b393-4553-9464-219ea8261944\") " pod="openshift-marketplace/redhat-operators-rtdkm" Nov 25 16:01:07 crc kubenswrapper[4806]: I1125 16:01:07.272374 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gl2bl\" (UniqueName: \"kubernetes.io/projected/f77fb5e7-b393-4553-9464-219ea8261944-kube-api-access-gl2bl\") pod \"redhat-operators-rtdkm\" (UID: \"f77fb5e7-b393-4553-9464-219ea8261944\") " pod="openshift-marketplace/redhat-operators-rtdkm" Nov 25 16:01:07 crc kubenswrapper[4806]: I1125 16:01:07.304966 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dht67/crc-debug-gn5sd" Nov 25 16:01:07 crc kubenswrapper[4806]: I1125 16:01:07.336926 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rtdkm" Nov 25 16:01:07 crc kubenswrapper[4806]: I1125 16:01:07.525813 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dht67/crc-debug-gn5sd" event={"ID":"5a501941-f113-4c22-9269-3360f663aae7","Type":"ContainerStarted","Data":"2729fb4b45d93ef936954ed1f3b843350803606accdc1e776b68c0beb49d686a"} Nov 25 16:01:07 crc kubenswrapper[4806]: I1125 16:01:07.532332 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhw8t" event={"ID":"101c3770-ec02-44a6-8020-c77559ce5959","Type":"ContainerStarted","Data":"4f1837ad3f18bbf3bbf88d91f91bcdb9d4a73257ea26866995f0ce02e201960a"} Nov 25 16:01:07 crc kubenswrapper[4806]: I1125 16:01:07.565688 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mhw8t" podStartSLOduration=3.678016294 podStartE2EDuration="9.565669999s" podCreationTimestamp="2025-11-25 16:00:58 +0000 UTC" firstStartedPulling="2025-11-25 16:01:00.439634 +0000 UTC m=+4093.091776411" lastFinishedPulling="2025-11-25 16:01:06.327287705 +0000 UTC m=+4098.979430116" observedRunningTime="2025-11-25 16:01:07.554349897 +0000 UTC m=+4100.206492308" watchObservedRunningTime="2025-11-25 16:01:07.565669999 +0000 UTC m=+4100.217812410" Nov 25 16:01:07 crc kubenswrapper[4806]: I1125 16:01:07.974602 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rtdkm"] Nov 25 16:01:08 crc kubenswrapper[4806]: I1125 16:01:08.544218 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rtdkm" event={"ID":"f77fb5e7-b393-4553-9464-219ea8261944","Type":"ContainerStarted","Data":"549e01ba3df2be5602508bc8e16e4b4ef6df8f8f9829242604f75f2d14ddd004"} Nov 25 16:01:08 crc kubenswrapper[4806]: I1125 16:01:08.546458 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dht67/crc-debug-gn5sd" event={"ID":"5a501941-f113-4c22-9269-3360f663aae7","Type":"ContainerStarted","Data":"d9fe6aee15f8e4740e6f83217eb91f1b9488f28ad89604fa0dadb8b153e8f9e7"} Nov 25 16:01:08 crc kubenswrapper[4806]: I1125 16:01:08.953202 4806 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/certified-operators-mhw8t" Nov 25 16:01:08 crc kubenswrapper[4806]: I1125 16:01:08.954174 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mhw8t" Nov 25 16:01:09 crc kubenswrapper[4806]: I1125 16:01:09.015552 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mhw8t" Nov 25 16:01:09 crc kubenswrapper[4806]: I1125 16:01:09.558261 4806 generic.go:334] "Generic (PLEG): container finished" podID="f77fb5e7-b393-4553-9464-219ea8261944" containerID="2486e79962271c2ec2fda0157d22f3d4acdf761fe55c86b6f22900a00dea1f02" exitCode=0 Nov 25 16:01:09 crc kubenswrapper[4806]: I1125 16:01:09.558362 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rtdkm" event={"ID":"f77fb5e7-b393-4553-9464-219ea8261944","Type":"ContainerDied","Data":"2486e79962271c2ec2fda0157d22f3d4acdf761fe55c86b6f22900a00dea1f02"} Nov 25 16:01:09 crc kubenswrapper[4806]: I1125 16:01:09.563353 4806 generic.go:334] "Generic (PLEG): container finished" podID="5a501941-f113-4c22-9269-3360f663aae7" containerID="d9fe6aee15f8e4740e6f83217eb91f1b9488f28ad89604fa0dadb8b153e8f9e7" exitCode=0 Nov 25 16:01:09 crc kubenswrapper[4806]: I1125 16:01:09.563403 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dht67/crc-debug-gn5sd" event={"ID":"5a501941-f113-4c22-9269-3360f663aae7","Type":"ContainerDied","Data":"d9fe6aee15f8e4740e6f83217eb91f1b9488f28ad89604fa0dadb8b153e8f9e7"} Nov 25 16:01:10 crc kubenswrapper[4806]: I1125 16:01:10.693758 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dht67/crc-debug-gn5sd" Nov 25 16:01:10 crc kubenswrapper[4806]: I1125 16:01:10.697843 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dht67/crc-debug-gn5sd"] Nov 25 16:01:10 crc kubenswrapper[4806]: I1125 16:01:10.707986 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dht67/crc-debug-gn5sd"] Nov 25 16:01:10 crc kubenswrapper[4806]: I1125 16:01:10.837490 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68kdr\" (UniqueName: \"kubernetes.io/projected/5a501941-f113-4c22-9269-3360f663aae7-kube-api-access-68kdr\") pod \"5a501941-f113-4c22-9269-3360f663aae7\" (UID: \"5a501941-f113-4c22-9269-3360f663aae7\") " Nov 25 16:01:10 crc kubenswrapper[4806]: I1125 16:01:10.837631 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5a501941-f113-4c22-9269-3360f663aae7-host\") pod \"5a501941-f113-4c22-9269-3360f663aae7\" (UID: \"5a501941-f113-4c22-9269-3360f663aae7\") " Nov 25 16:01:10 crc kubenswrapper[4806]: I1125 16:01:10.837761 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a501941-f113-4c22-9269-3360f663aae7-host" (OuterVolumeSpecName: "host") pod "5a501941-f113-4c22-9269-3360f663aae7" (UID: "5a501941-f113-4c22-9269-3360f663aae7"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 16:01:10 crc kubenswrapper[4806]: I1125 16:01:10.838095 4806 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5a501941-f113-4c22-9269-3360f663aae7-host\") on node \"crc\" DevicePath \"\"" Nov 25 16:01:10 crc kubenswrapper[4806]: I1125 16:01:10.844325 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a501941-f113-4c22-9269-3360f663aae7-kube-api-access-68kdr" (OuterVolumeSpecName: "kube-api-access-68kdr") pod "5a501941-f113-4c22-9269-3360f663aae7" (UID: "5a501941-f113-4c22-9269-3360f663aae7"). InnerVolumeSpecName "kube-api-access-68kdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 16:01:10 crc kubenswrapper[4806]: I1125 16:01:10.940144 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68kdr\" (UniqueName: \"kubernetes.io/projected/5a501941-f113-4c22-9269-3360f663aae7-kube-api-access-68kdr\") on node \"crc\" DevicePath \"\"" Nov 25 16:01:11 crc kubenswrapper[4806]: I1125 16:01:11.591075 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rtdkm" event={"ID":"f77fb5e7-b393-4553-9464-219ea8261944","Type":"ContainerStarted","Data":"1bdfefe20a157d1103ea18bf325f298d4609463df65093f1bcf72359f8ed253c"} Nov 25 16:01:11 crc kubenswrapper[4806]: I1125 16:01:11.594426 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2729fb4b45d93ef936954ed1f3b843350803606accdc1e776b68c0beb49d686a" Nov 25 16:01:11 crc kubenswrapper[4806]: I1125 16:01:11.594509 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dht67/crc-debug-gn5sd" Nov 25 16:01:11 crc kubenswrapper[4806]: I1125 16:01:11.600995 4806 generic.go:334] "Generic (PLEG): container finished" podID="e7a91b74-ad99-4159-9bea-374d0734af57" containerID="a4ec5036a73f052a6aa1dfaaa64e96b46572dd61228047620b16e67e03f9f252" exitCode=0 Nov 25 16:01:11 crc kubenswrapper[4806]: I1125 16:01:11.601048 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29401441-wg2wq" event={"ID":"e7a91b74-ad99-4159-9bea-374d0734af57","Type":"ContainerDied","Data":"a4ec5036a73f052a6aa1dfaaa64e96b46572dd61228047620b16e67e03f9f252"} Nov 25 16:01:12 crc kubenswrapper[4806]: I1125 16:01:12.019974 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dht67/crc-debug-j2kl4"] Nov 25 16:01:12 crc kubenswrapper[4806]: E1125 16:01:12.025127 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a501941-f113-4c22-9269-3360f663aae7" containerName="container-00" Nov 25 16:01:12 crc kubenswrapper[4806]: I1125 16:01:12.025238 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a501941-f113-4c22-9269-3360f663aae7" containerName="container-00" Nov 25 16:01:12 crc kubenswrapper[4806]: I1125 16:01:12.025549 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a501941-f113-4c22-9269-3360f663aae7" containerName="container-00" Nov 25 16:01:12 crc kubenswrapper[4806]: I1125 16:01:12.026391 4806 util.go:30] "No sandbox for pod can be found. 
Nov 25 16:01:12 crc kubenswrapper[4806]: I1125 16:01:12.103147 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a501941-f113-4c22-9269-3360f663aae7" path="/var/lib/kubelet/pods/5a501941-f113-4c22-9269-3360f663aae7/volumes"
Nov 25 16:01:12 crc kubenswrapper[4806]: I1125 16:01:12.171450 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjvft\" (UniqueName: \"kubernetes.io/projected/e01779d7-9369-4952-ae6f-af5618f075ef-kube-api-access-hjvft\") pod \"crc-debug-j2kl4\" (UID: \"e01779d7-9369-4952-ae6f-af5618f075ef\") " pod="openshift-must-gather-dht67/crc-debug-j2kl4"
Nov 25 16:01:12 crc kubenswrapper[4806]: I1125 16:01:12.171588 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e01779d7-9369-4952-ae6f-af5618f075ef-host\") pod \"crc-debug-j2kl4\" (UID: \"e01779d7-9369-4952-ae6f-af5618f075ef\") " pod="openshift-must-gather-dht67/crc-debug-j2kl4"
Nov 25 16:01:12 crc kubenswrapper[4806]: I1125 16:01:12.274283 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e01779d7-9369-4952-ae6f-af5618f075ef-host\") pod \"crc-debug-j2kl4\" (UID: \"e01779d7-9369-4952-ae6f-af5618f075ef\") " pod="openshift-must-gather-dht67/crc-debug-j2kl4"
Nov 25 16:01:12 crc kubenswrapper[4806]: I1125 16:01:12.274470 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e01779d7-9369-4952-ae6f-af5618f075ef-host\") pod \"crc-debug-j2kl4\" (UID: \"e01779d7-9369-4952-ae6f-af5618f075ef\") " pod="openshift-must-gather-dht67/crc-debug-j2kl4"
Nov 25 16:01:12 crc kubenswrapper[4806]: I1125 16:01:12.274517 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjvft\" (UniqueName: \"kubernetes.io/projected/e01779d7-9369-4952-ae6f-af5618f075ef-kube-api-access-hjvft\") pod \"crc-debug-j2kl4\" (UID: \"e01779d7-9369-4952-ae6f-af5618f075ef\") " pod="openshift-must-gather-dht67/crc-debug-j2kl4"
Nov 25 16:01:12 crc kubenswrapper[4806]: I1125 16:01:12.293620 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjvft\" (UniqueName: \"kubernetes.io/projected/e01779d7-9369-4952-ae6f-af5618f075ef-kube-api-access-hjvft\") pod \"crc-debug-j2kl4\" (UID: \"e01779d7-9369-4952-ae6f-af5618f075ef\") " pod="openshift-must-gather-dht67/crc-debug-j2kl4"
Nov 25 16:01:12 crc kubenswrapper[4806]: I1125 16:01:12.346377 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dht67/crc-debug-j2kl4"
Nov 25 16:01:12 crc kubenswrapper[4806]: W1125 16:01:12.383353 4806 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode01779d7_9369_4952_ae6f_af5618f075ef.slice/crio-4b03ce757cd95cf51c579aa74161d1d4098d3c3e89ff7f4dbd6b33eb9734f476 WatchSource:0}: Error finding container 4b03ce757cd95cf51c579aa74161d1d4098d3c3e89ff7f4dbd6b33eb9734f476: Status 404 returned error can't find the container with id 4b03ce757cd95cf51c579aa74161d1d4098d3c3e89ff7f4dbd6b33eb9734f476
Nov 25 16:01:12 crc kubenswrapper[4806]: I1125 16:01:12.613848 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dht67/crc-debug-j2kl4" event={"ID":"e01779d7-9369-4952-ae6f-af5618f075ef","Type":"ContainerStarted","Data":"4b03ce757cd95cf51c579aa74161d1d4098d3c3e89ff7f4dbd6b33eb9734f476"}
Nov 25 16:01:13 crc kubenswrapper[4806]: I1125 16:01:13.205855 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29401441-wg2wq"
Nov 25 16:01:13 crc kubenswrapper[4806]: I1125 16:01:13.300828 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtsm6\" (UniqueName: \"kubernetes.io/projected/e7a91b74-ad99-4159-9bea-374d0734af57-kube-api-access-gtsm6\") pod \"e7a91b74-ad99-4159-9bea-374d0734af57\" (UID: \"e7a91b74-ad99-4159-9bea-374d0734af57\") "
Nov 25 16:01:13 crc kubenswrapper[4806]: I1125 16:01:13.300919 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7a91b74-ad99-4159-9bea-374d0734af57-combined-ca-bundle\") pod \"e7a91b74-ad99-4159-9bea-374d0734af57\" (UID: \"e7a91b74-ad99-4159-9bea-374d0734af57\") "
Nov 25 16:01:13 crc kubenswrapper[4806]: I1125 16:01:13.301097 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7a91b74-ad99-4159-9bea-374d0734af57-config-data\") pod \"e7a91b74-ad99-4159-9bea-374d0734af57\" (UID: \"e7a91b74-ad99-4159-9bea-374d0734af57\") "
Nov 25 16:01:13 crc kubenswrapper[4806]: I1125 16:01:13.301145 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e7a91b74-ad99-4159-9bea-374d0734af57-fernet-keys\") pod \"e7a91b74-ad99-4159-9bea-374d0734af57\" (UID: \"e7a91b74-ad99-4159-9bea-374d0734af57\") "
Nov 25 16:01:13 crc kubenswrapper[4806]: I1125 16:01:13.310459 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7a91b74-ad99-4159-9bea-374d0734af57-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e7a91b74-ad99-4159-9bea-374d0734af57" (UID: "e7a91b74-ad99-4159-9bea-374d0734af57"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 16:01:13 crc kubenswrapper[4806]: I1125 16:01:13.313976 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7a91b74-ad99-4159-9bea-374d0734af57-kube-api-access-gtsm6" (OuterVolumeSpecName: "kube-api-access-gtsm6") pod "e7a91b74-ad99-4159-9bea-374d0734af57" (UID: "e7a91b74-ad99-4159-9bea-374d0734af57"). InnerVolumeSpecName "kube-api-access-gtsm6". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 16:01:13 crc kubenswrapper[4806]: I1125 16:01:13.351181 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7a91b74-ad99-4159-9bea-374d0734af57-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e7a91b74-ad99-4159-9bea-374d0734af57" (UID: "e7a91b74-ad99-4159-9bea-374d0734af57"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 16:01:13 crc kubenswrapper[4806]: I1125 16:01:13.403665 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtsm6\" (UniqueName: \"kubernetes.io/projected/e7a91b74-ad99-4159-9bea-374d0734af57-kube-api-access-gtsm6\") on node \"crc\" DevicePath \"\"" Nov 25 16:01:13 crc kubenswrapper[4806]: I1125 16:01:13.403693 4806 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7a91b74-ad99-4159-9bea-374d0734af57-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 16:01:13 crc kubenswrapper[4806]: I1125 16:01:13.403703 4806 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e7a91b74-ad99-4159-9bea-374d0734af57-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 25 16:01:13 crc kubenswrapper[4806]: I1125 16:01:13.418443 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7a91b74-ad99-4159-9bea-374d0734af57-config-data" (OuterVolumeSpecName: "config-data") pod "e7a91b74-ad99-4159-9bea-374d0734af57" (UID: "e7a91b74-ad99-4159-9bea-374d0734af57"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 16:01:13 crc kubenswrapper[4806]: I1125 16:01:13.505501 4806 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7a91b74-ad99-4159-9bea-374d0734af57-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 16:01:13 crc kubenswrapper[4806]: I1125 16:01:13.623367 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29401441-wg2wq" Nov 25 16:01:13 crc kubenswrapper[4806]: I1125 16:01:13.623360 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29401441-wg2wq" event={"ID":"e7a91b74-ad99-4159-9bea-374d0734af57","Type":"ContainerDied","Data":"397a7c6803ec89bb504ac212699880662116e6f68a1490f607acc208df53c6dc"} Nov 25 16:01:13 crc kubenswrapper[4806]: I1125 16:01:13.623755 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="397a7c6803ec89bb504ac212699880662116e6f68a1490f607acc208df53c6dc" Nov 25 16:01:13 crc kubenswrapper[4806]: I1125 16:01:13.625409 4806 generic.go:334] "Generic (PLEG): container finished" podID="e01779d7-9369-4952-ae6f-af5618f075ef" containerID="e1ca3cbaad19ecd069b3d66932077b0bcf0a0e92b08524206e06447ccac12ba2" exitCode=0 Nov 25 16:01:13 crc kubenswrapper[4806]: I1125 16:01:13.625565 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dht67/crc-debug-j2kl4" event={"ID":"e01779d7-9369-4952-ae6f-af5618f075ef","Type":"ContainerDied","Data":"e1ca3cbaad19ecd069b3d66932077b0bcf0a0e92b08524206e06447ccac12ba2"} Nov 25 16:01:13 crc kubenswrapper[4806]: I1125 16:01:13.660234 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dht67/crc-debug-j2kl4"] Nov 25 16:01:13 crc kubenswrapper[4806]: I1125 16:01:13.670163 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dht67/crc-debug-j2kl4"] Nov 25 16:01:14 crc kubenswrapper[4806]: I1125 16:01:14.746542 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dht67/crc-debug-j2kl4" Nov 25 16:01:14 crc kubenswrapper[4806]: I1125 16:01:14.831215 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e01779d7-9369-4952-ae6f-af5618f075ef-host\") pod \"e01779d7-9369-4952-ae6f-af5618f075ef\" (UID: \"e01779d7-9369-4952-ae6f-af5618f075ef\") " Nov 25 16:01:14 crc kubenswrapper[4806]: I1125 16:01:14.831373 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e01779d7-9369-4952-ae6f-af5618f075ef-host" (OuterVolumeSpecName: "host") pod "e01779d7-9369-4952-ae6f-af5618f075ef" (UID: "e01779d7-9369-4952-ae6f-af5618f075ef"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 16:01:14 crc kubenswrapper[4806]: I1125 16:01:14.831496 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjvft\" (UniqueName: \"kubernetes.io/projected/e01779d7-9369-4952-ae6f-af5618f075ef-kube-api-access-hjvft\") pod \"e01779d7-9369-4952-ae6f-af5618f075ef\" (UID: \"e01779d7-9369-4952-ae6f-af5618f075ef\") " Nov 25 16:01:14 crc kubenswrapper[4806]: I1125 16:01:14.832151 4806 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e01779d7-9369-4952-ae6f-af5618f075ef-host\") on node \"crc\" DevicePath \"\"" Nov 25 16:01:14 crc kubenswrapper[4806]: I1125 16:01:14.836197 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e01779d7-9369-4952-ae6f-af5618f075ef-kube-api-access-hjvft" (OuterVolumeSpecName: "kube-api-access-hjvft") pod "e01779d7-9369-4952-ae6f-af5618f075ef" (UID: "e01779d7-9369-4952-ae6f-af5618f075ef"). InnerVolumeSpecName "kube-api-access-hjvft". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 16:01:14 crc kubenswrapper[4806]: I1125 16:01:14.934493 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjvft\" (UniqueName: \"kubernetes.io/projected/e01779d7-9369-4952-ae6f-af5618f075ef-kube-api-access-hjvft\") on node \"crc\" DevicePath \"\"" Nov 25 16:01:15 crc kubenswrapper[4806]: I1125 16:01:15.646309 4806 scope.go:117] "RemoveContainer" containerID="e1ca3cbaad19ecd069b3d66932077b0bcf0a0e92b08524206e06447ccac12ba2" Nov 25 16:01:15 crc kubenswrapper[4806]: I1125 16:01:15.646383 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dht67/crc-debug-j2kl4" Nov 25 16:01:16 crc kubenswrapper[4806]: I1125 16:01:16.103404 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e01779d7-9369-4952-ae6f-af5618f075ef" path="/var/lib/kubelet/pods/e01779d7-9369-4952-ae6f-af5618f075ef/volumes" Nov 25 16:01:19 crc kubenswrapper[4806]: I1125 16:01:19.036878 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mhw8t" Nov 25 16:01:19 crc kubenswrapper[4806]: I1125 16:01:19.096668 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mhw8t"] Nov 25 16:01:19 crc kubenswrapper[4806]: I1125 16:01:19.687565 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mhw8t" podUID="101c3770-ec02-44a6-8020-c77559ce5959" containerName="registry-server" containerID="cri-o://4f1837ad3f18bbf3bbf88d91f91bcdb9d4a73257ea26866995f0ce02e201960a" gracePeriod=2 Nov 25 16:01:20 crc kubenswrapper[4806]: I1125 16:01:20.701017 4806 generic.go:334] "Generic (PLEG): container finished" podID="101c3770-ec02-44a6-8020-c77559ce5959" containerID="4f1837ad3f18bbf3bbf88d91f91bcdb9d4a73257ea26866995f0ce02e201960a" exitCode=0 Nov 25 16:01:20 crc kubenswrapper[4806]: I1125 16:01:20.701106 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhw8t" event={"ID":"101c3770-ec02-44a6-8020-c77559ce5959","Type":"ContainerDied","Data":"4f1837ad3f18bbf3bbf88d91f91bcdb9d4a73257ea26866995f0ce02e201960a"} Nov 25 16:01:21 crc kubenswrapper[4806]: I1125 16:01:21.088908 4806 scope.go:117] "RemoveContainer" containerID="05b6ee2a51d7372338008820486d422e9a505c74a3f4cee7ce748e653b9075de" Nov 25 16:01:21 crc kubenswrapper[4806]: E1125 16:01:21.089166 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 16:01:21 crc kubenswrapper[4806]: I1125 16:01:21.562987 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mhw8t" Nov 25 16:01:21 crc kubenswrapper[4806]: I1125 16:01:21.714599 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhw8t" event={"ID":"101c3770-ec02-44a6-8020-c77559ce5959","Type":"ContainerDied","Data":"86228036ee8b0a3f3b01b11b4ad63cb6094b030c38e361ebcc85828602aff7ff"} Nov 25 16:01:21 crc kubenswrapper[4806]: I1125 16:01:21.714932 4806 scope.go:117] "RemoveContainer" containerID="4f1837ad3f18bbf3bbf88d91f91bcdb9d4a73257ea26866995f0ce02e201960a" Nov 25 16:01:21 crc kubenswrapper[4806]: I1125 16:01:21.714709 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mhw8t" Nov 25 16:01:21 crc kubenswrapper[4806]: I1125 16:01:21.725176 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/101c3770-ec02-44a6-8020-c77559ce5959-utilities\") pod \"101c3770-ec02-44a6-8020-c77559ce5959\" (UID: \"101c3770-ec02-44a6-8020-c77559ce5959\") " Nov 25 16:01:21 crc kubenswrapper[4806]: I1125 16:01:21.725423 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/101c3770-ec02-44a6-8020-c77559ce5959-catalog-content\") pod \"101c3770-ec02-44a6-8020-c77559ce5959\" (UID: \"101c3770-ec02-44a6-8020-c77559ce5959\") " Nov 25 16:01:21 crc kubenswrapper[4806]: I1125 16:01:21.726121 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9475j\" (UniqueName: \"kubernetes.io/projected/101c3770-ec02-44a6-8020-c77559ce5959-kube-api-access-9475j\") pod \"101c3770-ec02-44a6-8020-c77559ce5959\" (UID: \"101c3770-ec02-44a6-8020-c77559ce5959\") " Nov 25 16:01:21 crc kubenswrapper[4806]: I1125 16:01:21.726960 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/101c3770-ec02-44a6-8020-c77559ce5959-utilities" (OuterVolumeSpecName: "utilities") pod "101c3770-ec02-44a6-8020-c77559ce5959" (UID: "101c3770-ec02-44a6-8020-c77559ce5959"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 16:01:21 crc kubenswrapper[4806]: I1125 16:01:21.733457 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/101c3770-ec02-44a6-8020-c77559ce5959-kube-api-access-9475j" (OuterVolumeSpecName: "kube-api-access-9475j") pod "101c3770-ec02-44a6-8020-c77559ce5959" (UID: "101c3770-ec02-44a6-8020-c77559ce5959"). InnerVolumeSpecName "kube-api-access-9475j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 16:01:21 crc kubenswrapper[4806]: I1125 16:01:21.764029 4806 scope.go:117] "RemoveContainer" containerID="83919657ad96dca2cadc38bc2d1d16e584240c49c9a1ee86eb7413aaada0c53c" Nov 25 16:01:21 crc kubenswrapper[4806]: I1125 16:01:21.766243 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/101c3770-ec02-44a6-8020-c77559ce5959-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "101c3770-ec02-44a6-8020-c77559ce5959" (UID: "101c3770-ec02-44a6-8020-c77559ce5959"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 16:01:21 crc kubenswrapper[4806]: I1125 16:01:21.830874 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9475j\" (UniqueName: \"kubernetes.io/projected/101c3770-ec02-44a6-8020-c77559ce5959-kube-api-access-9475j\") on node \"crc\" DevicePath \"\"" Nov 25 16:01:21 crc kubenswrapper[4806]: I1125 16:01:21.831205 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/101c3770-ec02-44a6-8020-c77559ce5959-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 16:01:21 crc kubenswrapper[4806]: I1125 16:01:21.831325 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/101c3770-ec02-44a6-8020-c77559ce5959-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 16:01:21 crc kubenswrapper[4806]: I1125 16:01:21.868571 4806 scope.go:117] "RemoveContainer" containerID="a3a06bb488896c36762f2474906a381e0205f481ace83ef10df878c05c6c15b7" Nov 25 16:01:22 crc kubenswrapper[4806]: I1125 16:01:22.052241 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mhw8t"] Nov 25 16:01:22 crc kubenswrapper[4806]: E1125 16:01:22.056159 4806 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf77fb5e7_b393_4553_9464_219ea8261944.slice/crio-1bdfefe20a157d1103ea18bf325f298d4609463df65093f1bcf72359f8ed253c.scope\": RecentStats: unable to find data in memory cache]" Nov 25 16:01:22 crc kubenswrapper[4806]: I1125 16:01:22.064437 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mhw8t"] Nov 25 16:01:22 crc kubenswrapper[4806]: I1125 16:01:22.104585 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="101c3770-ec02-44a6-8020-c77559ce5959" path="/var/lib/kubelet/pods/101c3770-ec02-44a6-8020-c77559ce5959/volumes" Nov 25 16:01:22 crc kubenswrapper[4806]: I1125 16:01:22.729575 4806 generic.go:334] "Generic (PLEG): container finished" podID="f77fb5e7-b393-4553-9464-219ea8261944" containerID="1bdfefe20a157d1103ea18bf325f298d4609463df65093f1bcf72359f8ed253c" exitCode=0 Nov 25 16:01:22 crc kubenswrapper[4806]: I1125 16:01:22.729677 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rtdkm" event={"ID":"f77fb5e7-b393-4553-9464-219ea8261944","Type":"ContainerDied","Data":"1bdfefe20a157d1103ea18bf325f298d4609463df65093f1bcf72359f8ed253c"} Nov 25 16:01:23 crc kubenswrapper[4806]: I1125 16:01:23.746755 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rtdkm" event={"ID":"f77fb5e7-b393-4553-9464-219ea8261944","Type":"ContainerStarted","Data":"b5c556fc71fb5a10a775c593a8560eca009bf06d0787b75ecdacf960eb2a9f5f"} Nov 25 16:01:23 crc kubenswrapper[4806]: I1125 16:01:23.770927 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rtdkm" podStartSLOduration=4.091693934 podStartE2EDuration="17.770906285s" podCreationTimestamp="2025-11-25 16:01:06 +0000 UTC" firstStartedPulling="2025-11-25 16:01:09.560100059 +0000 UTC m=+4102.212242470" lastFinishedPulling="2025-11-25 16:01:23.23931241 +0000 UTC m=+4115.891454821" observedRunningTime="2025-11-25 16:01:23.766897992 +0000 UTC m=+4116.419040403" watchObservedRunningTime="2025-11-25 16:01:23.770906285 
+0000 UTC m=+4116.423048696" Nov 25 16:01:27 crc kubenswrapper[4806]: I1125 16:01:27.338171 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rtdkm" Nov 25 16:01:27 crc kubenswrapper[4806]: I1125 16:01:27.338722 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rtdkm" Nov 25 16:01:28 crc kubenswrapper[4806]: I1125 16:01:28.383693 4806 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rtdkm" podUID="f77fb5e7-b393-4553-9464-219ea8261944" containerName="registry-server" probeResult="failure" output=< Nov 25 16:01:28 crc kubenswrapper[4806]: timeout: failed to connect service ":50051" within 1s Nov 25 16:01:28 crc kubenswrapper[4806]: > Nov 25 16:01:32 crc kubenswrapper[4806]: I1125 16:01:32.089889 4806 scope.go:117] "RemoveContainer" containerID="05b6ee2a51d7372338008820486d422e9a505c74a3f4cee7ce748e653b9075de" Nov 25 16:01:32 crc kubenswrapper[4806]: E1125 16:01:32.090520 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 16:01:37 crc kubenswrapper[4806]: I1125 16:01:37.397649 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rtdkm" Nov 25 16:01:37 crc kubenswrapper[4806]: I1125 16:01:37.461286 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rtdkm" Nov 25 16:01:38 crc kubenswrapper[4806]: I1125 16:01:38.170425 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rtdkm"] Nov 25 16:01:38 crc kubenswrapper[4806]: I1125 16:01:38.938958 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rtdkm" podUID="f77fb5e7-b393-4553-9464-219ea8261944" containerName="registry-server" containerID="cri-o://b5c556fc71fb5a10a775c593a8560eca009bf06d0787b75ecdacf960eb2a9f5f" gracePeriod=2 Nov 25 16:01:39 crc kubenswrapper[4806]: I1125 16:01:39.754131 4806 util.go:48] "No ready sandbox for pod can be found. 
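
The startup-probe failure above, `timeout: failed to connect service ":50051" within 1s`, is the catalog pod's gRPC health check: marketplace registry-server containers expose the standard grpc_health_v1 service on port 50051, and that error string is consistent with what grpc-health-probe prints when it cannot connect inside its timeout. A minimal Go sketch of an equivalent check follows; it assumes the stock google.golang.org/grpc client API, and only the ":50051" address and the 1s budget are taken from the log.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	// Same budget as the failing probe above: 1s to connect and check.
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	// The probe targets the pod on port 50051, as in the log output.
	conn, err := grpc.DialContext(ctx, "localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithBlock())
	if err != nil {
		fmt.Printf("timeout: failed to connect service %q within 1s\n", ":50051")
		return
	}
	defer conn.Close()

	// An empty Service name asks for the server's overall health.
	resp, err := healthpb.NewHealthClient(conn).Check(ctx,
		&healthpb.HealthCheckRequest{Service: ""})
	if err != nil {
		fmt.Println("health check RPC failed:", err)
		return
	}
	fmt.Println("status:", resp.GetStatus()) // SERVING once the catalog has loaded
}

The surrounding entries match this reading: redhat-operators-rtdkm fails the startup probe at 16:01:28 while its catalog is still loading, then flips to startup "started" and readiness "ready" at 16:01:37 once the server answers.
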
Need to start a new one" pod="openshift-marketplace/redhat-operators-rtdkm" Nov 25 16:01:39 crc kubenswrapper[4806]: I1125 16:01:39.831511 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f77fb5e7-b393-4553-9464-219ea8261944-utilities\") pod \"f77fb5e7-b393-4553-9464-219ea8261944\" (UID: \"f77fb5e7-b393-4553-9464-219ea8261944\") " Nov 25 16:01:39 crc kubenswrapper[4806]: I1125 16:01:39.831670 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f77fb5e7-b393-4553-9464-219ea8261944-catalog-content\") pod \"f77fb5e7-b393-4553-9464-219ea8261944\" (UID: \"f77fb5e7-b393-4553-9464-219ea8261944\") " Nov 25 16:01:39 crc kubenswrapper[4806]: I1125 16:01:39.831735 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gl2bl\" (UniqueName: \"kubernetes.io/projected/f77fb5e7-b393-4553-9464-219ea8261944-kube-api-access-gl2bl\") pod \"f77fb5e7-b393-4553-9464-219ea8261944\" (UID: \"f77fb5e7-b393-4553-9464-219ea8261944\") " Nov 25 16:01:39 crc kubenswrapper[4806]: I1125 16:01:39.832679 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f77fb5e7-b393-4553-9464-219ea8261944-utilities" (OuterVolumeSpecName: "utilities") pod "f77fb5e7-b393-4553-9464-219ea8261944" (UID: "f77fb5e7-b393-4553-9464-219ea8261944"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 16:01:39 crc kubenswrapper[4806]: I1125 16:01:39.837118 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f77fb5e7-b393-4553-9464-219ea8261944-kube-api-access-gl2bl" (OuterVolumeSpecName: "kube-api-access-gl2bl") pod "f77fb5e7-b393-4553-9464-219ea8261944" (UID: "f77fb5e7-b393-4553-9464-219ea8261944"). InnerVolumeSpecName "kube-api-access-gl2bl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 16:01:39 crc kubenswrapper[4806]: I1125 16:01:39.936089 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f77fb5e7-b393-4553-9464-219ea8261944-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 16:01:39 crc kubenswrapper[4806]: I1125 16:01:39.936452 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gl2bl\" (UniqueName: \"kubernetes.io/projected/f77fb5e7-b393-4553-9464-219ea8261944-kube-api-access-gl2bl\") on node \"crc\" DevicePath \"\"" Nov 25 16:01:39 crc kubenswrapper[4806]: I1125 16:01:39.946713 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f77fb5e7-b393-4553-9464-219ea8261944-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f77fb5e7-b393-4553-9464-219ea8261944" (UID: "f77fb5e7-b393-4553-9464-219ea8261944"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 16:01:39 crc kubenswrapper[4806]: I1125 16:01:39.958259 4806 generic.go:334] "Generic (PLEG): container finished" podID="f77fb5e7-b393-4553-9464-219ea8261944" containerID="b5c556fc71fb5a10a775c593a8560eca009bf06d0787b75ecdacf960eb2a9f5f" exitCode=0 Nov 25 16:01:39 crc kubenswrapper[4806]: I1125 16:01:39.958329 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rtdkm" event={"ID":"f77fb5e7-b393-4553-9464-219ea8261944","Type":"ContainerDied","Data":"b5c556fc71fb5a10a775c593a8560eca009bf06d0787b75ecdacf960eb2a9f5f"} Nov 25 16:01:39 crc kubenswrapper[4806]: I1125 16:01:39.958363 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rtdkm" event={"ID":"f77fb5e7-b393-4553-9464-219ea8261944","Type":"ContainerDied","Data":"549e01ba3df2be5602508bc8e16e4b4ef6df8f8f9829242604f75f2d14ddd004"} Nov 25 16:01:39 crc kubenswrapper[4806]: I1125 16:01:39.958387 4806 scope.go:117] "RemoveContainer" containerID="b5c556fc71fb5a10a775c593a8560eca009bf06d0787b75ecdacf960eb2a9f5f" Nov 25 16:01:39 crc kubenswrapper[4806]: I1125 16:01:39.958555 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rtdkm" Nov 25 16:01:39 crc kubenswrapper[4806]: I1125 16:01:39.996449 4806 scope.go:117] "RemoveContainer" containerID="1bdfefe20a157d1103ea18bf325f298d4609463df65093f1bcf72359f8ed253c" Nov 25 16:01:40 crc kubenswrapper[4806]: I1125 16:01:40.001552 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rtdkm"] Nov 25 16:01:40 crc kubenswrapper[4806]: I1125 16:01:40.022075 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rtdkm"] Nov 25 16:01:40 crc kubenswrapper[4806]: I1125 16:01:40.032257 4806 scope.go:117] "RemoveContainer" containerID="2486e79962271c2ec2fda0157d22f3d4acdf761fe55c86b6f22900a00dea1f02" Nov 25 16:01:40 crc kubenswrapper[4806]: I1125 16:01:40.038530 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f77fb5e7-b393-4553-9464-219ea8261944-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 16:01:40 crc kubenswrapper[4806]: I1125 16:01:40.058130 4806 scope.go:117] "RemoveContainer" containerID="b5c556fc71fb5a10a775c593a8560eca009bf06d0787b75ecdacf960eb2a9f5f" Nov 25 16:01:40 crc kubenswrapper[4806]: E1125 16:01:40.058730 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5c556fc71fb5a10a775c593a8560eca009bf06d0787b75ecdacf960eb2a9f5f\": container with ID starting with b5c556fc71fb5a10a775c593a8560eca009bf06d0787b75ecdacf960eb2a9f5f not found: ID does not exist" containerID="b5c556fc71fb5a10a775c593a8560eca009bf06d0787b75ecdacf960eb2a9f5f" Nov 25 16:01:40 crc kubenswrapper[4806]: I1125 16:01:40.058782 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5c556fc71fb5a10a775c593a8560eca009bf06d0787b75ecdacf960eb2a9f5f"} err="failed to get container status \"b5c556fc71fb5a10a775c593a8560eca009bf06d0787b75ecdacf960eb2a9f5f\": rpc error: code = NotFound desc = could not find container \"b5c556fc71fb5a10a775c593a8560eca009bf06d0787b75ecdacf960eb2a9f5f\": container with ID starting with b5c556fc71fb5a10a775c593a8560eca009bf06d0787b75ecdacf960eb2a9f5f not found: ID does not exist" Nov 25 16:01:40 crc 
kubenswrapper[4806]: I1125 16:01:40.058815 4806 scope.go:117] "RemoveContainer" containerID="1bdfefe20a157d1103ea18bf325f298d4609463df65093f1bcf72359f8ed253c" Nov 25 16:01:40 crc kubenswrapper[4806]: E1125 16:01:40.059120 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bdfefe20a157d1103ea18bf325f298d4609463df65093f1bcf72359f8ed253c\": container with ID starting with 1bdfefe20a157d1103ea18bf325f298d4609463df65093f1bcf72359f8ed253c not found: ID does not exist" containerID="1bdfefe20a157d1103ea18bf325f298d4609463df65093f1bcf72359f8ed253c" Nov 25 16:01:40 crc kubenswrapper[4806]: I1125 16:01:40.059147 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bdfefe20a157d1103ea18bf325f298d4609463df65093f1bcf72359f8ed253c"} err="failed to get container status \"1bdfefe20a157d1103ea18bf325f298d4609463df65093f1bcf72359f8ed253c\": rpc error: code = NotFound desc = could not find container \"1bdfefe20a157d1103ea18bf325f298d4609463df65093f1bcf72359f8ed253c\": container with ID starting with 1bdfefe20a157d1103ea18bf325f298d4609463df65093f1bcf72359f8ed253c not found: ID does not exist" Nov 25 16:01:40 crc kubenswrapper[4806]: I1125 16:01:40.059163 4806 scope.go:117] "RemoveContainer" containerID="2486e79962271c2ec2fda0157d22f3d4acdf761fe55c86b6f22900a00dea1f02" Nov 25 16:01:40 crc kubenswrapper[4806]: E1125 16:01:40.059430 4806 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2486e79962271c2ec2fda0157d22f3d4acdf761fe55c86b6f22900a00dea1f02\": container with ID starting with 2486e79962271c2ec2fda0157d22f3d4acdf761fe55c86b6f22900a00dea1f02 not found: ID does not exist" containerID="2486e79962271c2ec2fda0157d22f3d4acdf761fe55c86b6f22900a00dea1f02" Nov 25 16:01:40 crc kubenswrapper[4806]: I1125 16:01:40.059460 4806 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2486e79962271c2ec2fda0157d22f3d4acdf761fe55c86b6f22900a00dea1f02"} err="failed to get container status \"2486e79962271c2ec2fda0157d22f3d4acdf761fe55c86b6f22900a00dea1f02\": rpc error: code = NotFound desc = could not find container \"2486e79962271c2ec2fda0157d22f3d4acdf761fe55c86b6f22900a00dea1f02\": container with ID starting with 2486e79962271c2ec2fda0157d22f3d4acdf761fe55c86b6f22900a00dea1f02 not found: ID does not exist" Nov 25 16:01:40 crc kubenswrapper[4806]: I1125 16:01:40.102860 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f77fb5e7-b393-4553-9464-219ea8261944" path="/var/lib/kubelet/pods/f77fb5e7-b393-4553-9464-219ea8261944/volumes" Nov 25 16:01:43 crc kubenswrapper[4806]: I1125 16:01:43.089239 4806 scope.go:117] "RemoveContainer" containerID="05b6ee2a51d7372338008820486d422e9a505c74a3f4cee7ce748e653b9075de" Nov 25 16:01:43 crc kubenswrapper[4806]: E1125 16:01:43.090068 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 16:01:46 crc kubenswrapper[4806]: I1125 16:01:46.301076 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-f52mp"] Nov 25 
16:01:46 crc kubenswrapper[4806]: E1125 16:01:46.301940 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f77fb5e7-b393-4553-9464-219ea8261944" containerName="registry-server" Nov 25 16:01:46 crc kubenswrapper[4806]: I1125 16:01:46.301959 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="f77fb5e7-b393-4553-9464-219ea8261944" containerName="registry-server" Nov 25 16:01:46 crc kubenswrapper[4806]: E1125 16:01:46.301980 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f77fb5e7-b393-4553-9464-219ea8261944" containerName="extract-content" Nov 25 16:01:46 crc kubenswrapper[4806]: I1125 16:01:46.301988 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="f77fb5e7-b393-4553-9464-219ea8261944" containerName="extract-content" Nov 25 16:01:46 crc kubenswrapper[4806]: E1125 16:01:46.302006 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="101c3770-ec02-44a6-8020-c77559ce5959" containerName="extract-utilities" Nov 25 16:01:46 crc kubenswrapper[4806]: I1125 16:01:46.302015 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="101c3770-ec02-44a6-8020-c77559ce5959" containerName="extract-utilities" Nov 25 16:01:46 crc kubenswrapper[4806]: E1125 16:01:46.302039 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="101c3770-ec02-44a6-8020-c77559ce5959" containerName="registry-server" Nov 25 16:01:46 crc kubenswrapper[4806]: I1125 16:01:46.302047 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="101c3770-ec02-44a6-8020-c77559ce5959" containerName="registry-server" Nov 25 16:01:46 crc kubenswrapper[4806]: E1125 16:01:46.302059 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f77fb5e7-b393-4553-9464-219ea8261944" containerName="extract-utilities" Nov 25 16:01:46 crc kubenswrapper[4806]: I1125 16:01:46.302066 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="f77fb5e7-b393-4553-9464-219ea8261944" containerName="extract-utilities" Nov 25 16:01:46 crc kubenswrapper[4806]: E1125 16:01:46.302081 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="101c3770-ec02-44a6-8020-c77559ce5959" containerName="extract-content" Nov 25 16:01:46 crc kubenswrapper[4806]: I1125 16:01:46.302091 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="101c3770-ec02-44a6-8020-c77559ce5959" containerName="extract-content" Nov 25 16:01:46 crc kubenswrapper[4806]: E1125 16:01:46.302110 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7a91b74-ad99-4159-9bea-374d0734af57" containerName="keystone-cron" Nov 25 16:01:46 crc kubenswrapper[4806]: I1125 16:01:46.302117 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7a91b74-ad99-4159-9bea-374d0734af57" containerName="keystone-cron" Nov 25 16:01:46 crc kubenswrapper[4806]: E1125 16:01:46.302133 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e01779d7-9369-4952-ae6f-af5618f075ef" containerName="container-00" Nov 25 16:01:46 crc kubenswrapper[4806]: I1125 16:01:46.302141 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="e01779d7-9369-4952-ae6f-af5618f075ef" containerName="container-00" Nov 25 16:01:46 crc kubenswrapper[4806]: I1125 16:01:46.303175 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="f77fb5e7-b393-4553-9464-219ea8261944" containerName="registry-server" Nov 25 16:01:46 crc kubenswrapper[4806]: I1125 16:01:46.303208 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="e01779d7-9369-4952-ae6f-af5618f075ef" 
containerName="container-00" Nov 25 16:01:46 crc kubenswrapper[4806]: I1125 16:01:46.303217 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="101c3770-ec02-44a6-8020-c77559ce5959" containerName="registry-server" Nov 25 16:01:46 crc kubenswrapper[4806]: I1125 16:01:46.303230 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7a91b74-ad99-4159-9bea-374d0734af57" containerName="keystone-cron" Nov 25 16:01:46 crc kubenswrapper[4806]: I1125 16:01:46.304898 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f52mp" Nov 25 16:01:46 crc kubenswrapper[4806]: I1125 16:01:46.313457 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f52mp"] Nov 25 16:01:46 crc kubenswrapper[4806]: I1125 16:01:46.370773 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/614984cf-eae1-4a83-bb79-ac6f3ee951f4-catalog-content\") pod \"redhat-marketplace-f52mp\" (UID: \"614984cf-eae1-4a83-bb79-ac6f3ee951f4\") " pod="openshift-marketplace/redhat-marketplace-f52mp" Nov 25 16:01:46 crc kubenswrapper[4806]: I1125 16:01:46.371065 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/614984cf-eae1-4a83-bb79-ac6f3ee951f4-utilities\") pod \"redhat-marketplace-f52mp\" (UID: \"614984cf-eae1-4a83-bb79-ac6f3ee951f4\") " pod="openshift-marketplace/redhat-marketplace-f52mp" Nov 25 16:01:46 crc kubenswrapper[4806]: I1125 16:01:46.371371 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqnzl\" (UniqueName: \"kubernetes.io/projected/614984cf-eae1-4a83-bb79-ac6f3ee951f4-kube-api-access-mqnzl\") pod \"redhat-marketplace-f52mp\" (UID: \"614984cf-eae1-4a83-bb79-ac6f3ee951f4\") " pod="openshift-marketplace/redhat-marketplace-f52mp" Nov 25 16:01:46 crc kubenswrapper[4806]: I1125 16:01:46.474076 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqnzl\" (UniqueName: \"kubernetes.io/projected/614984cf-eae1-4a83-bb79-ac6f3ee951f4-kube-api-access-mqnzl\") pod \"redhat-marketplace-f52mp\" (UID: \"614984cf-eae1-4a83-bb79-ac6f3ee951f4\") " pod="openshift-marketplace/redhat-marketplace-f52mp" Nov 25 16:01:46 crc kubenswrapper[4806]: I1125 16:01:46.474779 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/614984cf-eae1-4a83-bb79-ac6f3ee951f4-catalog-content\") pod \"redhat-marketplace-f52mp\" (UID: \"614984cf-eae1-4a83-bb79-ac6f3ee951f4\") " pod="openshift-marketplace/redhat-marketplace-f52mp" Nov 25 16:01:46 crc kubenswrapper[4806]: I1125 16:01:46.474974 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/614984cf-eae1-4a83-bb79-ac6f3ee951f4-utilities\") pod \"redhat-marketplace-f52mp\" (UID: \"614984cf-eae1-4a83-bb79-ac6f3ee951f4\") " pod="openshift-marketplace/redhat-marketplace-f52mp" Nov 25 16:01:46 crc kubenswrapper[4806]: I1125 16:01:46.475416 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/614984cf-eae1-4a83-bb79-ac6f3ee951f4-catalog-content\") pod \"redhat-marketplace-f52mp\" (UID: 
\"614984cf-eae1-4a83-bb79-ac6f3ee951f4\") " pod="openshift-marketplace/redhat-marketplace-f52mp" Nov 25 16:01:46 crc kubenswrapper[4806]: I1125 16:01:46.475427 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/614984cf-eae1-4a83-bb79-ac6f3ee951f4-utilities\") pod \"redhat-marketplace-f52mp\" (UID: \"614984cf-eae1-4a83-bb79-ac6f3ee951f4\") " pod="openshift-marketplace/redhat-marketplace-f52mp" Nov 25 16:01:46 crc kubenswrapper[4806]: I1125 16:01:46.549110 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqnzl\" (UniqueName: \"kubernetes.io/projected/614984cf-eae1-4a83-bb79-ac6f3ee951f4-kube-api-access-mqnzl\") pod \"redhat-marketplace-f52mp\" (UID: \"614984cf-eae1-4a83-bb79-ac6f3ee951f4\") " pod="openshift-marketplace/redhat-marketplace-f52mp" Nov 25 16:01:46 crc kubenswrapper[4806]: I1125 16:01:46.626759 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f52mp" Nov 25 16:01:47 crc kubenswrapper[4806]: I1125 16:01:47.215106 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-f52mp"] Nov 25 16:01:48 crc kubenswrapper[4806]: I1125 16:01:48.046149 4806 generic.go:334] "Generic (PLEG): container finished" podID="614984cf-eae1-4a83-bb79-ac6f3ee951f4" containerID="fc189be659f12c2c10a15be139ffdfb8ea35e6bd66f7adacb495357a3df133e5" exitCode=0 Nov 25 16:01:48 crc kubenswrapper[4806]: I1125 16:01:48.046191 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f52mp" event={"ID":"614984cf-eae1-4a83-bb79-ac6f3ee951f4","Type":"ContainerDied","Data":"fc189be659f12c2c10a15be139ffdfb8ea35e6bd66f7adacb495357a3df133e5"} Nov 25 16:01:48 crc kubenswrapper[4806]: I1125 16:01:48.048691 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f52mp" event={"ID":"614984cf-eae1-4a83-bb79-ac6f3ee951f4","Type":"ContainerStarted","Data":"016fa900175c319a9a183c15c3f41afdccd4a481b1a1df461bce394d6f790b73"} Nov 25 16:01:51 crc kubenswrapper[4806]: I1125 16:01:51.081929 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f52mp" event={"ID":"614984cf-eae1-4a83-bb79-ac6f3ee951f4","Type":"ContainerStarted","Data":"e102e17123d9bd3f186373e2add323af899dba20c134ee2b2d51a6fdab4e5ad1"} Nov 25 16:01:53 crc kubenswrapper[4806]: I1125 16:01:53.107126 4806 generic.go:334] "Generic (PLEG): container finished" podID="614984cf-eae1-4a83-bb79-ac6f3ee951f4" containerID="e102e17123d9bd3f186373e2add323af899dba20c134ee2b2d51a6fdab4e5ad1" exitCode=0 Nov 25 16:01:53 crc kubenswrapper[4806]: I1125 16:01:53.107205 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f52mp" event={"ID":"614984cf-eae1-4a83-bb79-ac6f3ee951f4","Type":"ContainerDied","Data":"e102e17123d9bd3f186373e2add323af899dba20c134ee2b2d51a6fdab4e5ad1"} Nov 25 16:01:55 crc kubenswrapper[4806]: I1125 16:01:55.135878 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f52mp" event={"ID":"614984cf-eae1-4a83-bb79-ac6f3ee951f4","Type":"ContainerStarted","Data":"d408f92c7726d4b41f6dc9cdf92a419f52fe90cfd7664ae23e5e64340dbfe78d"} Nov 25 16:01:56 crc kubenswrapper[4806]: I1125 16:01:56.089937 4806 scope.go:117] "RemoveContainer" containerID="05b6ee2a51d7372338008820486d422e9a505c74a3f4cee7ce748e653b9075de" Nov 25 16:01:56 crc 
kubenswrapper[4806]: E1125 16:01:56.090282 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 16:01:56 crc kubenswrapper[4806]: I1125 16:01:56.627580 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-f52mp" Nov 25 16:01:56 crc kubenswrapper[4806]: I1125 16:01:56.627920 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-f52mp" Nov 25 16:01:56 crc kubenswrapper[4806]: I1125 16:01:56.676026 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-f52mp" Nov 25 16:01:56 crc kubenswrapper[4806]: I1125 16:01:56.705771 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-f52mp" podStartSLOduration=4.277488641 podStartE2EDuration="10.705747927s" podCreationTimestamp="2025-11-25 16:01:46 +0000 UTC" firstStartedPulling="2025-11-25 16:01:48.048079429 +0000 UTC m=+4140.700221840" lastFinishedPulling="2025-11-25 16:01:54.476338715 +0000 UTC m=+4147.128481126" observedRunningTime="2025-11-25 16:01:55.165912375 +0000 UTC m=+4147.818054796" watchObservedRunningTime="2025-11-25 16:01:56.705747927 +0000 UTC m=+4149.357890338" Nov 25 16:02:01 crc kubenswrapper[4806]: I1125 16:02:01.810187 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_82ed644a-fbd9-4ccc-a348-37293a1795f5/init-config-reloader/0.log" Nov 25 16:02:01 crc kubenswrapper[4806]: I1125 16:02:01.998612 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_82ed644a-fbd9-4ccc-a348-37293a1795f5/alertmanager/0.log" Nov 25 16:02:02 crc kubenswrapper[4806]: I1125 16:02:02.033308 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_82ed644a-fbd9-4ccc-a348-37293a1795f5/config-reloader/0.log" Nov 25 16:02:02 crc kubenswrapper[4806]: I1125 16:02:02.220599 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5b5fbf57f8-jxhqp_cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81/barbican-api-log/0.log" Nov 25 16:02:02 crc kubenswrapper[4806]: I1125 16:02:02.241756 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5b5fbf57f8-jxhqp_cfd9535d-9d9c-4c54-b4eb-ba393eaf2d81/barbican-api/0.log" Nov 25 16:02:02 crc kubenswrapper[4806]: I1125 16:02:02.427272 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_82ed644a-fbd9-4ccc-a348-37293a1795f5/init-config-reloader/0.log" Nov 25 16:02:02 crc kubenswrapper[4806]: I1125 16:02:02.428441 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-fc7bb5d48-xzkml_322cf975-d195-44f0-b652-909080e6c2f2/barbican-keystone-listener/0.log" Nov 25 16:02:02 crc kubenswrapper[4806]: I1125 16:02:02.642768 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-66468c84c9-dpswk_9cc24510-0ee6-451a-ae1e-6c057d860972/barbican-worker-log/0.log" Nov 25 16:02:02 crc 
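
The pod_startup_latency_tracker entry above makes the relationship between its two durations visible: podStartSLOduration is podStartE2EDuration (podCreationTimestamp to watchObservedRunningTime) minus the image-pull window (lastFinishedPulling minus firstStartedPulling), so time spent pulling images does not count against the startup SLI. For redhat-marketplace-f52mp: 10.705747927s - (16:01:54.476338715 - 16:01:48.048079429 = 6.428259286s) = 4.277488641s, exactly the logged value; the earlier redhat-operators-rtdkm entry checks out the same way (17.770906285 - 13.679212351 = 4.091693934). A small Go sketch reproducing the computation from the timestamps in the entry (the formula is inferred from how these logged numbers combine, not quoted from kubelet source):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied verbatim from the redhat-marketplace-f52mp entry above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-11-25 16:01:46 +0000 UTC")
	firstStartedPulling := parse("2025-11-25 16:01:48.048079429 +0000 UTC")
	lastFinishedPulling := parse("2025-11-25 16:01:54.476338715 +0000 UTC")
	watchObservedRunning := parse("2025-11-25 16:01:56.705747927 +0000 UTC")

	e2e := watchObservedRunning.Sub(created)             // podStartE2EDuration
	pull := lastFinishedPulling.Sub(firstStartedPulling) // time spent pulling images
	slo := e2e - pull                                    // podStartSLOduration

	fmt.Printf("E2E=%.9fs pull=%.9fs SLO=%.9fs\n",
		e2e.Seconds(), pull.Seconds(), slo.Seconds())
	// E2E=10.705747927s pull=6.428259286s SLO=4.277488641s, matching the log.
}
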
kubenswrapper[4806]: I1125 16:02:02.696307 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-66468c84c9-dpswk_9cc24510-0ee6-451a-ae1e-6c057d860972/barbican-worker/0.log" Nov 25 16:02:02 crc kubenswrapper[4806]: I1125 16:02:02.723009 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-fc7bb5d48-xzkml_322cf975-d195-44f0-b652-909080e6c2f2/barbican-keystone-listener-log/0.log" Nov 25 16:02:02 crc kubenswrapper[4806]: I1125 16:02:02.973566 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-kn8kt_1e02aa69-d4ed-4a30-8c3f-2fe2021298d1/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 16:02:03 crc kubenswrapper[4806]: I1125 16:02:03.367440 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b/proxy-httpd/0.log" Nov 25 16:02:03 crc kubenswrapper[4806]: I1125 16:02:03.386579 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b/sg-core/0.log" Nov 25 16:02:03 crc kubenswrapper[4806]: I1125 16:02:03.404725 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b/ceilometer-notification-agent/0.log" Nov 25 16:02:03 crc kubenswrapper[4806]: I1125 16:02:03.490133 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_1bcbd53d-ee92-4a92-a28a-a3eef0e9d94b/ceilometer-central-agent/0.log" Nov 25 16:02:03 crc kubenswrapper[4806]: I1125 16:02:03.668696 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_d875dfe1-f943-4577-afd4-e301920efac6/cinder-api/0.log" Nov 25 16:02:03 crc kubenswrapper[4806]: I1125 16:02:03.704703 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_d875dfe1-f943-4577-afd4-e301920efac6/cinder-api-log/0.log" Nov 25 16:02:03 crc kubenswrapper[4806]: I1125 16:02:03.965849 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_a6efd5be-f7be-4981-aa85-710e9a0b3dc7/probe/0.log" Nov 25 16:02:03 crc kubenswrapper[4806]: I1125 16:02:03.971159 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_a6efd5be-f7be-4981-aa85-710e9a0b3dc7/cinder-scheduler/0.log" Nov 25 16:02:04 crc kubenswrapper[4806]: I1125 16:02:04.171159 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-api-0_e447777b-718e-4152-a9ac-9f6d8885345f/cloudkitty-api/0.log" Nov 25 16:02:04 crc kubenswrapper[4806]: I1125 16:02:04.275976 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-api-0_e447777b-718e-4152-a9ac-9f6d8885345f/cloudkitty-api-log/0.log" Nov 25 16:02:04 crc kubenswrapper[4806]: I1125 16:02:04.318939 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-compactor-0_b6ecb712-3cf0-4cd4-b823-0ffd452437ce/loki-compactor/0.log" Nov 25 16:02:04 crc kubenswrapper[4806]: I1125 16:02:04.502188 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-76cc998948-fxwbg_a1a1861d-9755-4f0b-8644-37e0e35584e1/gateway/0.log" Nov 25 16:02:04 crc kubenswrapper[4806]: I1125 16:02:04.550085 4806 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cloudkitty-lokistack-distributor-56cd74f89f-bs2h7_4c17fab0-86a8-4e8b-b790-c0a9c91979a3/loki-distributor/0.log" Nov 25 16:02:04 crc kubenswrapper[4806]: I1125 16:02:04.606170 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-76cc998948-gbg2h_1b3c25ba-4426-45b4-8f79-95fd0e07823b/gateway/0.log" Nov 25 16:02:04 crc kubenswrapper[4806]: I1125 16:02:04.900589 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-index-gateway-0_b61e9f82-3559-4710-8b06-4bc2c5997224/loki-index-gateway/0.log" Nov 25 16:02:04 crc kubenswrapper[4806]: I1125 16:02:04.983625 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-ingester-0_cdc49832-6f51-4954-ab25-3f84f6956d1f/loki-ingester/0.log" Nov 25 16:02:05 crc kubenswrapper[4806]: I1125 16:02:05.171108 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-query-frontend-779849886d-mzf6h_f0dc94d5-1470-40f4-8969-84c9690164c8/loki-query-frontend/0.log" Nov 25 16:02:05 crc kubenswrapper[4806]: I1125 16:02:05.334525 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-querier-548665d79b-vt8jx_39c749dc-99ca-45d4-b49a-3e8925e0230a/loki-querier/0.log" Nov 25 16:02:05 crc kubenswrapper[4806]: I1125 16:02:05.527038 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-wwj6w_5ab11811-773f-477f-bb49-59c8dacf771f/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 16:02:05 crc kubenswrapper[4806]: I1125 16:02:05.648471 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-kb9n4_9c0f0294-9956-4bf5-a1c3-2f7010c70008/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 16:02:05 crc kubenswrapper[4806]: I1125 16:02:05.778792 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-85f64749dc-msc97_b20c2934-99f8-4a7e-aa11-2cb645cec451/init/0.log" Nov 25 16:02:06 crc kubenswrapper[4806]: I1125 16:02:06.014923 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-kzwcj_e47040af-0961-465d-a57d-b5a86d51d814/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 16:02:06 crc kubenswrapper[4806]: I1125 16:02:06.107682 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-85f64749dc-msc97_b20c2934-99f8-4a7e-aa11-2cb645cec451/dnsmasq-dns/0.log" Nov 25 16:02:06 crc kubenswrapper[4806]: I1125 16:02:06.142778 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-85f64749dc-msc97_b20c2934-99f8-4a7e-aa11-2cb645cec451/init/0.log" Nov 25 16:02:06 crc kubenswrapper[4806]: I1125 16:02:06.469977 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_125263e2-6d79-4c36-be67-2dd333e3dff5/glance-log/0.log" Nov 25 16:02:06 crc kubenswrapper[4806]: I1125 16:02:06.547269 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_125263e2-6d79-4c36-be67-2dd333e3dff5/glance-httpd/0.log" Nov 25 16:02:06 crc kubenswrapper[4806]: I1125 16:02:06.692160 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-f52mp" Nov 25 16:02:06 crc kubenswrapper[4806]: I1125 16:02:06.708527 
4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_314b444d-00a5-4e80-bc69-07ae78a84ad8/glance-httpd/0.log" Nov 25 16:02:06 crc kubenswrapper[4806]: I1125 16:02:06.756949 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f52mp"] Nov 25 16:02:06 crc kubenswrapper[4806]: I1125 16:02:06.780988 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_314b444d-00a5-4e80-bc69-07ae78a84ad8/glance-log/0.log" Nov 25 16:02:06 crc kubenswrapper[4806]: I1125 16:02:06.926876 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-l96zm_d4de18d0-1ee6-4e6e-a3c5-5a44b4ee8a0b/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 16:02:07 crc kubenswrapper[4806]: I1125 16:02:07.083033 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-dt7mk_5874b1c9-f997-4c96-b5a4-b012416932ba/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 16:02:07 crc kubenswrapper[4806]: I1125 16:02:07.276561 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-f52mp" podUID="614984cf-eae1-4a83-bb79-ac6f3ee951f4" containerName="registry-server" containerID="cri-o://d408f92c7726d4b41f6dc9cdf92a419f52fe90cfd7664ae23e5e64340dbfe78d" gracePeriod=2 Nov 25 16:02:07 crc kubenswrapper[4806]: I1125 16:02:07.336793 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29401441-wg2wq_e7a91b74-ad99-4159-9bea-374d0734af57/keystone-cron/0.log" Nov 25 16:02:07 crc kubenswrapper[4806]: I1125 16:02:07.527491 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_9c050b95-eb84-4171-a52c-ee1e4614c301/kube-state-metrics/3.log" Nov 25 16:02:07 crc kubenswrapper[4806]: I1125 16:02:07.621746 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-8486684b84-snnmc_73a2b4d6-a670-4f80-b8a1-9e2ac7c8bae5/keystone-api/0.log" Nov 25 16:02:07 crc kubenswrapper[4806]: I1125 16:02:07.636691 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_9c050b95-eb84-4171-a52c-ee1e4614c301/kube-state-metrics/2.log" Nov 25 16:02:07 crc kubenswrapper[4806]: I1125 16:02:07.736794 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-gdntk_63e0c8ca-cbfc-476a-b68a-00b39c2a7a47/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 16:02:08 crc kubenswrapper[4806]: I1125 16:02:08.098109 4806 scope.go:117] "RemoveContainer" containerID="05b6ee2a51d7372338008820486d422e9a505c74a3f4cee7ce748e653b9075de" Nov 25 16:02:08 crc kubenswrapper[4806]: E1125 16:02:08.098437 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 16:02:08 crc kubenswrapper[4806]: I1125 16:02:08.317169 4806 generic.go:334] "Generic (PLEG): container finished" podID="614984cf-eae1-4a83-bb79-ac6f3ee951f4" 
containerID="d408f92c7726d4b41f6dc9cdf92a419f52fe90cfd7664ae23e5e64340dbfe78d" exitCode=0 Nov 25 16:02:08 crc kubenswrapper[4806]: I1125 16:02:08.317232 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f52mp" event={"ID":"614984cf-eae1-4a83-bb79-ac6f3ee951f4","Type":"ContainerDied","Data":"d408f92c7726d4b41f6dc9cdf92a419f52fe90cfd7664ae23e5e64340dbfe78d"} Nov 25 16:02:08 crc kubenswrapper[4806]: I1125 16:02:08.329213 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5546966469-bclkx_5c1bd1be-9aa3-4444-a30c-1a3926c79b49/neutron-httpd/0.log" Nov 25 16:02:08 crc kubenswrapper[4806]: I1125 16:02:08.397366 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5546966469-bclkx_5c1bd1be-9aa3-4444-a30c-1a3926c79b49/neutron-api/0.log" Nov 25 16:02:08 crc kubenswrapper[4806]: I1125 16:02:08.549882 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-bcd4s_5b01cee4-68ad-4117-9841-8dea2142524a/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 16:02:08 crc kubenswrapper[4806]: I1125 16:02:08.616921 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f52mp" Nov 25 16:02:08 crc kubenswrapper[4806]: I1125 16:02:08.756207 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqnzl\" (UniqueName: \"kubernetes.io/projected/614984cf-eae1-4a83-bb79-ac6f3ee951f4-kube-api-access-mqnzl\") pod \"614984cf-eae1-4a83-bb79-ac6f3ee951f4\" (UID: \"614984cf-eae1-4a83-bb79-ac6f3ee951f4\") " Nov 25 16:02:08 crc kubenswrapper[4806]: I1125 16:02:08.756462 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/614984cf-eae1-4a83-bb79-ac6f3ee951f4-utilities\") pod \"614984cf-eae1-4a83-bb79-ac6f3ee951f4\" (UID: \"614984cf-eae1-4a83-bb79-ac6f3ee951f4\") " Nov 25 16:02:08 crc kubenswrapper[4806]: I1125 16:02:08.756517 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/614984cf-eae1-4a83-bb79-ac6f3ee951f4-catalog-content\") pod \"614984cf-eae1-4a83-bb79-ac6f3ee951f4\" (UID: \"614984cf-eae1-4a83-bb79-ac6f3ee951f4\") " Nov 25 16:02:08 crc kubenswrapper[4806]: I1125 16:02:08.757587 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/614984cf-eae1-4a83-bb79-ac6f3ee951f4-utilities" (OuterVolumeSpecName: "utilities") pod "614984cf-eae1-4a83-bb79-ac6f3ee951f4" (UID: "614984cf-eae1-4a83-bb79-ac6f3ee951f4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 16:02:08 crc kubenswrapper[4806]: I1125 16:02:08.764367 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/614984cf-eae1-4a83-bb79-ac6f3ee951f4-kube-api-access-mqnzl" (OuterVolumeSpecName: "kube-api-access-mqnzl") pod "614984cf-eae1-4a83-bb79-ac6f3ee951f4" (UID: "614984cf-eae1-4a83-bb79-ac6f3ee951f4"). InnerVolumeSpecName "kube-api-access-mqnzl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 16:02:08 crc kubenswrapper[4806]: I1125 16:02:08.781724 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/614984cf-eae1-4a83-bb79-ac6f3ee951f4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "614984cf-eae1-4a83-bb79-ac6f3ee951f4" (UID: "614984cf-eae1-4a83-bb79-ac6f3ee951f4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 16:02:08 crc kubenswrapper[4806]: I1125 16:02:08.859029 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqnzl\" (UniqueName: \"kubernetes.io/projected/614984cf-eae1-4a83-bb79-ac6f3ee951f4-kube-api-access-mqnzl\") on node \"crc\" DevicePath \"\"" Nov 25 16:02:08 crc kubenswrapper[4806]: I1125 16:02:08.859059 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/614984cf-eae1-4a83-bb79-ac6f3ee951f4-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 16:02:08 crc kubenswrapper[4806]: I1125 16:02:08.859068 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/614984cf-eae1-4a83-bb79-ac6f3ee951f4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 16:02:09 crc kubenswrapper[4806]: I1125 16:02:09.174512 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2cbc26a8-c7dd-4d9d-bc1b-32e593fc6251/nova-api-log/0.log" Nov 25 16:02:09 crc kubenswrapper[4806]: I1125 16:02:09.334579 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-f52mp" event={"ID":"614984cf-eae1-4a83-bb79-ac6f3ee951f4","Type":"ContainerDied","Data":"016fa900175c319a9a183c15c3f41afdccd4a481b1a1df461bce394d6f790b73"} Nov 25 16:02:09 crc kubenswrapper[4806]: I1125 16:02:09.334627 4806 scope.go:117] "RemoveContainer" containerID="d408f92c7726d4b41f6dc9cdf92a419f52fe90cfd7664ae23e5e64340dbfe78d" Nov 25 16:02:09 crc kubenswrapper[4806]: I1125 16:02:09.334684 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-f52mp" Nov 25 16:02:09 crc kubenswrapper[4806]: I1125 16:02:09.358792 4806 scope.go:117] "RemoveContainer" containerID="e102e17123d9bd3f186373e2add323af899dba20c134ee2b2d51a6fdab4e5ad1" Nov 25 16:02:09 crc kubenswrapper[4806]: I1125 16:02:09.383430 4806 scope.go:117] "RemoveContainer" containerID="fc189be659f12c2c10a15be139ffdfb8ea35e6bd66f7adacb495357a3df133e5" Nov 25 16:02:09 crc kubenswrapper[4806]: I1125 16:02:09.396412 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-f52mp"] Nov 25 16:02:09 crc kubenswrapper[4806]: I1125 16:02:09.411673 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-f52mp"] Nov 25 16:02:09 crc kubenswrapper[4806]: I1125 16:02:09.494708 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_2e27c6b8-d0b8-43a7-a3ee-2f3703315a7b/nova-cell0-conductor-conductor/0.log" Nov 25 16:02:09 crc kubenswrapper[4806]: I1125 16:02:09.530059 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_2cbc26a8-c7dd-4d9d-bc1b-32e593fc6251/nova-api-api/0.log" Nov 25 16:02:09 crc kubenswrapper[4806]: I1125 16:02:09.833050 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_d3f3eddf-31e1-4923-b0e1-1245f37ea5b8/nova-cell1-conductor-conductor/0.log" Nov 25 16:02:10 crc kubenswrapper[4806]: I1125 16:02:10.040816 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_f96a2277-fc94-465c-beae-9461e69ef4e3/nova-cell1-novncproxy-novncproxy/0.log" Nov 25 16:02:10 crc kubenswrapper[4806]: I1125 16:02:10.103122 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="614984cf-eae1-4a83-bb79-ac6f3ee951f4" path="/var/lib/kubelet/pods/614984cf-eae1-4a83-bb79-ac6f3ee951f4/volumes" Nov 25 16:02:10 crc kubenswrapper[4806]: I1125 16:02:10.212074 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-qvk7r_dc945807-33cb-4f78-9fed-c65adc25aeef/nova-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 16:02:10 crc kubenswrapper[4806]: I1125 16:02:10.477499 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_0a41e572-3193-4163-81ab-e3ee7b072461/nova-metadata-log/0.log" Nov 25 16:02:11 crc kubenswrapper[4806]: I1125 16:02:11.266991 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_e6705187-ba84-405e-9d7a-6e3b97e1b9f3/nova-scheduler-scheduler/0.log" Nov 25 16:02:11 crc kubenswrapper[4806]: I1125 16:02:11.512037 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0c667706-daaf-4283-9ebb-64bae95b4914/mysql-bootstrap/0.log" Nov 25 16:02:11 crc kubenswrapper[4806]: I1125 16:02:11.750291 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0c667706-daaf-4283-9ebb-64bae95b4914/mysql-bootstrap/0.log" Nov 25 16:02:11 crc kubenswrapper[4806]: I1125 16:02:11.752301 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0c667706-daaf-4283-9ebb-64bae95b4914/galera/0.log" Nov 25 16:02:11 crc kubenswrapper[4806]: I1125 16:02:11.894692 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-proc-0_69ec7b50-f06b-4a12-8c24-8781116d0604/cloudkitty-proc/0.log" Nov 25 16:02:11 crc 
kubenswrapper[4806]: I1125 16:02:11.980578 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_fc946fac-46fb-45c0-8a69-2e481bf9d947/mysql-bootstrap/0.log" Nov 25 16:02:12 crc kubenswrapper[4806]: I1125 16:02:12.115665 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_fc946fac-46fb-45c0-8a69-2e481bf9d947/mysql-bootstrap/0.log" Nov 25 16:02:12 crc kubenswrapper[4806]: I1125 16:02:12.185690 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_fc946fac-46fb-45c0-8a69-2e481bf9d947/galera/0.log" Nov 25 16:02:12 crc kubenswrapper[4806]: I1125 16:02:12.315837 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_3e62db5f-8827-474f-9dc5-654aaa347996/openstackclient/0.log" Nov 25 16:02:12 crc kubenswrapper[4806]: I1125 16:02:12.603522 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-l6mv2_c90d07c6-4f04-48d1-ae1f-bb15f60ba44b/ovn-controller/0.log" Nov 25 16:02:12 crc kubenswrapper[4806]: I1125 16:02:12.792253 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_0a41e572-3193-4163-81ab-e3ee7b072461/nova-metadata-metadata/0.log" Nov 25 16:02:13 crc kubenswrapper[4806]: I1125 16:02:13.213771 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-dhcsq_cb8eb50b-2bea-43d0-b0b6-698bc3709b1d/openstack-network-exporter/0.log" Nov 25 16:02:13 crc kubenswrapper[4806]: I1125 16:02:13.274196 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-svmbm_0ebac08b-471e-4b28-98fb-b9bab2e3f505/ovsdb-server-init/0.log" Nov 25 16:02:13 crc kubenswrapper[4806]: I1125 16:02:13.511830 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-svmbm_0ebac08b-471e-4b28-98fb-b9bab2e3f505/ovs-vswitchd/0.log" Nov 25 16:02:13 crc kubenswrapper[4806]: I1125 16:02:13.557191 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-svmbm_0ebac08b-471e-4b28-98fb-b9bab2e3f505/ovsdb-server-init/0.log" Nov 25 16:02:13 crc kubenswrapper[4806]: I1125 16:02:13.589869 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-svmbm_0ebac08b-471e-4b28-98fb-b9bab2e3f505/ovsdb-server/0.log" Nov 25 16:02:13 crc kubenswrapper[4806]: I1125 16:02:13.730609 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-xjxhn_69414d23-6d19-459c-8930-73ad33dd73e5/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 16:02:13 crc kubenswrapper[4806]: I1125 16:02:13.918996 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_fb15262a-cd0a-45e1-b1c4-9d5221f2e707/ovn-northd/0.log" Nov 25 16:02:13 crc kubenswrapper[4806]: I1125 16:02:13.941086 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_fb15262a-cd0a-45e1-b1c4-9d5221f2e707/openstack-network-exporter/0.log" Nov 25 16:02:14 crc kubenswrapper[4806]: I1125 16:02:14.131882 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ec42948f-25cf-4ae0-8553-dfd5dcc43021/openstack-network-exporter/0.log" Nov 25 16:02:14 crc kubenswrapper[4806]: I1125 16:02:14.245037 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_ec42948f-25cf-4ae0-8553-dfd5dcc43021/ovsdbserver-nb/0.log" Nov 25 16:02:14 crc 
kubenswrapper[4806]: I1125 16:02:14.286231 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_2235e648-6ec4-4d98-a879-46f4f56b93e0/openstack-network-exporter/0.log" Nov 25 16:02:14 crc kubenswrapper[4806]: I1125 16:02:14.363352 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_2235e648-6ec4-4d98-a879-46f4f56b93e0/ovsdbserver-sb/0.log" Nov 25 16:02:14 crc kubenswrapper[4806]: I1125 16:02:14.529153 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6c84b48b46-vlp89_fac79279-6dad-4f14-8e06-4d705d8f552d/placement-api/0.log" Nov 25 16:02:14 crc kubenswrapper[4806]: I1125 16:02:14.636098 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6c84b48b46-vlp89_fac79279-6dad-4f14-8e06-4d705d8f552d/placement-log/0.log" Nov 25 16:02:14 crc kubenswrapper[4806]: I1125 16:02:14.700947 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_aafcef1f-4988-49d1-88f0-47a44d8f18fc/init-config-reloader/0.log" Nov 25 16:02:14 crc kubenswrapper[4806]: I1125 16:02:14.982190 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_aafcef1f-4988-49d1-88f0-47a44d8f18fc/init-config-reloader/0.log" Nov 25 16:02:14 crc kubenswrapper[4806]: I1125 16:02:14.982369 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_aafcef1f-4988-49d1-88f0-47a44d8f18fc/config-reloader/0.log" Nov 25 16:02:14 crc kubenswrapper[4806]: I1125 16:02:14.992951 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_aafcef1f-4988-49d1-88f0-47a44d8f18fc/prometheus/0.log" Nov 25 16:02:14 crc kubenswrapper[4806]: I1125 16:02:14.994539 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_aafcef1f-4988-49d1-88f0-47a44d8f18fc/thanos-sidecar/0.log" Nov 25 16:02:15 crc kubenswrapper[4806]: I1125 16:02:15.348603 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f89c7d3f-93e9-464e-bf10-a2df33402031/setup-container/0.log" Nov 25 16:02:15 crc kubenswrapper[4806]: I1125 16:02:15.461632 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f89c7d3f-93e9-464e-bf10-a2df33402031/setup-container/0.log" Nov 25 16:02:15 crc kubenswrapper[4806]: I1125 16:02:15.545774 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f89c7d3f-93e9-464e-bf10-a2df33402031/rabbitmq/0.log" Nov 25 16:02:15 crc kubenswrapper[4806]: I1125 16:02:15.628252 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_94eec7e9-06e0-4096-8b0e-89a012fb3495/setup-container/0.log" Nov 25 16:02:15 crc kubenswrapper[4806]: I1125 16:02:15.838540 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_94eec7e9-06e0-4096-8b0e-89a012fb3495/rabbitmq/0.log" Nov 25 16:02:15 crc kubenswrapper[4806]: I1125 16:02:15.904082 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_94eec7e9-06e0-4096-8b0e-89a012fb3495/setup-container/0.log" Nov 25 16:02:15 crc kubenswrapper[4806]: I1125 16:02:15.914256 4806 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-f9cmk_2f849708-31fc-45af-8eb8-75bd30094be9/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 16:02:16 crc kubenswrapper[4806]: I1125 16:02:16.197329 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-jm5z5_2cd3c61a-f9b2-4746-ba1d-226aea23d908/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 16:02:16 crc kubenswrapper[4806]: I1125 16:02:16.223241 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-5hk27_4a338892-2bb8-41bf-aae0-d726d31e76b3/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 16:02:16 crc kubenswrapper[4806]: I1125 16:02:16.524848 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-dtz44_6ab72e48-ad31-4614-a3a0-44f0dd9762a9/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 16:02:16 crc kubenswrapper[4806]: I1125 16:02:16.537741 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-qbtlz_0d16f874-9406-497e-ad89-6e5ce5c109f5/ssh-known-hosts-edpm-deployment/0.log" Nov 25 16:02:16 crc kubenswrapper[4806]: I1125 16:02:16.737542 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6d6dfc6f67-wrhhk_3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0/proxy-server/0.log" Nov 25 16:02:16 crc kubenswrapper[4806]: I1125 16:02:16.913885 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6d6dfc6f67-wrhhk_3e3bc8a0-7f1e-4d10-95f8-9f44ec36a5e0/proxy-httpd/0.log" Nov 25 16:02:16 crc kubenswrapper[4806]: I1125 16:02:16.925677 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-wpqhp_998fc00a-139c-4c9a-9765-a445527be5aa/swift-ring-rebalance/0.log" Nov 25 16:02:17 crc kubenswrapper[4806]: I1125 16:02:17.070350 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_837cf2fb-8640-4ac3-ad91-84ff1dba54e6/account-auditor/0.log" Nov 25 16:02:17 crc kubenswrapper[4806]: I1125 16:02:17.158334 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_837cf2fb-8640-4ac3-ad91-84ff1dba54e6/account-reaper/0.log" Nov 25 16:02:17 crc kubenswrapper[4806]: I1125 16:02:17.159228 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_837cf2fb-8640-4ac3-ad91-84ff1dba54e6/account-replicator/0.log" Nov 25 16:02:17 crc kubenswrapper[4806]: I1125 16:02:17.270876 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_837cf2fb-8640-4ac3-ad91-84ff1dba54e6/account-server/0.log" Nov 25 16:02:17 crc kubenswrapper[4806]: I1125 16:02:17.285065 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_837cf2fb-8640-4ac3-ad91-84ff1dba54e6/container-auditor/0.log" Nov 25 16:02:17 crc kubenswrapper[4806]: I1125 16:02:17.400633 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_837cf2fb-8640-4ac3-ad91-84ff1dba54e6/container-server/0.log" Nov 25 16:02:17 crc kubenswrapper[4806]: I1125 16:02:17.443671 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_837cf2fb-8640-4ac3-ad91-84ff1dba54e6/container-replicator/0.log" Nov 25 16:02:17 crc kubenswrapper[4806]: I1125 16:02:17.649223 4806 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_837cf2fb-8640-4ac3-ad91-84ff1dba54e6/container-updater/0.log" Nov 25 16:02:17 crc kubenswrapper[4806]: I1125 16:02:17.668203 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_837cf2fb-8640-4ac3-ad91-84ff1dba54e6/object-auditor/0.log" Nov 25 16:02:17 crc kubenswrapper[4806]: I1125 16:02:17.791756 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_837cf2fb-8640-4ac3-ad91-84ff1dba54e6/object-expirer/0.log" Nov 25 16:02:17 crc kubenswrapper[4806]: I1125 16:02:17.802296 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_837cf2fb-8640-4ac3-ad91-84ff1dba54e6/object-replicator/0.log" Nov 25 16:02:17 crc kubenswrapper[4806]: I1125 16:02:17.911950 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_837cf2fb-8640-4ac3-ad91-84ff1dba54e6/object-updater/0.log" Nov 25 16:02:17 crc kubenswrapper[4806]: I1125 16:02:17.981437 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_837cf2fb-8640-4ac3-ad91-84ff1dba54e6/object-server/0.log" Nov 25 16:02:18 crc kubenswrapper[4806]: I1125 16:02:18.039113 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_837cf2fb-8640-4ac3-ad91-84ff1dba54e6/swift-recon-cron/0.log" Nov 25 16:02:18 crc kubenswrapper[4806]: I1125 16:02:18.057220 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_837cf2fb-8640-4ac3-ad91-84ff1dba54e6/rsync/0.log" Nov 25 16:02:18 crc kubenswrapper[4806]: I1125 16:02:18.303374 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_2ac30dde-ccba-4cb3-a2e4-540d47610c83/tempest-tests-tempest-tests-runner/0.log" Nov 25 16:02:18 crc kubenswrapper[4806]: I1125 16:02:18.311595 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-ffzw4_6e3bb0ce-18a1-49d0-aff6-4d45985913a6/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 16:02:18 crc kubenswrapper[4806]: I1125 16:02:18.578096 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_91a15fb4-157c-42c7-b66c-107db1dcd4cf/test-operator-logs-container/0.log" Nov 25 16:02:18 crc kubenswrapper[4806]: I1125 16:02:18.628921 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-59ddg_dc9534cb-ed46-40c5-918b-d20679144d6f/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 16:02:23 crc kubenswrapper[4806]: I1125 16:02:23.089070 4806 scope.go:117] "RemoveContainer" containerID="05b6ee2a51d7372338008820486d422e9a505c74a3f4cee7ce748e653b9075de" Nov 25 16:02:24 crc kubenswrapper[4806]: I1125 16:02:24.499380 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" event={"ID":"39baff20-1e9a-48b1-8872-155c5ad5931d","Type":"ContainerStarted","Data":"72051d852726a5c14a2394d688bd5080eb1e551bea11498fe7549f05508fb439"} Nov 25 16:02:27 crc kubenswrapper[4806]: I1125 16:02:27.685629 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_31cd92ea-0a03-4883-9d96-532a9d5c3bd0/memcached/0.log" Nov 25 16:02:48 crc kubenswrapper[4806]: I1125 16:02:48.141138 4806 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-qk9m2_537dc134-0732-4dfc-b0be-9c16d3d191be/kube-rbac-proxy/0.log" Nov 25 16:02:48 crc kubenswrapper[4806]: I1125 16:02:48.192882 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-qk9m2_537dc134-0732-4dfc-b0be-9c16d3d191be/manager/2.log" Nov 25 16:02:48 crc kubenswrapper[4806]: I1125 16:02:48.343863 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-qk9m2_537dc134-0732-4dfc-b0be-9c16d3d191be/manager/1.log" Nov 25 16:02:48 crc kubenswrapper[4806]: I1125 16:02:48.392174 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-w6686_40a580de-1093-4adc-a98c-e18202bee9e3/kube-rbac-proxy/0.log" Nov 25 16:02:48 crc kubenswrapper[4806]: I1125 16:02:48.458263 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-w6686_40a580de-1093-4adc-a98c-e18202bee9e3/manager/2.log" Nov 25 16:02:48 crc kubenswrapper[4806]: I1125 16:02:48.580922 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-w6686_40a580de-1093-4adc-a98c-e18202bee9e3/manager/1.log" Nov 25 16:02:48 crc kubenswrapper[4806]: I1125 16:02:48.650444 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d1c59f685edf6520a02ef8c247b7f80eade9502e4f411b8324f7c9afb04dwsg_916f8aac-10d3-4065-89bc-1d935732c91e/util/0.log" Nov 25 16:02:48 crc kubenswrapper[4806]: I1125 16:02:48.836344 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d1c59f685edf6520a02ef8c247b7f80eade9502e4f411b8324f7c9afb04dwsg_916f8aac-10d3-4065-89bc-1d935732c91e/pull/0.log" Nov 25 16:02:48 crc kubenswrapper[4806]: I1125 16:02:48.853054 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d1c59f685edf6520a02ef8c247b7f80eade9502e4f411b8324f7c9afb04dwsg_916f8aac-10d3-4065-89bc-1d935732c91e/pull/0.log" Nov 25 16:02:48 crc kubenswrapper[4806]: I1125 16:02:48.857987 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d1c59f685edf6520a02ef8c247b7f80eade9502e4f411b8324f7c9afb04dwsg_916f8aac-10d3-4065-89bc-1d935732c91e/util/0.log" Nov 25 16:02:49 crc kubenswrapper[4806]: I1125 16:02:49.085144 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d1c59f685edf6520a02ef8c247b7f80eade9502e4f411b8324f7c9afb04dwsg_916f8aac-10d3-4065-89bc-1d935732c91e/extract/0.log" Nov 25 16:02:49 crc kubenswrapper[4806]: I1125 16:02:49.107970 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d1c59f685edf6520a02ef8c247b7f80eade9502e4f411b8324f7c9afb04dwsg_916f8aac-10d3-4065-89bc-1d935732c91e/util/0.log" Nov 25 16:02:49 crc kubenswrapper[4806]: I1125 16:02:49.253566 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_d1c59f685edf6520a02ef8c247b7f80eade9502e4f411b8324f7c9afb04dwsg_916f8aac-10d3-4065-89bc-1d935732c91e/pull/0.log" Nov 25 16:02:49 crc kubenswrapper[4806]: I1125 16:02:49.306852 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-wfsxk_de253966-f7ff-485f-8108-b8ee0fd795bf/kube-rbac-proxy/0.log" Nov 25 16:02:49 crc kubenswrapper[4806]: I1125 
16:02:49.365672 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-wfsxk_de253966-f7ff-485f-8108-b8ee0fd795bf/manager/2.log" Nov 25 16:02:49 crc kubenswrapper[4806]: I1125 16:02:49.397640 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-wfsxk_de253966-f7ff-485f-8108-b8ee0fd795bf/manager/1.log" Nov 25 16:02:49 crc kubenswrapper[4806]: I1125 16:02:49.535090 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-r8dnj_fbf78fa8-8b88-454e-a7dc-0e75f463bc45/kube-rbac-proxy/0.log" Nov 25 16:02:49 crc kubenswrapper[4806]: I1125 16:02:49.589629 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-r8dnj_fbf78fa8-8b88-454e-a7dc-0e75f463bc45/manager/2.log" Nov 25 16:02:49 crc kubenswrapper[4806]: I1125 16:02:49.640551 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-r8dnj_fbf78fa8-8b88-454e-a7dc-0e75f463bc45/manager/1.log" Nov 25 16:02:49 crc kubenswrapper[4806]: I1125 16:02:49.734301 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-jcrbm_8294cfe0-6c14-49bc-bd5b-d614a68893ce/kube-rbac-proxy/0.log" Nov 25 16:02:49 crc kubenswrapper[4806]: I1125 16:02:49.856884 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-jcrbm_8294cfe0-6c14-49bc-bd5b-d614a68893ce/manager/2.log" Nov 25 16:02:49 crc kubenswrapper[4806]: I1125 16:02:49.919117 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-jcrbm_8294cfe0-6c14-49bc-bd5b-d614a68893ce/manager/1.log" Nov 25 16:02:49 crc kubenswrapper[4806]: I1125 16:02:49.994638 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-h9qg8_461ceb26-b86c-4bb8-9550-131351dfa3e5/kube-rbac-proxy/0.log" Nov 25 16:02:50 crc kubenswrapper[4806]: I1125 16:02:50.081831 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-h9qg8_461ceb26-b86c-4bb8-9550-131351dfa3e5/manager/2.log" Nov 25 16:02:50 crc kubenswrapper[4806]: I1125 16:02:50.117934 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-h9qg8_461ceb26-b86c-4bb8-9550-131351dfa3e5/manager/1.log" Nov 25 16:02:50 crc kubenswrapper[4806]: I1125 16:02:50.203343 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-xlzgr_e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329/kube-rbac-proxy/0.log" Nov 25 16:02:50 crc kubenswrapper[4806]: I1125 16:02:50.259662 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-xlzgr_e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329/manager/3.log" Nov 25 16:02:50 crc kubenswrapper[4806]: I1125 16:02:50.361171 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-xlzgr_e8acc0a9-7e17-4cc4-bcc6-fdd9616f0329/manager/2.log" Nov 25 16:02:50 crc kubenswrapper[4806]: I1125 
16:02:50.377254 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-q6z52_ec8a3bcc-2127-44bc-8f89-db3ece24a9b9/kube-rbac-proxy/0.log" Nov 25 16:02:50 crc kubenswrapper[4806]: I1125 16:02:50.561573 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-q6z52_ec8a3bcc-2127-44bc-8f89-db3ece24a9b9/manager/2.log" Nov 25 16:02:50 crc kubenswrapper[4806]: I1125 16:02:50.619536 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-q6z52_ec8a3bcc-2127-44bc-8f89-db3ece24a9b9/manager/1.log" Nov 25 16:02:50 crc kubenswrapper[4806]: I1125 16:02:50.647540 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-w5r5m_61457634-dc4d-4ad9-9bdc-c95aae5df022/kube-rbac-proxy/0.log" Nov 25 16:02:50 crc kubenswrapper[4806]: I1125 16:02:50.829733 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-w5r5m_61457634-dc4d-4ad9-9bdc-c95aae5df022/manager/3.log" Nov 25 16:02:50 crc kubenswrapper[4806]: I1125 16:02:50.853573 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-w5r5m_61457634-dc4d-4ad9-9bdc-c95aae5df022/manager/2.log" Nov 25 16:02:50 crc kubenswrapper[4806]: I1125 16:02:50.951983 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-bwwh4_9cc0ebc5-e3d4-4bae-8b33-032d950705ff/kube-rbac-proxy/0.log" Nov 25 16:02:51 crc kubenswrapper[4806]: I1125 16:02:51.020183 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-bwwh4_9cc0ebc5-e3d4-4bae-8b33-032d950705ff/manager/2.log" Nov 25 16:02:51 crc kubenswrapper[4806]: I1125 16:02:51.099800 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-bwwh4_9cc0ebc5-e3d4-4bae-8b33-032d950705ff/manager/1.log" Nov 25 16:02:51 crc kubenswrapper[4806]: I1125 16:02:51.191966 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-9thxp_c1159ae9-b734-4012-b746-35d037ee4817/kube-rbac-proxy/0.log" Nov 25 16:02:51 crc kubenswrapper[4806]: I1125 16:02:51.232585 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-9thxp_c1159ae9-b734-4012-b746-35d037ee4817/manager/3.log" Nov 25 16:02:51 crc kubenswrapper[4806]: I1125 16:02:51.367907 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-9thxp_c1159ae9-b734-4012-b746-35d037ee4817/manager/2.log" Nov 25 16:02:51 crc kubenswrapper[4806]: I1125 16:02:51.435235 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-c5xhr_d2f4f05a-5ae5-4f49-87f2-a1e642ee0ac7/kube-rbac-proxy/0.log" Nov 25 16:02:51 crc kubenswrapper[4806]: I1125 16:02:51.457937 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-c5xhr_d2f4f05a-5ae5-4f49-87f2-a1e642ee0ac7/manager/2.log" Nov 25 16:02:51 crc 
kubenswrapper[4806]: I1125 16:02:51.564054 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-c5xhr_d2f4f05a-5ae5-4f49-87f2-a1e642ee0ac7/manager/1.log" Nov 25 16:02:51 crc kubenswrapper[4806]: I1125 16:02:51.623766 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-wfhhn_63efe3dc-03df-4494-9661-9a23a89c0974/kube-rbac-proxy/0.log" Nov 25 16:02:51 crc kubenswrapper[4806]: I1125 16:02:51.690548 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-wfhhn_63efe3dc-03df-4494-9661-9a23a89c0974/manager/3.log" Nov 25 16:02:51 crc kubenswrapper[4806]: I1125 16:02:51.773144 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-wfhhn_63efe3dc-03df-4494-9661-9a23a89c0974/manager/2.log" Nov 25 16:02:51 crc kubenswrapper[4806]: I1125 16:02:51.848174 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-cqwgq_2a080dd6-0904-4756-8b02-39d10465fea2/kube-rbac-proxy/0.log" Nov 25 16:02:51 crc kubenswrapper[4806]: I1125 16:02:51.903516 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-cqwgq_2a080dd6-0904-4756-8b02-39d10465fea2/manager/2.log" Nov 25 16:02:52 crc kubenswrapper[4806]: I1125 16:02:52.024480 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-cqwgq_2a080dd6-0904-4756-8b02-39d10465fea2/manager/1.log" Nov 25 16:02:52 crc kubenswrapper[4806]: I1125 16:02:52.103310 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-d7m7g_b3220f94-14c9-4820-9d1b-6b4bb1b635fd/kube-rbac-proxy/0.log" Nov 25 16:02:52 crc kubenswrapper[4806]: I1125 16:02:52.123843 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-d7m7g_b3220f94-14c9-4820-9d1b-6b4bb1b635fd/manager/1.log" Nov 25 16:02:52 crc kubenswrapper[4806]: I1125 16:02:52.227468 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-d7m7g_b3220f94-14c9-4820-9d1b-6b4bb1b635fd/manager/0.log" Nov 25 16:02:52 crc kubenswrapper[4806]: I1125 16:02:52.336143 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7c468db9ff-2r8gr_b97ff802-8b8f-47d4-bff1-7d6876f780ff/manager/2.log" Nov 25 16:02:52 crc kubenswrapper[4806]: I1125 16:02:52.460433 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7c468db9ff-2r8gr_b97ff802-8b8f-47d4-bff1-7d6876f780ff/manager/3.log" Nov 25 16:02:52 crc kubenswrapper[4806]: I1125 16:02:52.476394 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-779bfcf6cb-zxvzf_8fe87500-5164-48de-a495-f6d74b05b7f9/operator/1.log" Nov 25 16:02:52 crc kubenswrapper[4806]: I1125 16:02:52.621843 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-csjwd_54ffd9a7-4d3c-4e19-855a-8f54e7d9d513/registry-server/0.log" Nov 25 
16:02:52 crc kubenswrapper[4806]: I1125 16:02:52.731110 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-779bfcf6cb-zxvzf_8fe87500-5164-48de-a495-f6d74b05b7f9/operator/0.log" Nov 25 16:02:52 crc kubenswrapper[4806]: I1125 16:02:52.759975 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-tzsbk_9dc1bbe2-49c1-4601-9acf-b1887426fdd0/kube-rbac-proxy/0.log" Nov 25 16:02:52 crc kubenswrapper[4806]: I1125 16:02:52.892621 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-tzsbk_9dc1bbe2-49c1-4601-9acf-b1887426fdd0/manager/2.log" Nov 25 16:02:52 crc kubenswrapper[4806]: I1125 16:02:52.893630 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-tzsbk_9dc1bbe2-49c1-4601-9acf-b1887426fdd0/manager/3.log" Nov 25 16:02:52 crc kubenswrapper[4806]: I1125 16:02:52.967076 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-fxzwv_24cfe3fd-9b1a-4b9a-9b99-1b089fa2124b/kube-rbac-proxy/0.log" Nov 25 16:02:52 crc kubenswrapper[4806]: I1125 16:02:52.984318 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-fxzwv_24cfe3fd-9b1a-4b9a-9b99-1b089fa2124b/manager/3.log" Nov 25 16:02:53 crc kubenswrapper[4806]: I1125 16:02:53.144742 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-fxzwv_24cfe3fd-9b1a-4b9a-9b99-1b089fa2124b/manager/2.log" Nov 25 16:02:53 crc kubenswrapper[4806]: I1125 16:02:53.189604 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-2snr9_fd7fd3ac-d6f9-4f62-9cbd-e6a28b88be30/operator/3.log" Nov 25 16:02:53 crc kubenswrapper[4806]: I1125 16:02:53.193690 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-2snr9_fd7fd3ac-d6f9-4f62-9cbd-e6a28b88be30/operator/2.log" Nov 25 16:02:53 crc kubenswrapper[4806]: I1125 16:02:53.327009 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-pxx5w_1df7970b-bed8-4e27-b04b-66e513683875/kube-rbac-proxy/0.log" Nov 25 16:02:53 crc kubenswrapper[4806]: I1125 16:02:53.370640 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-pxx5w_1df7970b-bed8-4e27-b04b-66e513683875/manager/3.log" Nov 25 16:02:53 crc kubenswrapper[4806]: I1125 16:02:53.401971 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-pxx5w_1df7970b-bed8-4e27-b04b-66e513683875/manager/2.log" Nov 25 16:02:53 crc kubenswrapper[4806]: I1125 16:02:53.411524 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-687f46fc78-xdmx6_dbedcc0b-12de-4497-a9f3-a9df6c88a74f/kube-rbac-proxy/0.log" Nov 25 16:02:53 crc kubenswrapper[4806]: I1125 16:02:53.546535 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-687f46fc78-xdmx6_dbedcc0b-12de-4497-a9f3-a9df6c88a74f/manager/1.log" Nov 25 
16:02:53 crc kubenswrapper[4806]: I1125 16:02:53.564403 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-687f46fc78-xdmx6_dbedcc0b-12de-4497-a9f3-a9df6c88a74f/manager/2.log" Nov 25 16:02:53 crc kubenswrapper[4806]: I1125 16:02:53.613057 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-wnx44_4877ab9d-8cd3-4270-915f-c73167e93b49/kube-rbac-proxy/0.log" Nov 25 16:02:53 crc kubenswrapper[4806]: I1125 16:02:53.659892 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-wnx44_4877ab9d-8cd3-4270-915f-c73167e93b49/manager/1.log" Nov 25 16:02:53 crc kubenswrapper[4806]: I1125 16:02:53.729468 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-wnx44_4877ab9d-8cd3-4270-915f-c73167e93b49/manager/0.log" Nov 25 16:02:53 crc kubenswrapper[4806]: I1125 16:02:53.808522 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-b7g79_023302d1-a345-4f55-9ac1-4a2b674e36aa/kube-rbac-proxy/0.log" Nov 25 16:02:53 crc kubenswrapper[4806]: I1125 16:02:53.835624 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-b7g79_023302d1-a345-4f55-9ac1-4a2b674e36aa/manager/3.log" Nov 25 16:02:53 crc kubenswrapper[4806]: I1125 16:02:53.909571 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-b7g79_023302d1-a345-4f55-9ac1-4a2b674e36aa/manager/2.log" Nov 25 16:03:15 crc kubenswrapper[4806]: I1125 16:03:15.141273 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-6hqx6_7f5cd5de-2e48-4c15-9c5e-f20368bc172b/control-plane-machine-set-operator/0.log" Nov 25 16:03:15 crc kubenswrapper[4806]: I1125 16:03:15.345552 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-9tjs2_f394b01a-b495-4acf-bca9-0b23347a3358/kube-rbac-proxy/0.log" Nov 25 16:03:15 crc kubenswrapper[4806]: I1125 16:03:15.370873 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-9tjs2_f394b01a-b495-4acf-bca9-0b23347a3358/machine-api-operator/0.log" Nov 25 16:03:21 crc kubenswrapper[4806]: I1125 16:03:21.951933 4806 scope.go:117] "RemoveContainer" containerID="b7103ed585d99e3a327b47baf2230d1b0d88e79840534538d4a427f89b92c797" Nov 25 16:03:30 crc kubenswrapper[4806]: I1125 16:03:30.355707 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-2nhx4_95b3b0c2-b552-4f25-803e-f2ae9d53add8/cert-manager-controller/1.log" Nov 25 16:03:30 crc kubenswrapper[4806]: I1125 16:03:30.394152 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-2nhx4_95b3b0c2-b552-4f25-803e-f2ae9d53add8/cert-manager-controller/0.log" Nov 25 16:03:30 crc kubenswrapper[4806]: I1125 16:03:30.718198 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-mw4xn_9914c048-9845-4535-97d5-2833b53b84d3/cert-manager-cainjector/0.log" Nov 25 16:03:30 crc kubenswrapper[4806]: I1125 16:03:30.789833 4806 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-jssct_672c5c0d-1d2d-4e3e-bccf-6f8fd25f98ae/cert-manager-webhook/0.log" Nov 25 16:03:47 crc kubenswrapper[4806]: I1125 16:03:47.626738 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-glshj_d7da5810-18e1-4ece-a8d1-a3a7f9c710a4/nmstate-console-plugin/0.log" Nov 25 16:03:48 crc kubenswrapper[4806]: I1125 16:03:48.377675 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-b4tpl_58a03ccb-63cd-45fe-bc04-71fcc12c3434/kube-rbac-proxy/0.log" Nov 25 16:03:48 crc kubenswrapper[4806]: I1125 16:03:48.378520 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-8n9rx_ef57a24c-25d4-481a-8047-af60faef1f37/nmstate-handler/0.log" Nov 25 16:03:48 crc kubenswrapper[4806]: I1125 16:03:48.415505 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-b4tpl_58a03ccb-63cd-45fe-bc04-71fcc12c3434/nmstate-metrics/0.log" Nov 25 16:03:48 crc kubenswrapper[4806]: I1125 16:03:48.609431 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-n8ld5_831b49c5-f5fa-4186-8bd0-25b5a3e76a45/nmstate-webhook/0.log" Nov 25 16:03:48 crc kubenswrapper[4806]: I1125 16:03:48.646511 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-b2jcn_63efa58c-1fdc-46b7-ba63-94effc1543d0/nmstate-operator/0.log" Nov 25 16:04:03 crc kubenswrapper[4806]: I1125 16:04:03.128901 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-8b74fc76b-wflwn_2942b82c-e706-4f3e-ad7d-cef384dbcfba/kube-rbac-proxy/0.log" Nov 25 16:04:03 crc kubenswrapper[4806]: I1125 16:04:03.275195 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-8b74fc76b-wflwn_2942b82c-e706-4f3e-ad7d-cef384dbcfba/manager/1.log" Nov 25 16:04:03 crc kubenswrapper[4806]: I1125 16:04:03.407965 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-8b74fc76b-wflwn_2942b82c-e706-4f3e-ad7d-cef384dbcfba/manager/0.log" Nov 25 16:04:20 crc kubenswrapper[4806]: I1125 16:04:20.770208 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-fv59r_66652e87-4308-4216-880d-bfba98261288/kube-rbac-proxy/0.log" Nov 25 16:04:20 crc kubenswrapper[4806]: I1125 16:04:20.923788 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-fv59r_66652e87-4308-4216-880d-bfba98261288/controller/0.log" Nov 25 16:04:21 crc kubenswrapper[4806]: I1125 16:04:21.499205 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tr2r_eb6c6179-82f5-4796-a12a-4806c8df1edd/cp-frr-files/0.log" Nov 25 16:04:21 crc kubenswrapper[4806]: I1125 16:04:21.549839 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tr2r_eb6c6179-82f5-4796-a12a-4806c8df1edd/cp-frr-files/0.log" Nov 25 16:04:21 crc kubenswrapper[4806]: I1125 16:04:21.621791 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tr2r_eb6c6179-82f5-4796-a12a-4806c8df1edd/cp-reloader/0.log" Nov 25 16:04:21 crc kubenswrapper[4806]: I1125 16:04:21.798806 4806 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-9tr2r_eb6c6179-82f5-4796-a12a-4806c8df1edd/cp-metrics/0.log" Nov 25 16:04:21 crc kubenswrapper[4806]: I1125 16:04:21.824193 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tr2r_eb6c6179-82f5-4796-a12a-4806c8df1edd/cp-reloader/0.log" Nov 25 16:04:22 crc kubenswrapper[4806]: I1125 16:04:22.026646 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tr2r_eb6c6179-82f5-4796-a12a-4806c8df1edd/cp-frr-files/0.log" Nov 25 16:04:22 crc kubenswrapper[4806]: I1125 16:04:22.072096 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tr2r_eb6c6179-82f5-4796-a12a-4806c8df1edd/cp-reloader/0.log" Nov 25 16:04:22 crc kubenswrapper[4806]: I1125 16:04:22.095603 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tr2r_eb6c6179-82f5-4796-a12a-4806c8df1edd/cp-metrics/0.log" Nov 25 16:04:22 crc kubenswrapper[4806]: I1125 16:04:22.102578 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tr2r_eb6c6179-82f5-4796-a12a-4806c8df1edd/cp-metrics/0.log" Nov 25 16:04:22 crc kubenswrapper[4806]: I1125 16:04:22.326686 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tr2r_eb6c6179-82f5-4796-a12a-4806c8df1edd/cp-metrics/0.log" Nov 25 16:04:22 crc kubenswrapper[4806]: I1125 16:04:22.353857 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tr2r_eb6c6179-82f5-4796-a12a-4806c8df1edd/cp-reloader/0.log" Nov 25 16:04:22 crc kubenswrapper[4806]: I1125 16:04:22.364945 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tr2r_eb6c6179-82f5-4796-a12a-4806c8df1edd/cp-frr-files/0.log" Nov 25 16:04:22 crc kubenswrapper[4806]: I1125 16:04:22.383106 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tr2r_eb6c6179-82f5-4796-a12a-4806c8df1edd/controller/0.log" Nov 25 16:04:22 crc kubenswrapper[4806]: I1125 16:04:22.550586 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tr2r_eb6c6179-82f5-4796-a12a-4806c8df1edd/frr-metrics/0.log" Nov 25 16:04:22 crc kubenswrapper[4806]: I1125 16:04:22.613670 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tr2r_eb6c6179-82f5-4796-a12a-4806c8df1edd/kube-rbac-proxy/0.log" Nov 25 16:04:22 crc kubenswrapper[4806]: I1125 16:04:22.613749 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tr2r_eb6c6179-82f5-4796-a12a-4806c8df1edd/kube-rbac-proxy-frr/0.log" Nov 25 16:04:22 crc kubenswrapper[4806]: I1125 16:04:22.961474 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tr2r_eb6c6179-82f5-4796-a12a-4806c8df1edd/reloader/0.log" Nov 25 16:04:23 crc kubenswrapper[4806]: I1125 16:04:23.030713 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-j9plw_ccb2a08d-4f13-4e28-a6e2-1af712c00eaf/frr-k8s-webhook-server/0.log" Nov 25 16:04:23 crc kubenswrapper[4806]: I1125 16:04:23.239420 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-769f4c6fc-r7k57_55283d70-ea30-4f51-8583-6d1adc92cfcb/manager/3.log" Nov 25 16:04:23 crc kubenswrapper[4806]: I1125 16:04:23.576039 4806 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-769f4c6fc-r7k57_55283d70-ea30-4f51-8583-6d1adc92cfcb/manager/2.log" Nov 25 16:04:23 crc kubenswrapper[4806]: I1125 16:04:23.645722 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-58d556674f-758vc_a3fdc89c-e782-48b8-bfaa-f3bd81956672/webhook-server/0.log" Nov 25 16:04:23 crc kubenswrapper[4806]: I1125 16:04:23.946165 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9tr2r_eb6c6179-82f5-4796-a12a-4806c8df1edd/frr/0.log" Nov 25 16:04:23 crc kubenswrapper[4806]: I1125 16:04:23.963812 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2pzk8_809591af-3272-4a5d-bd90-d6cba5c6e3a0/kube-rbac-proxy/0.log" Nov 25 16:04:24 crc kubenswrapper[4806]: I1125 16:04:24.453636 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2pzk8_809591af-3272-4a5d-bd90-d6cba5c6e3a0/speaker/0.log" Nov 25 16:04:39 crc kubenswrapper[4806]: I1125 16:04:39.627334 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_03c6e0f8bd928fdcaaf530d547155f7eef49635d3e29724a094c0ab69494r6p_93f1ff8c-0309-4dc7-b711-20157db2f5f3/util/0.log" Nov 25 16:04:39 crc kubenswrapper[4806]: I1125 16:04:39.819210 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_03c6e0f8bd928fdcaaf530d547155f7eef49635d3e29724a094c0ab69494r6p_93f1ff8c-0309-4dc7-b711-20157db2f5f3/pull/0.log" Nov 25 16:04:39 crc kubenswrapper[4806]: I1125 16:04:39.838360 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_03c6e0f8bd928fdcaaf530d547155f7eef49635d3e29724a094c0ab69494r6p_93f1ff8c-0309-4dc7-b711-20157db2f5f3/util/0.log" Nov 25 16:04:39 crc kubenswrapper[4806]: I1125 16:04:39.846602 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_03c6e0f8bd928fdcaaf530d547155f7eef49635d3e29724a094c0ab69494r6p_93f1ff8c-0309-4dc7-b711-20157db2f5f3/pull/0.log" Nov 25 16:04:40 crc kubenswrapper[4806]: I1125 16:04:40.039659 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_03c6e0f8bd928fdcaaf530d547155f7eef49635d3e29724a094c0ab69494r6p_93f1ff8c-0309-4dc7-b711-20157db2f5f3/pull/0.log" Nov 25 16:04:40 crc kubenswrapper[4806]: I1125 16:04:40.049204 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_03c6e0f8bd928fdcaaf530d547155f7eef49635d3e29724a094c0ab69494r6p_93f1ff8c-0309-4dc7-b711-20157db2f5f3/util/0.log" Nov 25 16:04:40 crc kubenswrapper[4806]: I1125 16:04:40.087081 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_03c6e0f8bd928fdcaaf530d547155f7eef49635d3e29724a094c0ab69494r6p_93f1ff8c-0309-4dc7-b711-20157db2f5f3/extract/0.log" Nov 25 16:04:40 crc kubenswrapper[4806]: I1125 16:04:40.272435 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_142e5edc705b0443a978f15b9d74db4e11d2db1d26a61e7f8c9e49e3038wvfb_a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2/util/0.log" Nov 25 16:04:40 crc kubenswrapper[4806]: I1125 16:04:40.531869 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_142e5edc705b0443a978f15b9d74db4e11d2db1d26a61e7f8c9e49e3038wvfb_a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2/pull/0.log" Nov 25 16:04:40 crc kubenswrapper[4806]: I1125 16:04:40.534997 4806 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_142e5edc705b0443a978f15b9d74db4e11d2db1d26a61e7f8c9e49e3038wvfb_a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2/util/0.log" Nov 25 16:04:40 crc kubenswrapper[4806]: I1125 16:04:40.552231 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_142e5edc705b0443a978f15b9d74db4e11d2db1d26a61e7f8c9e49e3038wvfb_a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2/pull/0.log" Nov 25 16:04:41 crc kubenswrapper[4806]: I1125 16:04:41.183001 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_142e5edc705b0443a978f15b9d74db4e11d2db1d26a61e7f8c9e49e3038wvfb_a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2/util/0.log" Nov 25 16:04:41 crc kubenswrapper[4806]: I1125 16:04:41.207464 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_142e5edc705b0443a978f15b9d74db4e11d2db1d26a61e7f8c9e49e3038wvfb_a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2/pull/0.log" Nov 25 16:04:41 crc kubenswrapper[4806]: I1125 16:04:41.463142 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_142e5edc705b0443a978f15b9d74db4e11d2db1d26a61e7f8c9e49e3038wvfb_a78134fd-b0fb-4f66-8d2c-a7e0d8cba9d2/extract/0.log" Nov 25 16:04:41 crc kubenswrapper[4806]: I1125 16:04:41.643948 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edvwgb_1085d309-de3f-424f-b793-c89655f9fb2d/util/0.log" Nov 25 16:04:41 crc kubenswrapper[4806]: I1125 16:04:41.820856 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edvwgb_1085d309-de3f-424f-b793-c89655f9fb2d/util/0.log" Nov 25 16:04:41 crc kubenswrapper[4806]: I1125 16:04:41.823210 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edvwgb_1085d309-de3f-424f-b793-c89655f9fb2d/pull/0.log" Nov 25 16:04:41 crc kubenswrapper[4806]: I1125 16:04:41.854825 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edvwgb_1085d309-de3f-424f-b793-c89655f9fb2d/pull/0.log" Nov 25 16:04:42 crc kubenswrapper[4806]: I1125 16:04:42.060942 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edvwgb_1085d309-de3f-424f-b793-c89655f9fb2d/extract/0.log" Nov 25 16:04:42 crc kubenswrapper[4806]: I1125 16:04:42.099464 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edvwgb_1085d309-de3f-424f-b793-c89655f9fb2d/util/0.log" Nov 25 16:04:42 crc kubenswrapper[4806]: I1125 16:04:42.134905 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772edvwgb_1085d309-de3f-424f-b793-c89655f9fb2d/pull/0.log" Nov 25 16:04:42 crc kubenswrapper[4806]: I1125 16:04:42.265986 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210clkf2_eea848bf-e720-4a8e-bcc4-c3ff44ba44c0/util/0.log" Nov 25 16:04:42 crc kubenswrapper[4806]: I1125 16:04:42.435138 4806 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210clkf2_eea848bf-e720-4a8e-bcc4-c3ff44ba44c0/util/0.log" Nov 25 16:04:42 crc kubenswrapper[4806]: I1125 16:04:42.436818 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210clkf2_eea848bf-e720-4a8e-bcc4-c3ff44ba44c0/pull/0.log" Nov 25 16:04:42 crc kubenswrapper[4806]: I1125 16:04:42.495187 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210clkf2_eea848bf-e720-4a8e-bcc4-c3ff44ba44c0/pull/0.log" Nov 25 16:04:43 crc kubenswrapper[4806]: I1125 16:04:43.266235 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210clkf2_eea848bf-e720-4a8e-bcc4-c3ff44ba44c0/extract/0.log" Nov 25 16:04:43 crc kubenswrapper[4806]: I1125 16:04:43.486912 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-gc92n_6be68968-ad7e-458f-98a6-f3625aecb774/extract-utilities/0.log" Nov 25 16:04:43 crc kubenswrapper[4806]: I1125 16:04:43.691243 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-gc92n_6be68968-ad7e-458f-98a6-f3625aecb774/extract-utilities/0.log" Nov 25 16:04:43 crc kubenswrapper[4806]: I1125 16:04:43.718617 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-gc92n_6be68968-ad7e-458f-98a6-f3625aecb774/extract-content/0.log" Nov 25 16:04:43 crc kubenswrapper[4806]: I1125 16:04:43.788582 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-gc92n_6be68968-ad7e-458f-98a6-f3625aecb774/extract-content/0.log" Nov 25 16:04:43 crc kubenswrapper[4806]: I1125 16:04:43.814022 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210clkf2_eea848bf-e720-4a8e-bcc4-c3ff44ba44c0/util/0.log" Nov 25 16:04:43 crc kubenswrapper[4806]: I1125 16:04:43.839251 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210clkf2_eea848bf-e720-4a8e-bcc4-c3ff44ba44c0/pull/0.log" Nov 25 16:04:43 crc kubenswrapper[4806]: I1125 16:04:43.981266 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-gc92n_6be68968-ad7e-458f-98a6-f3625aecb774/extract-utilities/0.log" Nov 25 16:04:44 crc kubenswrapper[4806]: I1125 16:04:44.007349 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-gc92n_6be68968-ad7e-458f-98a6-f3625aecb774/extract-content/0.log" Nov 25 16:04:44 crc kubenswrapper[4806]: I1125 16:04:44.295042 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x44z7_aab8bc77-d4ee-431c-986b-768bf3c5e139/extract-utilities/0.log" Nov 25 16:04:44 crc kubenswrapper[4806]: I1125 16:04:44.581916 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x44z7_aab8bc77-d4ee-431c-986b-768bf3c5e139/extract-content/0.log" Nov 25 16:04:44 crc kubenswrapper[4806]: I1125 16:04:44.596901 4806 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-x44z7_aab8bc77-d4ee-431c-986b-768bf3c5e139/extract-content/0.log" Nov 25 16:04:44 crc kubenswrapper[4806]: I1125 16:04:44.607913 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x44z7_aab8bc77-d4ee-431c-986b-768bf3c5e139/extract-utilities/0.log" Nov 25 16:04:44 crc kubenswrapper[4806]: I1125 16:04:44.788552 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x44z7_aab8bc77-d4ee-431c-986b-768bf3c5e139/extract-content/0.log" Nov 25 16:04:44 crc kubenswrapper[4806]: I1125 16:04:44.795146 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x44z7_aab8bc77-d4ee-431c-986b-768bf3c5e139/extract-utilities/0.log" Nov 25 16:04:45 crc kubenswrapper[4806]: I1125 16:04:45.013871 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zn85z_bac0466c-f1d6-4e60-999e-adbc6c533da8/util/0.log" Nov 25 16:04:45 crc kubenswrapper[4806]: I1125 16:04:45.310172 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zn85z_bac0466c-f1d6-4e60-999e-adbc6c533da8/util/0.log" Nov 25 16:04:45 crc kubenswrapper[4806]: I1125 16:04:45.370434 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zn85z_bac0466c-f1d6-4e60-999e-adbc6c533da8/pull/0.log" Nov 25 16:04:45 crc kubenswrapper[4806]: I1125 16:04:45.383916 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zn85z_bac0466c-f1d6-4e60-999e-adbc6c533da8/pull/0.log" Nov 25 16:04:45 crc kubenswrapper[4806]: I1125 16:04:45.401633 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x44z7_aab8bc77-d4ee-431c-986b-768bf3c5e139/registry-server/0.log" Nov 25 16:04:45 crc kubenswrapper[4806]: I1125 16:04:45.630516 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-gc92n_6be68968-ad7e-458f-98a6-f3625aecb774/registry-server/0.log" Nov 25 16:04:45 crc kubenswrapper[4806]: I1125 16:04:45.640260 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zn85z_bac0466c-f1d6-4e60-999e-adbc6c533da8/util/0.log" Nov 25 16:04:45 crc kubenswrapper[4806]: I1125 16:04:45.696139 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zn85z_bac0466c-f1d6-4e60-999e-adbc6c533da8/pull/0.log" Nov 25 16:04:45 crc kubenswrapper[4806]: I1125 16:04:45.858172 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6zn85z_bac0466c-f1d6-4e60-999e-adbc6c533da8/extract/0.log" Nov 25 16:04:45 crc kubenswrapper[4806]: I1125 16:04:45.907831 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6mnqv_9619fb42-e746-4c18-82c8-9e55824d5199/extract-utilities/0.log" Nov 25 16:04:45 crc kubenswrapper[4806]: I1125 16:04:45.996273 4806 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-rqc2s_257fb937-19f0-48d9-8ea3-7897f5405a87/marketplace-operator/0.log" Nov 25 16:04:46 crc kubenswrapper[4806]: I1125 16:04:46.153242 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6mnqv_9619fb42-e746-4c18-82c8-9e55824d5199/extract-utilities/0.log" Nov 25 16:04:46 crc kubenswrapper[4806]: I1125 16:04:46.189643 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6mnqv_9619fb42-e746-4c18-82c8-9e55824d5199/extract-content/0.log" Nov 25 16:04:46 crc kubenswrapper[4806]: I1125 16:04:46.207664 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6mnqv_9619fb42-e746-4c18-82c8-9e55824d5199/extract-content/0.log" Nov 25 16:04:46 crc kubenswrapper[4806]: I1125 16:04:46.413612 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6mnqv_9619fb42-e746-4c18-82c8-9e55824d5199/extract-utilities/0.log" Nov 25 16:04:46 crc kubenswrapper[4806]: I1125 16:04:46.496178 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6mnqv_9619fb42-e746-4c18-82c8-9e55824d5199/extract-content/0.log" Nov 25 16:04:46 crc kubenswrapper[4806]: I1125 16:04:46.502664 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-77qk4_19d636cf-e82d-48c3-82db-321f0505c5ab/extract-utilities/0.log" Nov 25 16:04:46 crc kubenswrapper[4806]: I1125 16:04:46.686374 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6mnqv_9619fb42-e746-4c18-82c8-9e55824d5199/registry-server/0.log" Nov 25 16:04:46 crc kubenswrapper[4806]: I1125 16:04:46.744209 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-77qk4_19d636cf-e82d-48c3-82db-321f0505c5ab/extract-content/0.log" Nov 25 16:04:46 crc kubenswrapper[4806]: I1125 16:04:46.784889 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-77qk4_19d636cf-e82d-48c3-82db-321f0505c5ab/extract-utilities/0.log" Nov 25 16:04:46 crc kubenswrapper[4806]: I1125 16:04:46.791489 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-77qk4_19d636cf-e82d-48c3-82db-321f0505c5ab/extract-content/0.log" Nov 25 16:04:46 crc kubenswrapper[4806]: I1125 16:04:46.942215 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-77qk4_19d636cf-e82d-48c3-82db-321f0505c5ab/extract-utilities/0.log" Nov 25 16:04:47 crc kubenswrapper[4806]: I1125 16:04:47.002428 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-77qk4_19d636cf-e82d-48c3-82db-321f0505c5ab/extract-content/0.log" Nov 25 16:04:47 crc kubenswrapper[4806]: I1125 16:04:47.427373 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-77qk4_19d636cf-e82d-48c3-82db-321f0505c5ab/registry-server/0.log" Nov 25 16:04:48 crc kubenswrapper[4806]: I1125 16:04:48.934954 4806 patch_prober.go:28] interesting pod/machine-config-daemon-kclf8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 16:04:48 
crc kubenswrapper[4806]: I1125 16:04:48.935930 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 16:05:02 crc kubenswrapper[4806]: I1125 16:05:02.487097 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-2s9fq_380a6ec0-8579-4cf8-bd81-52186962d2ed/prometheus-operator/0.log" Nov 25 16:05:02 crc kubenswrapper[4806]: I1125 16:05:02.663891 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5f86c9868-b5qwr_028bbcd6-a8e8-470e-b603-6f7a1a68152d/prometheus-operator-admission-webhook/0.log" Nov 25 16:05:02 crc kubenswrapper[4806]: I1125 16:05:02.675590 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5f86c9868-q4k7j_85f5c34a-cdb4-41c5-8a01-766f57f85a0a/prometheus-operator-admission-webhook/0.log" Nov 25 16:05:03 crc kubenswrapper[4806]: I1125 16:05:03.756442 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-rtq62_2f9ca963-6005-48e0-9d0b-7e1c3dc7103e/operator/0.log" Nov 25 16:05:03 crc kubenswrapper[4806]: I1125 16:05:03.861520 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-dklz7_7fb6f239-ec10-48bd-bd37-c1afa567e809/perses-operator/0.log" Nov 25 16:05:16 crc kubenswrapper[4806]: I1125 16:05:16.145218 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-8b74fc76b-wflwn_2942b82c-e706-4f3e-ad7d-cef384dbcfba/manager/0.log" Nov 25 16:05:16 crc kubenswrapper[4806]: I1125 16:05:16.173140 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-8b74fc76b-wflwn_2942b82c-e706-4f3e-ad7d-cef384dbcfba/kube-rbac-proxy/0.log" Nov 25 16:05:16 crc kubenswrapper[4806]: I1125 16:05:16.273956 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-8b74fc76b-wflwn_2942b82c-e706-4f3e-ad7d-cef384dbcfba/manager/1.log" Nov 25 16:05:18 crc kubenswrapper[4806]: I1125 16:05:18.934562 4806 patch_prober.go:28] interesting pod/machine-config-daemon-kclf8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 16:05:18 crc kubenswrapper[4806]: I1125 16:05:18.935115 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 16:05:48 crc kubenswrapper[4806]: I1125 16:05:48.935039 4806 patch_prober.go:28] interesting pod/machine-config-daemon-kclf8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Nov 25 16:05:48 crc kubenswrapper[4806]: I1125 16:05:48.935652 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 16:05:48 crc kubenswrapper[4806]: I1125 16:05:48.935704 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" Nov 25 16:05:48 crc kubenswrapper[4806]: I1125 16:05:48.936586 4806 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"72051d852726a5c14a2394d688bd5080eb1e551bea11498fe7549f05508fb439"} pod="openshift-machine-config-operator/machine-config-daemon-kclf8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 16:05:48 crc kubenswrapper[4806]: I1125 16:05:48.936640 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" containerID="cri-o://72051d852726a5c14a2394d688bd5080eb1e551bea11498fe7549f05508fb439" gracePeriod=600 Nov 25 16:05:49 crc kubenswrapper[4806]: I1125 16:05:49.745921 4806 generic.go:334] "Generic (PLEG): container finished" podID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerID="72051d852726a5c14a2394d688bd5080eb1e551bea11498fe7549f05508fb439" exitCode=0 Nov 25 16:05:49 crc kubenswrapper[4806]: I1125 16:05:49.746089 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" event={"ID":"39baff20-1e9a-48b1-8872-155c5ad5931d","Type":"ContainerDied","Data":"72051d852726a5c14a2394d688bd5080eb1e551bea11498fe7549f05508fb439"} Nov 25 16:05:49 crc kubenswrapper[4806]: I1125 16:05:49.746383 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" event={"ID":"39baff20-1e9a-48b1-8872-155c5ad5931d","Type":"ContainerStarted","Data":"977239e2892db141d26f0bbd911bcbc0cb11f0a0b79462f89887632c49d8d608"} Nov 25 16:05:49 crc kubenswrapper[4806]: I1125 16:05:49.746408 4806 scope.go:117] "RemoveContainer" containerID="05b6ee2a51d7372338008820486d422e9a505c74a3f4cee7ce748e653b9075de" Nov 25 16:07:07 crc kubenswrapper[4806]: I1125 16:07:07.654864 4806 generic.go:334] "Generic (PLEG): container finished" podID="ad0d414a-29d2-46a5-9aa0-ff43c48fc8f9" containerID="907a776d8feaf2c2eed2794924a7902c3020b9168988909e8010cb3b75d3d60b" exitCode=0 Nov 25 16:07:07 crc kubenswrapper[4806]: I1125 16:07:07.654944 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dht67/must-gather-lkxmb" event={"ID":"ad0d414a-29d2-46a5-9aa0-ff43c48fc8f9","Type":"ContainerDied","Data":"907a776d8feaf2c2eed2794924a7902c3020b9168988909e8010cb3b75d3d60b"} Nov 25 16:07:07 crc kubenswrapper[4806]: I1125 16:07:07.656061 4806 scope.go:117] "RemoveContainer" containerID="907a776d8feaf2c2eed2794924a7902c3020b9168988909e8010cb3b75d3d60b" Nov 25 16:07:08 crc kubenswrapper[4806]: I1125 16:07:08.415500 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-dht67_must-gather-lkxmb_ad0d414a-29d2-46a5-9aa0-ff43c48fc8f9/gather/0.log" Nov 25 16:07:20 crc 
kubenswrapper[4806]: I1125 16:07:20.508102 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dht67/must-gather-lkxmb"] Nov 25 16:07:20 crc kubenswrapper[4806]: I1125 16:07:20.508971 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-dht67/must-gather-lkxmb" podUID="ad0d414a-29d2-46a5-9aa0-ff43c48fc8f9" containerName="copy" containerID="cri-o://4b5609394cb2e0f2a26202a182e69a8ae0e723c92dc40a087920e2c6fbff27a9" gracePeriod=2 Nov 25 16:07:20 crc kubenswrapper[4806]: I1125 16:07:20.519221 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dht67/must-gather-lkxmb"] Nov 25 16:07:21 crc kubenswrapper[4806]: I1125 16:07:21.423022 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-dht67_must-gather-lkxmb_ad0d414a-29d2-46a5-9aa0-ff43c48fc8f9/copy/0.log" Nov 25 16:07:21 crc kubenswrapper[4806]: I1125 16:07:21.423458 4806 generic.go:334] "Generic (PLEG): container finished" podID="ad0d414a-29d2-46a5-9aa0-ff43c48fc8f9" containerID="4b5609394cb2e0f2a26202a182e69a8ae0e723c92dc40a087920e2c6fbff27a9" exitCode=143 Nov 25 16:07:22 crc kubenswrapper[4806]: I1125 16:07:22.049115 4806 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-dht67_must-gather-lkxmb_ad0d414a-29d2-46a5-9aa0-ff43c48fc8f9/copy/0.log" Nov 25 16:07:22 crc kubenswrapper[4806]: I1125 16:07:22.049806 4806 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dht67/must-gather-lkxmb" Nov 25 16:07:22 crc kubenswrapper[4806]: I1125 16:07:22.098676 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bc52b\" (UniqueName: \"kubernetes.io/projected/ad0d414a-29d2-46a5-9aa0-ff43c48fc8f9-kube-api-access-bc52b\") pod \"ad0d414a-29d2-46a5-9aa0-ff43c48fc8f9\" (UID: \"ad0d414a-29d2-46a5-9aa0-ff43c48fc8f9\") " Nov 25 16:07:22 crc kubenswrapper[4806]: I1125 16:07:22.100556 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ad0d414a-29d2-46a5-9aa0-ff43c48fc8f9-must-gather-output\") pod \"ad0d414a-29d2-46a5-9aa0-ff43c48fc8f9\" (UID: \"ad0d414a-29d2-46a5-9aa0-ff43c48fc8f9\") " Nov 25 16:07:22 crc kubenswrapper[4806]: I1125 16:07:22.104508 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad0d414a-29d2-46a5-9aa0-ff43c48fc8f9-kube-api-access-bc52b" (OuterVolumeSpecName: "kube-api-access-bc52b") pod "ad0d414a-29d2-46a5-9aa0-ff43c48fc8f9" (UID: "ad0d414a-29d2-46a5-9aa0-ff43c48fc8f9"). InnerVolumeSpecName "kube-api-access-bc52b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 16:07:22 crc kubenswrapper[4806]: I1125 16:07:22.154180 4806 scope.go:117] "RemoveContainer" containerID="907a776d8feaf2c2eed2794924a7902c3020b9168988909e8010cb3b75d3d60b" Nov 25 16:07:22 crc kubenswrapper[4806]: I1125 16:07:22.203453 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bc52b\" (UniqueName: \"kubernetes.io/projected/ad0d414a-29d2-46a5-9aa0-ff43c48fc8f9-kube-api-access-bc52b\") on node \"crc\" DevicePath \"\"" Nov 25 16:07:22 crc kubenswrapper[4806]: I1125 16:07:22.211049 4806 scope.go:117] "RemoveContainer" containerID="4b5609394cb2e0f2a26202a182e69a8ae0e723c92dc40a087920e2c6fbff27a9" Nov 25 16:07:22 crc kubenswrapper[4806]: I1125 16:07:22.269541 4806 scope.go:117] "RemoveContainer" containerID="d9fe6aee15f8e4740e6f83217eb91f1b9488f28ad89604fa0dadb8b153e8f9e7" Nov 25 16:07:22 crc kubenswrapper[4806]: I1125 16:07:22.293026 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad0d414a-29d2-46a5-9aa0-ff43c48fc8f9-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "ad0d414a-29d2-46a5-9aa0-ff43c48fc8f9" (UID: "ad0d414a-29d2-46a5-9aa0-ff43c48fc8f9"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 16:07:22 crc kubenswrapper[4806]: I1125 16:07:22.305060 4806 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ad0d414a-29d2-46a5-9aa0-ff43c48fc8f9-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 25 16:07:22 crc kubenswrapper[4806]: I1125 16:07:22.432826 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dht67/must-gather-lkxmb" Nov 25 16:07:24 crc kubenswrapper[4806]: I1125 16:07:24.101850 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad0d414a-29d2-46a5-9aa0-ff43c48fc8f9" path="/var/lib/kubelet/pods/ad0d414a-29d2-46a5-9aa0-ff43c48fc8f9/volumes" Nov 25 16:08:18 crc kubenswrapper[4806]: I1125 16:08:18.936264 4806 patch_prober.go:28] interesting pod/machine-config-daemon-kclf8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 16:08:18 crc kubenswrapper[4806]: I1125 16:08:18.936808 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 16:08:48 crc kubenswrapper[4806]: I1125 16:08:48.935056 4806 patch_prober.go:28] interesting pod/machine-config-daemon-kclf8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 16:08:48 crc kubenswrapper[4806]: I1125 16:08:48.935585 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 
16:09:18 crc kubenswrapper[4806]: I1125 16:09:18.935730 4806 patch_prober.go:28] interesting pod/machine-config-daemon-kclf8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 16:09:18 crc kubenswrapper[4806]: I1125 16:09:18.936494 4806 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 16:09:18 crc kubenswrapper[4806]: I1125 16:09:18.936547 4806 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" Nov 25 16:09:18 crc kubenswrapper[4806]: I1125 16:09:18.937586 4806 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"977239e2892db141d26f0bbd911bcbc0cb11f0a0b79462f89887632c49d8d608"} pod="openshift-machine-config-operator/machine-config-daemon-kclf8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 16:09:18 crc kubenswrapper[4806]: I1125 16:09:18.937643 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerName="machine-config-daemon" containerID="cri-o://977239e2892db141d26f0bbd911bcbc0cb11f0a0b79462f89887632c49d8d608" gracePeriod=600 Nov 25 16:09:19 crc kubenswrapper[4806]: E1125 16:09:19.072088 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 16:09:19 crc kubenswrapper[4806]: I1125 16:09:19.740264 4806 generic.go:334] "Generic (PLEG): container finished" podID="39baff20-1e9a-48b1-8872-155c5ad5931d" containerID="977239e2892db141d26f0bbd911bcbc0cb11f0a0b79462f89887632c49d8d608" exitCode=0 Nov 25 16:09:19 crc kubenswrapper[4806]: I1125 16:09:19.740308 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" event={"ID":"39baff20-1e9a-48b1-8872-155c5ad5931d","Type":"ContainerDied","Data":"977239e2892db141d26f0bbd911bcbc0cb11f0a0b79462f89887632c49d8d608"} Nov 25 16:09:19 crc kubenswrapper[4806]: I1125 16:09:19.740352 4806 scope.go:117] "RemoveContainer" containerID="72051d852726a5c14a2394d688bd5080eb1e551bea11498fe7549f05508fb439" Nov 25 16:09:19 crc kubenswrapper[4806]: I1125 16:09:19.740904 4806 scope.go:117] "RemoveContainer" containerID="977239e2892db141d26f0bbd911bcbc0cb11f0a0b79462f89887632c49d8d608" Nov 25 16:09:19 crc kubenswrapper[4806]: E1125 16:09:19.741215 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 16:09:32 crc kubenswrapper[4806]: I1125 16:09:32.089505 4806 scope.go:117] "RemoveContainer" containerID="977239e2892db141d26f0bbd911bcbc0cb11f0a0b79462f89887632c49d8d608" Nov 25 16:09:32 crc kubenswrapper[4806]: E1125 16:09:32.090692 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 16:09:42 crc kubenswrapper[4806]: I1125 16:09:42.554833 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kfpfp"] Nov 25 16:09:42 crc kubenswrapper[4806]: E1125 16:09:42.556363 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad0d414a-29d2-46a5-9aa0-ff43c48fc8f9" containerName="copy" Nov 25 16:09:42 crc kubenswrapper[4806]: I1125 16:09:42.556383 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad0d414a-29d2-46a5-9aa0-ff43c48fc8f9" containerName="copy" Nov 25 16:09:42 crc kubenswrapper[4806]: E1125 16:09:42.556400 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="614984cf-eae1-4a83-bb79-ac6f3ee951f4" containerName="extract-utilities" Nov 25 16:09:42 crc kubenswrapper[4806]: I1125 16:09:42.556408 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="614984cf-eae1-4a83-bb79-ac6f3ee951f4" containerName="extract-utilities" Nov 25 16:09:42 crc kubenswrapper[4806]: E1125 16:09:42.556420 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="614984cf-eae1-4a83-bb79-ac6f3ee951f4" containerName="extract-content" Nov 25 16:09:42 crc kubenswrapper[4806]: I1125 16:09:42.556430 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="614984cf-eae1-4a83-bb79-ac6f3ee951f4" containerName="extract-content" Nov 25 16:09:42 crc kubenswrapper[4806]: E1125 16:09:42.556476 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad0d414a-29d2-46a5-9aa0-ff43c48fc8f9" containerName="gather" Nov 25 16:09:42 crc kubenswrapper[4806]: I1125 16:09:42.556483 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad0d414a-29d2-46a5-9aa0-ff43c48fc8f9" containerName="gather" Nov 25 16:09:42 crc kubenswrapper[4806]: E1125 16:09:42.556498 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="614984cf-eae1-4a83-bb79-ac6f3ee951f4" containerName="registry-server" Nov 25 16:09:42 crc kubenswrapper[4806]: I1125 16:09:42.556505 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="614984cf-eae1-4a83-bb79-ac6f3ee951f4" containerName="registry-server" Nov 25 16:09:42 crc kubenswrapper[4806]: I1125 16:09:42.556770 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad0d414a-29d2-46a5-9aa0-ff43c48fc8f9" containerName="copy" Nov 25 16:09:42 crc kubenswrapper[4806]: I1125 16:09:42.556784 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad0d414a-29d2-46a5-9aa0-ff43c48fc8f9" containerName="gather" Nov 25 16:09:42 crc kubenswrapper[4806]: I1125 16:09:42.556806 4806 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="614984cf-eae1-4a83-bb79-ac6f3ee951f4" containerName="registry-server" Nov 25 16:09:42 crc kubenswrapper[4806]: I1125 16:09:42.558887 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kfpfp" Nov 25 16:09:42 crc kubenswrapper[4806]: I1125 16:09:42.576615 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kfpfp"] Nov 25 16:09:42 crc kubenswrapper[4806]: I1125 16:09:42.735395 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26342135-ef05-4da9-ab2c-26cc8c83bfa5-utilities\") pod \"community-operators-kfpfp\" (UID: \"26342135-ef05-4da9-ab2c-26cc8c83bfa5\") " pod="openshift-marketplace/community-operators-kfpfp" Nov 25 16:09:42 crc kubenswrapper[4806]: I1125 16:09:42.736210 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26342135-ef05-4da9-ab2c-26cc8c83bfa5-catalog-content\") pod \"community-operators-kfpfp\" (UID: \"26342135-ef05-4da9-ab2c-26cc8c83bfa5\") " pod="openshift-marketplace/community-operators-kfpfp" Nov 25 16:09:42 crc kubenswrapper[4806]: I1125 16:09:42.736301 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5clts\" (UniqueName: \"kubernetes.io/projected/26342135-ef05-4da9-ab2c-26cc8c83bfa5-kube-api-access-5clts\") pod \"community-operators-kfpfp\" (UID: \"26342135-ef05-4da9-ab2c-26cc8c83bfa5\") " pod="openshift-marketplace/community-operators-kfpfp" Nov 25 16:09:42 crc kubenswrapper[4806]: I1125 16:09:42.839011 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26342135-ef05-4da9-ab2c-26cc8c83bfa5-utilities\") pod \"community-operators-kfpfp\" (UID: \"26342135-ef05-4da9-ab2c-26cc8c83bfa5\") " pod="openshift-marketplace/community-operators-kfpfp" Nov 25 16:09:42 crc kubenswrapper[4806]: I1125 16:09:42.839191 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26342135-ef05-4da9-ab2c-26cc8c83bfa5-catalog-content\") pod \"community-operators-kfpfp\" (UID: \"26342135-ef05-4da9-ab2c-26cc8c83bfa5\") " pod="openshift-marketplace/community-operators-kfpfp" Nov 25 16:09:42 crc kubenswrapper[4806]: I1125 16:09:42.839254 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5clts\" (UniqueName: \"kubernetes.io/projected/26342135-ef05-4da9-ab2c-26cc8c83bfa5-kube-api-access-5clts\") pod \"community-operators-kfpfp\" (UID: \"26342135-ef05-4da9-ab2c-26cc8c83bfa5\") " pod="openshift-marketplace/community-operators-kfpfp" Nov 25 16:09:42 crc kubenswrapper[4806]: I1125 16:09:42.839590 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26342135-ef05-4da9-ab2c-26cc8c83bfa5-utilities\") pod \"community-operators-kfpfp\" (UID: \"26342135-ef05-4da9-ab2c-26cc8c83bfa5\") " pod="openshift-marketplace/community-operators-kfpfp" Nov 25 16:09:42 crc kubenswrapper[4806]: I1125 16:09:42.839707 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26342135-ef05-4da9-ab2c-26cc8c83bfa5-catalog-content\") pod \"community-operators-kfpfp\" (UID: 
\"26342135-ef05-4da9-ab2c-26cc8c83bfa5\") " pod="openshift-marketplace/community-operators-kfpfp" Nov 25 16:09:42 crc kubenswrapper[4806]: I1125 16:09:42.860019 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5clts\" (UniqueName: \"kubernetes.io/projected/26342135-ef05-4da9-ab2c-26cc8c83bfa5-kube-api-access-5clts\") pod \"community-operators-kfpfp\" (UID: \"26342135-ef05-4da9-ab2c-26cc8c83bfa5\") " pod="openshift-marketplace/community-operators-kfpfp" Nov 25 16:09:42 crc kubenswrapper[4806]: I1125 16:09:42.891021 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kfpfp" Nov 25 16:09:43 crc kubenswrapper[4806]: I1125 16:09:43.546095 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kfpfp"] Nov 25 16:09:44 crc kubenswrapper[4806]: I1125 16:09:44.058030 4806 generic.go:334] "Generic (PLEG): container finished" podID="26342135-ef05-4da9-ab2c-26cc8c83bfa5" containerID="6bab79dec10422eaebce32093aa50c72214cbf0829ee90fb986c3d44bebe0e1a" exitCode=0 Nov 25 16:09:44 crc kubenswrapper[4806]: I1125 16:09:44.058130 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kfpfp" event={"ID":"26342135-ef05-4da9-ab2c-26cc8c83bfa5","Type":"ContainerDied","Data":"6bab79dec10422eaebce32093aa50c72214cbf0829ee90fb986c3d44bebe0e1a"} Nov 25 16:09:44 crc kubenswrapper[4806]: I1125 16:09:44.058330 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kfpfp" event={"ID":"26342135-ef05-4da9-ab2c-26cc8c83bfa5","Type":"ContainerStarted","Data":"8ac94348194340aef17a5bdaf1ab46b232c02d79913f728dc58c50f39fce194a"} Nov 25 16:09:44 crc kubenswrapper[4806]: I1125 16:09:44.064592 4806 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 16:09:44 crc kubenswrapper[4806]: I1125 16:09:44.090426 4806 scope.go:117] "RemoveContainer" containerID="977239e2892db141d26f0bbd911bcbc0cb11f0a0b79462f89887632c49d8d608" Nov 25 16:09:44 crc kubenswrapper[4806]: E1125 16:09:44.090787 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 16:09:46 crc kubenswrapper[4806]: I1125 16:09:46.100791 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kfpfp" event={"ID":"26342135-ef05-4da9-ab2c-26cc8c83bfa5","Type":"ContainerStarted","Data":"b755431d6b1dcf4388990af8f30339ff325e38ec51f69ea8165f0348668b28a3"} Nov 25 16:09:47 crc kubenswrapper[4806]: I1125 16:09:47.100718 4806 generic.go:334] "Generic (PLEG): container finished" podID="26342135-ef05-4da9-ab2c-26cc8c83bfa5" containerID="b755431d6b1dcf4388990af8f30339ff325e38ec51f69ea8165f0348668b28a3" exitCode=0 Nov 25 16:09:47 crc kubenswrapper[4806]: I1125 16:09:47.100773 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kfpfp" event={"ID":"26342135-ef05-4da9-ab2c-26cc8c83bfa5","Type":"ContainerDied","Data":"b755431d6b1dcf4388990af8f30339ff325e38ec51f69ea8165f0348668b28a3"} Nov 25 16:09:48 crc kubenswrapper[4806]: I1125 
16:09:48.113411 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kfpfp" event={"ID":"26342135-ef05-4da9-ab2c-26cc8c83bfa5","Type":"ContainerStarted","Data":"19c92106692a87b0a626c92620c0fd7657e4188efbfad89ca800bb426b126392"} Nov 25 16:09:48 crc kubenswrapper[4806]: I1125 16:09:48.133435 4806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kfpfp" podStartSLOduration=2.661521004 podStartE2EDuration="6.133417294s" podCreationTimestamp="2025-11-25 16:09:42 +0000 UTC" firstStartedPulling="2025-11-25 16:09:44.064360696 +0000 UTC m=+4616.716503107" lastFinishedPulling="2025-11-25 16:09:47.536256976 +0000 UTC m=+4620.188399397" observedRunningTime="2025-11-25 16:09:48.129983906 +0000 UTC m=+4620.782126317" watchObservedRunningTime="2025-11-25 16:09:48.133417294 +0000 UTC m=+4620.785559705" Nov 25 16:09:52 crc kubenswrapper[4806]: I1125 16:09:52.892085 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kfpfp" Nov 25 16:09:52 crc kubenswrapper[4806]: I1125 16:09:52.892816 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kfpfp" Nov 25 16:09:52 crc kubenswrapper[4806]: I1125 16:09:52.956308 4806 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kfpfp" Nov 25 16:09:53 crc kubenswrapper[4806]: I1125 16:09:53.231787 4806 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kfpfp" Nov 25 16:09:53 crc kubenswrapper[4806]: I1125 16:09:53.289182 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kfpfp"] Nov 25 16:09:55 crc kubenswrapper[4806]: I1125 16:09:55.193659 4806 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kfpfp" podUID="26342135-ef05-4da9-ab2c-26cc8c83bfa5" containerName="registry-server" containerID="cri-o://19c92106692a87b0a626c92620c0fd7657e4188efbfad89ca800bb426b126392" gracePeriod=2 Nov 25 16:09:56 crc kubenswrapper[4806]: I1125 16:09:56.205459 4806 generic.go:334] "Generic (PLEG): container finished" podID="26342135-ef05-4da9-ab2c-26cc8c83bfa5" containerID="19c92106692a87b0a626c92620c0fd7657e4188efbfad89ca800bb426b126392" exitCode=0 Nov 25 16:09:56 crc kubenswrapper[4806]: I1125 16:09:56.205525 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kfpfp" event={"ID":"26342135-ef05-4da9-ab2c-26cc8c83bfa5","Type":"ContainerDied","Data":"19c92106692a87b0a626c92620c0fd7657e4188efbfad89ca800bb426b126392"} Nov 25 16:09:56 crc kubenswrapper[4806]: I1125 16:09:56.206121 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kfpfp" event={"ID":"26342135-ef05-4da9-ab2c-26cc8c83bfa5","Type":"ContainerDied","Data":"8ac94348194340aef17a5bdaf1ab46b232c02d79913f728dc58c50f39fce194a"} Nov 25 16:09:56 crc kubenswrapper[4806]: I1125 16:09:56.206155 4806 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ac94348194340aef17a5bdaf1ab46b232c02d79913f728dc58c50f39fce194a" Nov 25 16:09:56 crc kubenswrapper[4806]: I1125 16:09:56.413123 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kfpfp" Nov 25 16:09:56 crc kubenswrapper[4806]: I1125 16:09:56.532968 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26342135-ef05-4da9-ab2c-26cc8c83bfa5-catalog-content\") pod \"26342135-ef05-4da9-ab2c-26cc8c83bfa5\" (UID: \"26342135-ef05-4da9-ab2c-26cc8c83bfa5\") " Nov 25 16:09:56 crc kubenswrapper[4806]: I1125 16:09:56.533307 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26342135-ef05-4da9-ab2c-26cc8c83bfa5-utilities\") pod \"26342135-ef05-4da9-ab2c-26cc8c83bfa5\" (UID: \"26342135-ef05-4da9-ab2c-26cc8c83bfa5\") " Nov 25 16:09:56 crc kubenswrapper[4806]: I1125 16:09:56.533379 4806 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5clts\" (UniqueName: \"kubernetes.io/projected/26342135-ef05-4da9-ab2c-26cc8c83bfa5-kube-api-access-5clts\") pod \"26342135-ef05-4da9-ab2c-26cc8c83bfa5\" (UID: \"26342135-ef05-4da9-ab2c-26cc8c83bfa5\") " Nov 25 16:09:56 crc kubenswrapper[4806]: I1125 16:09:56.534502 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26342135-ef05-4da9-ab2c-26cc8c83bfa5-utilities" (OuterVolumeSpecName: "utilities") pod "26342135-ef05-4da9-ab2c-26cc8c83bfa5" (UID: "26342135-ef05-4da9-ab2c-26cc8c83bfa5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 16:09:56 crc kubenswrapper[4806]: I1125 16:09:56.542600 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26342135-ef05-4da9-ab2c-26cc8c83bfa5-kube-api-access-5clts" (OuterVolumeSpecName: "kube-api-access-5clts") pod "26342135-ef05-4da9-ab2c-26cc8c83bfa5" (UID: "26342135-ef05-4da9-ab2c-26cc8c83bfa5"). InnerVolumeSpecName "kube-api-access-5clts". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 16:09:56 crc kubenswrapper[4806]: I1125 16:09:56.636535 4806 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26342135-ef05-4da9-ab2c-26cc8c83bfa5-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 16:09:56 crc kubenswrapper[4806]: I1125 16:09:56.636591 4806 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5clts\" (UniqueName: \"kubernetes.io/projected/26342135-ef05-4da9-ab2c-26cc8c83bfa5-kube-api-access-5clts\") on node \"crc\" DevicePath \"\"" Nov 25 16:09:56 crc kubenswrapper[4806]: I1125 16:09:56.837863 4806 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26342135-ef05-4da9-ab2c-26cc8c83bfa5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "26342135-ef05-4da9-ab2c-26cc8c83bfa5" (UID: "26342135-ef05-4da9-ab2c-26cc8c83bfa5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 16:09:56 crc kubenswrapper[4806]: I1125 16:09:56.840497 4806 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26342135-ef05-4da9-ab2c-26cc8c83bfa5-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 16:09:57 crc kubenswrapper[4806]: I1125 16:09:57.216174 4806 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kfpfp" Nov 25 16:09:57 crc kubenswrapper[4806]: I1125 16:09:57.265849 4806 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kfpfp"] Nov 25 16:09:57 crc kubenswrapper[4806]: I1125 16:09:57.277577 4806 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kfpfp"] Nov 25 16:09:58 crc kubenswrapper[4806]: I1125 16:09:58.096027 4806 scope.go:117] "RemoveContainer" containerID="977239e2892db141d26f0bbd911bcbc0cb11f0a0b79462f89887632c49d8d608" Nov 25 16:09:58 crc kubenswrapper[4806]: E1125 16:09:58.096549 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 16:09:58 crc kubenswrapper[4806]: I1125 16:09:58.100045 4806 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26342135-ef05-4da9-ab2c-26cc8c83bfa5" path="/var/lib/kubelet/pods/26342135-ef05-4da9-ab2c-26cc8c83bfa5/volumes" Nov 25 16:10:10 crc kubenswrapper[4806]: I1125 16:10:10.089922 4806 scope.go:117] "RemoveContainer" containerID="977239e2892db141d26f0bbd911bcbc0cb11f0a0b79462f89887632c49d8d608" Nov 25 16:10:10 crc kubenswrapper[4806]: E1125 16:10:10.091139 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 16:10:25 crc kubenswrapper[4806]: I1125 16:10:25.089302 4806 scope.go:117] "RemoveContainer" containerID="977239e2892db141d26f0bbd911bcbc0cb11f0a0b79462f89887632c49d8d608" Nov 25 16:10:25 crc kubenswrapper[4806]: E1125 16:10:25.089981 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 16:10:37 crc kubenswrapper[4806]: I1125 16:10:37.090014 4806 scope.go:117] "RemoveContainer" containerID="977239e2892db141d26f0bbd911bcbc0cb11f0a0b79462f89887632c49d8d608" Nov 25 16:10:37 crc kubenswrapper[4806]: E1125 16:10:37.091243 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 16:10:52 crc kubenswrapper[4806]: I1125 16:10:52.088987 4806 scope.go:117] "RemoveContainer" containerID="977239e2892db141d26f0bbd911bcbc0cb11f0a0b79462f89887632c49d8d608" Nov 25 
16:10:52 crc kubenswrapper[4806]: E1125 16:10:52.089741 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 16:11:05 crc kubenswrapper[4806]: I1125 16:11:05.090088 4806 scope.go:117] "RemoveContainer" containerID="977239e2892db141d26f0bbd911bcbc0cb11f0a0b79462f89887632c49d8d608" Nov 25 16:11:05 crc kubenswrapper[4806]: E1125 16:11:05.090885 4806 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kclf8_openshift-machine-config-operator(39baff20-1e9a-48b1-8872-155c5ad5931d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kclf8" podUID="39baff20-1e9a-48b1-8872-155c5ad5931d" Nov 25 16:11:09 crc kubenswrapper[4806]: I1125 16:11:09.051467 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z7cb8"] Nov 25 16:11:09 crc kubenswrapper[4806]: E1125 16:11:09.054877 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26342135-ef05-4da9-ab2c-26cc8c83bfa5" containerName="registry-server" Nov 25 16:11:09 crc kubenswrapper[4806]: I1125 16:11:09.054914 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="26342135-ef05-4da9-ab2c-26cc8c83bfa5" containerName="registry-server" Nov 25 16:11:09 crc kubenswrapper[4806]: E1125 16:11:09.054951 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26342135-ef05-4da9-ab2c-26cc8c83bfa5" containerName="extract-content" Nov 25 16:11:09 crc kubenswrapper[4806]: I1125 16:11:09.054962 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="26342135-ef05-4da9-ab2c-26cc8c83bfa5" containerName="extract-content" Nov 25 16:11:09 crc kubenswrapper[4806]: E1125 16:11:09.054987 4806 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26342135-ef05-4da9-ab2c-26cc8c83bfa5" containerName="extract-utilities" Nov 25 16:11:09 crc kubenswrapper[4806]: I1125 16:11:09.054994 4806 state_mem.go:107] "Deleted CPUSet assignment" podUID="26342135-ef05-4da9-ab2c-26cc8c83bfa5" containerName="extract-utilities" Nov 25 16:11:09 crc kubenswrapper[4806]: I1125 16:11:09.055218 4806 memory_manager.go:354] "RemoveStaleState removing state" podUID="26342135-ef05-4da9-ab2c-26cc8c83bfa5" containerName="registry-server" Nov 25 16:11:09 crc kubenswrapper[4806]: I1125 16:11:09.057031 4806 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z7cb8" Nov 25 16:11:09 crc kubenswrapper[4806]: I1125 16:11:09.074596 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z7cb8"] Nov 25 16:11:09 crc kubenswrapper[4806]: I1125 16:11:09.197103 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-784vm\" (UniqueName: \"kubernetes.io/projected/8f98859f-f343-4480-8a55-ec1b5c42b122-kube-api-access-784vm\") pod \"certified-operators-z7cb8\" (UID: \"8f98859f-f343-4480-8a55-ec1b5c42b122\") " pod="openshift-marketplace/certified-operators-z7cb8" Nov 25 16:11:09 crc kubenswrapper[4806]: I1125 16:11:09.197237 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f98859f-f343-4480-8a55-ec1b5c42b122-utilities\") pod \"certified-operators-z7cb8\" (UID: \"8f98859f-f343-4480-8a55-ec1b5c42b122\") " pod="openshift-marketplace/certified-operators-z7cb8" Nov 25 16:11:09 crc kubenswrapper[4806]: I1125 16:11:09.197495 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f98859f-f343-4480-8a55-ec1b5c42b122-catalog-content\") pod \"certified-operators-z7cb8\" (UID: \"8f98859f-f343-4480-8a55-ec1b5c42b122\") " pod="openshift-marketplace/certified-operators-z7cb8" Nov 25 16:11:09 crc kubenswrapper[4806]: I1125 16:11:09.299057 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-784vm\" (UniqueName: \"kubernetes.io/projected/8f98859f-f343-4480-8a55-ec1b5c42b122-kube-api-access-784vm\") pod \"certified-operators-z7cb8\" (UID: \"8f98859f-f343-4480-8a55-ec1b5c42b122\") " pod="openshift-marketplace/certified-operators-z7cb8" Nov 25 16:11:09 crc kubenswrapper[4806]: I1125 16:11:09.299121 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f98859f-f343-4480-8a55-ec1b5c42b122-utilities\") pod \"certified-operators-z7cb8\" (UID: \"8f98859f-f343-4480-8a55-ec1b5c42b122\") " pod="openshift-marketplace/certified-operators-z7cb8" Nov 25 16:11:09 crc kubenswrapper[4806]: I1125 16:11:09.299228 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f98859f-f343-4480-8a55-ec1b5c42b122-catalog-content\") pod \"certified-operators-z7cb8\" (UID: \"8f98859f-f343-4480-8a55-ec1b5c42b122\") " pod="openshift-marketplace/certified-operators-z7cb8" Nov 25 16:11:09 crc kubenswrapper[4806]: I1125 16:11:09.299770 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f98859f-f343-4480-8a55-ec1b5c42b122-catalog-content\") pod \"certified-operators-z7cb8\" (UID: \"8f98859f-f343-4480-8a55-ec1b5c42b122\") " pod="openshift-marketplace/certified-operators-z7cb8" Nov 25 16:11:09 crc kubenswrapper[4806]: I1125 16:11:09.299899 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f98859f-f343-4480-8a55-ec1b5c42b122-utilities\") pod \"certified-operators-z7cb8\" (UID: \"8f98859f-f343-4480-8a55-ec1b5c42b122\") " pod="openshift-marketplace/certified-operators-z7cb8" Nov 25 16:11:09 crc kubenswrapper[4806]: I1125 16:11:09.333357 4806 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-784vm\" (UniqueName: \"kubernetes.io/projected/8f98859f-f343-4480-8a55-ec1b5c42b122-kube-api-access-784vm\") pod \"certified-operators-z7cb8\" (UID: \"8f98859f-f343-4480-8a55-ec1b5c42b122\") " pod="openshift-marketplace/certified-operators-z7cb8" Nov 25 16:11:09 crc kubenswrapper[4806]: I1125 16:11:09.380642 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z7cb8" Nov 25 16:11:09 crc kubenswrapper[4806]: I1125 16:11:09.904536 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z7cb8"] Nov 25 16:11:10 crc kubenswrapper[4806]: I1125 16:11:10.039226 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7cb8" event={"ID":"8f98859f-f343-4480-8a55-ec1b5c42b122","Type":"ContainerStarted","Data":"f04bdbdde18865667af02fafe8a7bae01f305a642dfee07c83bf9bf7ec1560c7"} Nov 25 16:11:10 crc kubenswrapper[4806]: I1125 16:11:10.852458 4806 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rkw8r"] Nov 25 16:11:10 crc kubenswrapper[4806]: I1125 16:11:10.855067 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rkw8r" Nov 25 16:11:10 crc kubenswrapper[4806]: I1125 16:11:10.863826 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rkw8r"] Nov 25 16:11:11 crc kubenswrapper[4806]: I1125 16:11:11.037101 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/456e7062-fafb-4878-82e7-9e9334aa48f7-catalog-content\") pod \"redhat-operators-rkw8r\" (UID: \"456e7062-fafb-4878-82e7-9e9334aa48f7\") " pod="openshift-marketplace/redhat-operators-rkw8r" Nov 25 16:11:11 crc kubenswrapper[4806]: I1125 16:11:11.037185 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/456e7062-fafb-4878-82e7-9e9334aa48f7-utilities\") pod \"redhat-operators-rkw8r\" (UID: \"456e7062-fafb-4878-82e7-9e9334aa48f7\") " pod="openshift-marketplace/redhat-operators-rkw8r" Nov 25 16:11:11 crc kubenswrapper[4806]: I1125 16:11:11.037363 4806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6pg8\" (UniqueName: \"kubernetes.io/projected/456e7062-fafb-4878-82e7-9e9334aa48f7-kube-api-access-g6pg8\") pod \"redhat-operators-rkw8r\" (UID: \"456e7062-fafb-4878-82e7-9e9334aa48f7\") " pod="openshift-marketplace/redhat-operators-rkw8r" Nov 25 16:11:11 crc kubenswrapper[4806]: I1125 16:11:11.050697 4806 generic.go:334] "Generic (PLEG): container finished" podID="8f98859f-f343-4480-8a55-ec1b5c42b122" containerID="ffb7f665e1c96845e409be07c0cd2105ad1a43f25a6d5915a12ecaf225d5445c" exitCode=0 Nov 25 16:11:11 crc kubenswrapper[4806]: I1125 16:11:11.050779 4806 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7cb8" event={"ID":"8f98859f-f343-4480-8a55-ec1b5c42b122","Type":"ContainerDied","Data":"ffb7f665e1c96845e409be07c0cd2105ad1a43f25a6d5915a12ecaf225d5445c"} Nov 25 16:11:11 crc kubenswrapper[4806]: I1125 16:11:11.139726 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/456e7062-fafb-4878-82e7-9e9334aa48f7-catalog-content\") pod \"redhat-operators-rkw8r\" (UID: \"456e7062-fafb-4878-82e7-9e9334aa48f7\") " pod="openshift-marketplace/redhat-operators-rkw8r" Nov 25 16:11:11 crc kubenswrapper[4806]: I1125 16:11:11.139850 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/456e7062-fafb-4878-82e7-9e9334aa48f7-utilities\") pod \"redhat-operators-rkw8r\" (UID: \"456e7062-fafb-4878-82e7-9e9334aa48f7\") " pod="openshift-marketplace/redhat-operators-rkw8r" Nov 25 16:11:11 crc kubenswrapper[4806]: I1125 16:11:11.139892 4806 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6pg8\" (UniqueName: \"kubernetes.io/projected/456e7062-fafb-4878-82e7-9e9334aa48f7-kube-api-access-g6pg8\") pod \"redhat-operators-rkw8r\" (UID: \"456e7062-fafb-4878-82e7-9e9334aa48f7\") " pod="openshift-marketplace/redhat-operators-rkw8r" Nov 25 16:11:11 crc kubenswrapper[4806]: I1125 16:11:11.140287 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/456e7062-fafb-4878-82e7-9e9334aa48f7-catalog-content\") pod \"redhat-operators-rkw8r\" (UID: \"456e7062-fafb-4878-82e7-9e9334aa48f7\") " pod="openshift-marketplace/redhat-operators-rkw8r" Nov 25 16:11:11 crc kubenswrapper[4806]: I1125 16:11:11.140710 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/456e7062-fafb-4878-82e7-9e9334aa48f7-utilities\") pod \"redhat-operators-rkw8r\" (UID: \"456e7062-fafb-4878-82e7-9e9334aa48f7\") " pod="openshift-marketplace/redhat-operators-rkw8r" Nov 25 16:11:11 crc kubenswrapper[4806]: I1125 16:11:11.549569 4806 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6pg8\" (UniqueName: \"kubernetes.io/projected/456e7062-fafb-4878-82e7-9e9334aa48f7-kube-api-access-g6pg8\") pod \"redhat-operators-rkw8r\" (UID: \"456e7062-fafb-4878-82e7-9e9334aa48f7\") " pod="openshift-marketplace/redhat-operators-rkw8r" Nov 25 16:11:11 crc kubenswrapper[4806]: I1125 16:11:11.788157 4806 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rkw8r" Nov 25 16:11:12 crc kubenswrapper[4806]: I1125 16:11:12.310833 4806 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rkw8r"]